BlackBerry's Cylance AI Antivirus Defeated by Embarrassing Universal Bypass

Skylight Cyber researchers from Australia have demonstrated what they describe as the first universal method to trick an “artificial intelligence-based antivirus” into classifying malware as harmless software and letting it run on supposedly secured machines.

The AI antivirus in question is CylancePROTECT, which BlackBerry purchased last year with plans to integrate Cylance’s AI antivirus technology into its Spark communications platform for the Internet of Things (IoT). The recent discovery points to a potential speed bump in BlackBerry’s pivot from smartphones to IoT security.

Creating Universal Bypasses for AI Antivirus

The Skylight researchers said that by analyzing Cylance’s engine and neural network model, they were able to see that the AI antivirus made heavy use of string analysis and exhibited a bias toward a particular video game.

The security researchers then turned this bias against the antivirus -- they appended a selected list of strings to malicious files that would normally be detected, and those files then evaded detection. They noted that this method worked for 100% of the top 10 malware samples (for May 2019) and for 90% of a larger sample of 384 malware files.
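In rough terms, the technique amounts to appending benign-looking data to the end of an executable, which typically doesn’t change how the file runs but does change what a string-based scanner sees. Here is a minimal, hypothetical Python sketch of that general idea; the strings below are invented placeholders, not the list the researchers actually extracted from Cylance’s engine:

```python
# Illustrative sketch only. BENIGN_STRINGS is a hypothetical placeholder;
# the actual strings Skylight used were derived from Cylance's own model
# and are not reproduced here.
BENIGN_STRINGS = [
    b"placeholder_benign_string_1",
    b"placeholder_benign_string_2",
]

def append_benign_strings(input_path: str, output_path: str) -> None:
    """Copy a file and append benign-looking strings to the end.

    Data appended past a Windows executable's declared sections (the
    "overlay") generally doesn't change how it loads and runs, but it
    does change the raw bytes a string-based scanner analyzes.
    """
    with open(input_path, "rb") as src:
        payload = src.read()
    with open(output_path, "wb") as dst:
        dst.write(payload)
        dst.write(b"\n".join(BENIGN_STRINGS))
```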

Malicious hackers routinely create new malware based on code from older malware that can still bypass modern antivirus tools. However, that requires a significant amount of work for every new piece of malware. The difference here is that, by discovering the AI antivirus’ weakness, bad actors can easily torpedo machines protected by such an antivirus with all sorts of slightly modified malware.

How AI Can Easily Be Fooled

With a single piece of research, the Skylight security experts showed just how catastrophically vulnerable AI-based antivirus tools like CylancePROTECT can be, before they even have a chance to become popular in the cybersecurity industry.

The researchers offered an easy-to-understand analogy for why AI-based antivirus solutions can be so easy to trick. If an AI were trained to tell the difference between birds and humans, it would eventually learn that one of the primary differences is that birds have beaks and humans don’t.

'Eureka,' the AI vendor might say. Now the vendor can assume that because its AI is highly effective at spotting the features that distinguish birds from humans, it should be able to tell a picture of one from the other. That's the logic behind AI antivirus: if the AI can look at thousands of existing samples of malware and correctly identify the vast majority of them, then the vendor can presume it will be highly effective at detecting similar malware.

However, according to the Skylight researchers, malware makers are not “wooden dummies” -- they fight back and can come up with clever tricks that are easy to implement and can completely confuse the AI. In the bird versus human example, if a human wore a mask with a bird beak, they could be mistaken for a bird.
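A toy Python model makes that fragility concrete. The feature names and decision rule below are invented for illustration; no real classifier is this simple, but the failure mode is the same:

```python
# Toy model of the bird-vs-human analogy (features and rule are invented).
# The "classifier" has learned to key almost entirely on one feature.

def classify(features: dict) -> str:
    # Training data made "has_beak" the dominant signal, so the model
    # effectively ignores every other feature.
    return "bird" if features["has_beak"] else "human"

human = {"has_beak": False, "has_feathers": False, "wears_clothes": True}
print(classify(human))  # -> human

# The adversarial move: flip only the feature the model over-weights.
masked_human = dict(human, has_beak=True)  # a human wearing a beak mask
print(classify(masked_human))  # -> bird (misclassified)
```

Skylight’s bypass worked the same way: add the signal the model over-weights (benign-looking strings) to an otherwise unchanged malicious file.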

The AI model could be retrained to take other features into account, but those could be fooled as well, which is why the researchers concluded that AI antivirus is nowhere near the cybersecurity silver bullet that vendors have promised it would be.

Should We Just Kill the AI Antivirus Hype Now?

We’ve seen in different industries that AI tends to be highly optimized (or biased) for a particular set of features, depending on the data it was given. It also seems difficult to give the AI the right balance of data covering all aspects of a problem area.

Vendors of future AI technologies will likely struggle in perpetuity to find the right balance when training their AI solutions. Similarly, security experts will have to keep fighting bad actors and exploits against their system and app protections indefinitely.

For businesses, AI antivirus seems problematic, not just because it can be universally bypassed by malware once whatever bias is baked into its model is found, but also because AI-based security solutions need to be fed virtually all of the data that passes through a company’s network.

This turns the AI antivirus into yet another potentially enormous liability in terms of both privacy and security. The AI antivirus becomes a single point of failure: if its servers are hacked, all of that enterprise customer data could end up in the hands of bad actors.

However, as long as adding the buzzword "AI" to a product's description increases the hype around it, security vendors will likely continue to benefit from such marketing tactics, even as integrating such AI solutions could weaken enterprise networks.

Lucian Armasu
Lucian Armasu is a Contributing Writer for Tom's Hardware US. He covers software news and the issues surrounding privacy and security.
  • redgarl
    Yeah... but AI is able to learn from its mistakes... what you don't seem to be able to do yourself...

    BB will update its software periodically, I am not afraid. It will get better with time.
  • bit_user
    redgarl said:
    Yeah... but AI is able to learn from its mistakes...
    Yeah, but at whose expense?

    redgarl said:
    BB will update its software periodically, I am not afraid. It will get better with time.
    This already happens with conventional antivirus. I think the key selling point of AI-powered antivirus is that it doesn't need to be told about new malware - it will recognize them without requiring updates.

    The fatal flaw with this approach seems to be:
    malware makers are not “wooden dummies” -- they fight back and can come up with clever tricks that are easy to implement and could completely confuse the AI

    If the malware makers have access to the AV software, then they can keep trying different things until it passes. Suddenly, your AI-powered AV software is now no better than conventional AV software - once again, requiring patches before it's able to recognize new malware.