Good AI vs. evil malware: The cyber arms race starts now

Automation is moving into more and more sectors as intelligent machines, AIs, are deployed across a range of functions. While much of the negative focus has been on job losses and human redundancy, there is another, even more worrying possibility: what if these intelligent machines are used by hackers? In a recent BBC article, Andy Powell, head of cybersecurity at Capgemini UK, warned that “AI will power malware, and will use data from the target to send phishing emails that replicate human mannerisms and content”.

Nathaniel Borenstein, chief scientist at Mimecast, believes that software innovations give the good guys, at best, a 10-minute head start over the bad. He says there is no doubt that any serious malware player is watching carefully as AI technology is deployed on the anti-malware side, and will respond in kind.

“These technological innovations generally cause, at most, a transient change in the balance of power,” he says, adding that while the security industry can use AI to better spot attacks, cybercriminals can use AI to quickly figure out which phishing messages are getting through and which sites are most vulnerable.

“It's an ongoing arms race that no organisation can successfully fight alone. AI will force both sides to invest ever more expertise and computational resources in the battle.”

Tim Stiller, Senior Systems Engineer for Incident Detection and Response at Rapid7, agrees, and says that much of the mainline malware seen every day comes from automated builders and spam networks, which use much of the same infrastructure that security teams leverage for their everyday DevOps. “Think continuous integration/deployment. AI is just the next logical step in malware development and evolution.”

Stiller says that, as technology and defences evolve, so does the adversary. “The adoption of AI to drive phishing campaigns that replicate human behaviour is a concern and likely being developed now, if not already in testing/use.” He foresees social media becoming the primary data source fuelling AI-based targeting: because such data is rich with an individual’s behaviour, habits and interests, it is ideal for building a profile of users to target and achieving a higher success rate.

“Beyond the machine, think of the threat of malware alone. Something that could think/spread on its own, mutate and evolve to forever evade antivirus and other modern day defences.”

The threat, however, is still some way off and may not be quite the game changer some believe. Josh Mitchell, Principal Security Researcher, and Andrew Spangler, Principal Malware Reverse Engineer, both at Nuix, say that in the near term this threat is unlikely, given the effectiveness of current attacker paradigms.

“As such, the attackers have no reason to elevate their attack methodologies to include artificial intelligence. Once network defenders are able to effectively and repeatedly interrupt an attacker's decision-making cycle of observation, orientation, decision, and action (OODA), attackers will be driven to implement force-multiplying technologies such as AI,” they say.

For the Nuix engineers, the low return on investment for the development of a weaponized AI framework does not justify the creation of such a system. “People still click on links they receive if promised an enticing cat video in return.”

There are, however, still specific dangers related to automation, which has long been used by hackers and malware authors. Mitchell and Spangler explain: “In a non-targeted campaign, automation is largely the method by which new targets are identified and infected. An example of this is the recent Mirai botnet that infected vulnerable Internet of Things (IoT) devices. Each new infection scanned the internet for more vulnerable devices to increase its reach.”

Like Spangler and Mitchell, Péter Gyöngyösi, Product Manager at Balabit, also highlights the existing use of automation in malware. “Intelligent malware, self-evolving exploit codes or plain-old auto-generated and dynamically changing spam emails have been around for a while. Of course, AI is a very broad term and the capabilities of today’s most common malwares and attack tools are closer to that of an intelligent thermostat than HAL 9000s, but the trend is clear.”

He explains that the incentive for attackers is simple: signature-based malware detection is widely used, so malicious code needs to adapt and evolve automatically to remain effective. “Malware can be made much more powerful by enabling it to make its own decisions and choose the right weapons based on the detected environment.”
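
To illustrate the fragility behind that incentive, here is a minimal sketch (purely illustrative, not taken from any real product) of naive signature matching. A “signature” here is simply a known byte pattern; a single-byte mutation is enough to slip past it:

```python
# Illustrative sketch: why static byte signatures are fragile.
# Real AV signatures are more sophisticated, but the weakness is the same.

def signature_match(payload: bytes, signatures: list[bytes]) -> bool:
    """Flag the payload if it contains any known byte signature."""
    return any(sig in payload for sig in signatures)

signatures = [b"\xde\xad\xbe\xef"]  # hypothetical known-bad pattern

original = b"header" + b"\xde\xad\xbe\xef" + b"trailer"
mutated  = b"header" + b"\xde\xad\xbe\xee" + b"trailer"  # one byte changed

print(signature_match(original, signatures))  # True  -> detected
print(signature_match(mutated, signatures))   # False -> evades the signature
```

A piece of malware that rewrites one byte of itself on each infection defeats this check entirely, which is why attackers automate exactly that kind of mutation.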

While the form of automation currently employed focuses on the repetition of tasks and large-scale deployment, it does not pose the Skynet-esque threat that many envision when artificial intelligence is mentioned. AI is still in the early stages of its evolution and is far from realising the kind of sophisticated intelligence that science fiction has shown us.

Javvad Malik, Security Advocate at AlienVault, says: “Generally speaking, AI is in the early stages, and there is much room for it to grow. We’re currently seeing more parroted AI, which has its limitations, as Microsoft saw when it released its Twitter chatbot ‘Tay’, which had to be pulled within 24 hours because users were teaching it inappropriate behaviours.”

Malik says that much of AI today automates manual tasks, in that it mines large amounts of data, then uses statistical probabilities and patterns to make predictions or generate responses. “But this will improve as we see more services crop up like the Google-owned platform that allows users to create their own chatbots.”
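
The “parroted” quality Malik describes can be shown with a toy sketch (a hypothetical illustration, not any vendor’s system): a bigram model that predicts each next word purely from statistical patterns it has already observed, so it can only echo its training text:

```python
# Toy "parroted AI": a bigram model that generates text by replaying
# word-to-word transitions observed in its training data.
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict, start: str, length: int = 5) -> str:
    """Walk the observed transitions to produce a short word sequence."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

follows = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(follows, "the"))  # e.g. "the cat sat on the mat"
```

Because the model can only reproduce transitions it has seen, it parrots its input rather than understanding it, which is precisely the limitation Tay exposed.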

He points to breakthroughs made by companies like Automated Insights, with its Wordsmith tool for spreadsheets, or Quill by Narrative Science, which can create narratives and even articles from raw spreadsheet data that are indistinguishable from, or very close to, what a human would produce.

“These technologies and their adoption demonstrate that there is great potential in automating repetitive writing tasks and producing prose that is close to human quality. Given time, as costs come down, it will be likely that this can be adopted for phishing emails, or ‘long con’ type scams which require building trust with a target over a period of time such as through dating sites.”

For Scott Zoldi, Chief Analytics Officer at FICO, the current state of cybersecurity boils down to a cat-and-mouse game played to get around rules- or signature-based monitoring, an inherently reactive approach. By definition, he says, the bad guys are always one step ahead, because the process requires that someone first be hit by a new threat before that threat can be detected, codified and a defence deployed across the ecosystem to protect other organisations from the same attack.

Zoldi says the future lies in AI-driven solutions that can detect and react to new threats that rules-based systems miss. “This is important as malware adapts to new trends continuously and nefarious criminals are always looking to circumvent the detection capabilities of systems based on fixed rules or heuristics. For example, a piece of malware might be adjusted very slightly by an attacker to evade a signature. While the rules wouldn’t trigger an alert, an AI-based model would, because it’s looking at the behaviour of all devices on the network.”
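
A minimal sketch of that behavioural idea (an illustrative assumption, not FICO’s actual method): rather than matching signatures, flag any device whose activity deviates sharply from a baseline of normal behaviour:

```python
# Illustrative behavioural anomaly check: flag activity that deviates
# sharply from a baseline, regardless of whether any signature matches.
import statistics

def is_anomalous(observed: float, baseline: list[float],
                 threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > threshold

# Hypothetical per-device outbound connections per minute
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

print(is_anomalous(14, baseline))    # typical behaviour -> False
print(is_anomalous(400, baseline))   # sudden scanning burst -> True
```

A slightly mutated piece of malware evades a byte signature but still has to behave maliciously, and that behaviour is what a model like this catches.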


Bianca Wright

Bianca Wright is a UK-based freelance business and technology writer, who has written for publications in the UK, the US, Australia and South Africa. She holds an MPhil in science and technology journalism and a DPhil in Media Studies.
