What might ‘bad guy’ machine learning mean for security?

Recent breaches show many companies still fail at the absolute basics of cybersecurity. Workplace devices are routinely infected via phishing scams, and (often simple) malware makes it through to the corporate network. More worryingly still, organisations take months to spot intruders, and once a problem is detected there is often no proper plan in place to deal with the situation.

To counter this, any number of big-data-crunching, machine learning solutions have popped up to detect threats. These include the likes of Darktrace, Cylance and Vectra Networks, which scan the network for oddities. The flip side, however, is that they also open the door for ‘the bad guys’ to use the same techniques.

“It's interesting we talk about the promise of machine learning or AI as an industry but I think it also holds a promise to our adversaries,” suggested Roark Pollock, senior VP of marketing at security firm Ziften, at a recent press and analyst security debate in Silicon Valley. “It's a tool that can be used by both sides and at the end of the day this is potential for a stalemate if we're just using it to play a cat and mouse game.”


This looks likely to ramp up in the near future, and Anup Ghosh, chief strategist of Next Gen Endpoint at Sophos, believes that we will see a “rapid adoption of machine learning for adversarial purposes” over the next 12 to 18 months.

There hasn’t been too much of this to date, added Pollock, because “it's so easy for people to get into our networks” as defence is poor. “The easy stuff still works and so we haven't forced them, the adversaries, to have to adopt machine learning or any other advanced techniques to get in.”

The rise of extremely high-profile security breaches is likely to raise the bar. Every organisation knows it has to remain vigilant: nobody wants to be the next Equifax. Yet every cybercriminal on the dark web wants to continue making a living from corporate theft.

People will use machine learning to get better and more efficient at stealing people's credit card information, explained John Michelsen, chief product officer at Zimperium. “So what we're seeing now is exploratory work: how do you develop better phishing campaigns so that you get over a 20 per cent click-through rate? If I can get to a 40 per cent click-through rate I've made money.”

In the same way, there have been a lot of technology innovations in software testing, he added, including the use of machine learning to find vulnerabilities in software. However, this kind of approach can be used by people on either side of the fence.
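The underlying idea of automated vulnerability discovery can be illustrated with a far simpler cousin of the ML-assisted approaches Michelsen alludes to: a random mutation fuzzer. The sketch below is purely illustrative; `fragile_parser` is a hypothetical buggy target invented for the example, not a real tool or API.

```python
import random

def fragile_parser(data: bytes) -> str:
    # Hypothetical buggy target: "crashes" whenever the first byte is 0xDE.
    if data and data[0] == 0xDE:
        raise ValueError("parser crash")
    return "ok"

def mutate(seed: bytes) -> bytes:
    # Flip a handful of random bytes in the seed input.
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 200_000):
    # Throw mutated inputs at the target until one triggers an exception.
    random.seed(0)  # reproducible run for the example
    for i in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception:
            return i, case  # found a crashing input
    return None

result = fuzz(fragile_parser, b"A" * 8)
```

Real fuzzers add coverage feedback, and the machine learning angle is to learn which mutations are most likely to reach new code paths; the point of the toy version is simply that the whole loop is automated and equally useful to defenders and attackers.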

“I don't think it's mainstream yet,” said Michelsen. “I do think people who are looking at industrialising hacking for commercial purposes will definitely use this technology because it makes them far more efficient. If you're a nation state you're already using this technology because you're trying to find [0-days].”

The conversion rate problem in malware campaigns is similar, in a lot of ways, to the one faced by marketers (except the former don’t have to worry about being too spammy).

“If you're in that business you've got infrastructure for developing botnets,” said Ghosh. “You're leveraging micro targeting for malvertising, you're automating so much of this process and you are looking at conversion rates.

“Ransomware is a conversion rate problem. So to that extent you can use machine learning to craft really good campaigns, whether it's Twitter, Facebook, email, to get humans to click on links and the evidence is out there that actually machine learning is far better at crafting emails and Tweets that get humans to click on these. You will see that rapid adoption.”
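Ghosh’s framing can be made concrete with a toy funnel model of a phishing-led ransomware campaign. All figures below are hypothetical, chosen only to echo the click-through rates Michelsen mentions; this is a sketch of the economics, not data about any real campaign.

```python
# Toy funnel: emails -> clicks -> infections -> ransom payments.
# Every rate and price here is a made-up illustrative figure.

def expected_revenue(emails_sent, click_rate, infection_rate, pay_rate, ransom):
    """Expected payout of a campaign as a simple multiplicative funnel."""
    clicks = emails_sent * click_rate
    infections = clicks * infection_rate
    payments = infections * pay_rate
    return payments * ransom

# Moving the click-through rate from 20% to 40% doubles expected revenue
# with no extra sending cost -- which is exactly the incentive to use
# machine learning to craft more convincing lures.
baseline = expected_revenue(100_000, 0.20, 0.10, 0.05, 500)
improved = expected_revenue(100_000, 0.40, 0.10, 0.05, 500)
```

Because the funnel is multiplicative, improving any single stage scales the whole payout linearly, and the lure text is the cheapest stage to optimise.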

At present it is hard to know how much of this is going on because we just see the “end result”, said Oliver Tavakoli, CTO at Vectra. In some ways it doesn’t matter how it is being done, of course, but “as machine learning comes along and automates that process, what we're basically talking about is being aware that there is a tidal wave of it heading our way.”

This means that if we see signs of automation being used, the game has been “upped by a couple of orders of magnitude” in terms of quantity, scale and speed, he said. “We already see a lot of people playing around with things like VirusTotal where you have huge archives of malware and you have engines that basically are run against that malware.”

This is no silver bullet, though. As Ron Green, executive VP and CSO at Mastercard, concluded: “As we roll out AI or machine learning in more of the practices that we use to protect our environments, the adversaries will have to, more and more frequently, use those techniques to try and defeat what we put in place.”

