What might 'bad guy' machine learning mean for security?

Recent breaches show that many companies still fail at the absolute basics of cybersecurity. Workplace devices are routinely infected via phishing scams, and (often simple) malware makes it through to the corporate network. More worryingly still, organisations take months to spot intruders, and once a problem is detected there is often no proper plan in place to deal with it.

To counter this, a host of big-data-crunching, machine-learning solutions have popped up to detect threats. These include the likes of Darktrace, Cylance and Vectra Networks, which scan the network for oddities. The flip side, however, is that the same techniques also open the door for ‘the bad guys’.
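To give a flavour of what "scanning the network for oddities" means in practice, here is a deliberately minimal sketch of threshold-based anomaly detection. It is illustrative only, not how any of the named products actually work: real systems use far richer models and features, and all of the traffic figures below are hypothetical.

```python
# Minimal sketch of statistical anomaly detection on network traffic.
# Illustrative only; the numbers and thresholds are hypothetical.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple baseline (mean, standard deviation) from normal traffic."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical per-minute byte counts from a host during normal operation
normal_traffic = [980, 1020, 1010, 995, 1005, 990, 1015, 1000]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(1005, baseline))   # ordinary traffic -> False
print(is_anomalous(50000, baseline))  # exfiltration-sized spike -> True
```

The point of the sketch is the shape of the approach: learn what "normal" looks like, then alert on deviations, which is precisely the behaviour an informed adversary can probe and game.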

“It's interesting we talk about the promise of machine learning or AI as an industry but I think it also holds a promise to our adversaries,” suggested Roark Pollock, senior VP of marketing at security firm Ziften, at a recent press and analyst security debate in Silicon Valley. “It's a tool that can be used by both sides and at the end of the day this is potential for a stalemate if we're just using it to play a cat and mouse game.”


This looks likely to ramp up in the near future: Anup Ghosh, chief strategist of Next Gen Endpoint at Sophos, believes we will see a “rapid adoption of machine learning for adversarial purposes” over the next 12 to 18 months.
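What might "machine learning for adversarial purposes" look like against a threshold-based detector? One hypothetical tactic is the classic "low and slow" evasion: if an attacker can infer the alarm threshold, they shape their activity to stay just under it. The sketch below assumes a made-up per-minute limit; it is an illustration of the cat-and-mouse dynamic, not a real attack tool.

```python
# Hedged sketch of "low and slow" evasion against a threshold-based
# anomaly detector. The safe_limit figure is entirely hypothetical.

def evade(total_bytes, safe_limit):
    """Split a large transfer into per-interval chunks below the alarm limit."""
    chunks = []
    remaining = total_bytes
    while remaining > 0:
        chunk = min(remaining, safe_limit)
        chunks.append(chunk)
        remaining -= chunk
    return chunks

# Suppose the detector flags any per-minute transfer above ~1,040 bytes.
# The attacker drip-feeds a 50,000-byte exfiltration just under that limit.
chunks = evade(50000, safe_limit=1040)
print(len(chunks))          # minutes needed -> 49
print(max(chunks) <= 1040)  # every chunk stays under the threshold -> True
```

This is why Pollock's "stalemate" framing resonates: once defenders deploy learned thresholds, attackers can learn them too, and each side adapts to the other.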
