
Welcome to the world of adversarial machine learning

From startups such as Darktrace, Cylance, and ZoneFox to established giants like FireEye and IBM, there are few companies in the security space today that don't claim to use Machine Learning or Artificial Intelligence in some way or another.

And there's good reason. Once you brush aside the "me too" marketing hype – of which there is no shortage – Machine Learning has the potential to help automate processes, reduce the number of false positives, and generally make life easier for the overworked and often beleaguered security professional.

But as interest in and use of Machine Learning for security purposes increases, so too will awareness among hackers and cybercriminals. That inevitably leads to attackers trying to counter these technologies any way they can. And for companies looking to deploy their own Machine Learning-based systems for security use, this could lead to problems if they're not careful.
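To see what "countering" an ML-based defense can look like in practice, here is a deliberately simplified sketch of feature-space evasion against a toy malware classifier. Everything in it – the feature names, weights, and threshold – is hypothetical and for illustration only; it is not drawn from any real product mentioned in this article.

```python
import math

# Hypothetical toy detector: logistic regression over two illustrative
# file features. All weights and feature names are made up for this sketch.
WEIGHTS = {"entropy": 2.0, "imports_suspicious": 1.5}
BIAS = -3.0

def malicious_score(features):
    """Return the probability-like score the toy detector assigns."""
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A sample the detector flags (score above the 0.5 threshold).
original = {"entropy": 1.8, "imports_suspicious": 1.0}

# Attacker-style evasion: pad or restructure the file so a measured
# feature (here, entropy) drops, while the payload's behaviour is unchanged.
evaded = {"entropy": 0.4, "imports_suspicious": 1.0}

print(malicious_score(original) > 0.5)  # detector flags the original
print(malicious_score(evaded) > 0.5)   # same payload now slips under the threshold
```

The point of the sketch is that a classifier trained on static features can be gamed by manipulating those features directly, without changing what the malware actually does – one of the reasons, as Vigna notes below, that techniques borrowed from image or language processing don't transfer cleanly to the security domain.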

“I see people taking machine learning techniques that we have been using in image processing and language processing and transferring them directly to the malware or the security domain,” says Professor Giovanni Vigna, CTO and co-founder of security startup Lastline. “And that doesn't work for a number of reasons.”

Vigna co-founded the California-based Lastline in 2011 to focus on breach detection and sandboxing technologies. He is also a Professor in the Department of Computer Science at the University of California, Santa Barbara, and part of the Shellphish team, which took third place at the DARPA Cyber Grand Challenge last year.

Dan Swinhoe

Dan is a journalist at CSO Online. Previously he was Senior Staff Writer at IDG Connect.
