An introduction to AI for security professionals

We interview one of the authors of Cylance’s “Introduction to Artificial Intelligence for Security Professionals” and offer the full book for download.

At the end of 2017, cybersecurity firm Cylance published a new book entitled “Introduction to Artificial Intelligence for Security Professionals”, written by its data science team. IDG Connect has secured a copy of the full book for download and also caught up with data scientist Andrew Davis, below.


What prompted you to write this book? Who is the main target audience?

The book is written for security professionals who may not be aware that many tedious (or interesting!) aspects of their day-to-day work might be assisted with AI methods. With many easy-to-use AI software packages, MOOCs, informative YouTube channels, and so on, security professionals should feel empowered to pick up these tools to more effectively fight cybercriminals.


What makes this book unique?

This book began as an informative pamphlet to hand out at security conferences. It was designed to be a non-academic, friendly, example-driven reference to illustrate various use cases of AI in a cybersecurity context.


What other sources would you recommend for those who wish to get into this subject?

Chris Bishop's “Pattern Recognition and Machine Learning” [PDF] is an excellent reference.


What confuses people most about AI in cybersecurity?

There is a healthy amount of skepticism surrounding applications of AI in cybersecurity, given AI's ascent to “buzzword” status in the industry over the past two or three years. However, there are well-defined problems solvable with today's AI methods that clearly help with both short- and long-term problems in security. In areas such as malware detection, clustering, and intrusion detection, where hand-written rules already solve many problems, taking advantage of large data sets and training models to separate nominal from malicious is an appealing option. There are many other potential applications: using AI models to help fuzz programs so that exploits in the software we all use can be found and patched more quickly; models that locate machine code buried in a binary, invisible even to excellent disassemblers; models that de-obfuscate gibberish JavaScript back into a comprehensible form. AI continues to find novel applications in this space, despite its frequent abuse as a meaningless buzzword.
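The “train models to separate nominal from malicious” idea described above can be sketched with a toy classifier. The feature names and numbers below are invented purely for illustration; a real detection pipeline would extract far richer features and use a properly trained and validated model.

```python
import numpy as np

# Hypothetical per-file feature vectors: (entropy, import count, packed-ness).
# Both the feature set and the numbers are invented for illustration only.
benign = np.array([[4.1, 30.0, 0.20],
                   [3.8, 45.0, 0.30],
                   [4.5, 28.0, 0.10]])
malicious = np.array([[7.6, 5.0, 0.90],
                      [7.9, 3.0, 0.80],
                      [7.2, 8.0, 0.95]])

# "Training" a nearest-centroid model: one mean feature vector per class.
centroids = {0: benign.mean(axis=0), 1: malicious.mean(axis=0)}

def classify(x):
    """Return 0 (benign) or 1 (malicious) by distance to each class centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(np.array([7.5, 4.0, 0.85])))  # lands near the malicious centroid
```

A real pipeline would normalize feature scales, use many more labelled samples, and hold out data to measure false-positive rates, but the core idea is the same: learn a decision boundary from labelled examples instead of hand-writing rules.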


Is there anything you find yourself repeating over and over again?

While there have been fantastic advances in the field as of late, the algorithms used are not fundamentally different from the ones being used one or two decades ago. In other words, machines are no more generally intelligent or aware than they were five years ago. There are still basic problems being solved, such as reasoning about uncertainty and learning a new concept from minimal examples, and even solving these challenges is unlikely to bring us much closer to something resembling actual intelligence.



Is it a challenge to get buy-in for AI from senior management, or is it easy given how much the topic is trending?

Not in my experience.


How far advanced is adversarial machine learning?

The adversarial applications of machine learning are shaping up to be quite interesting. There are already real-world proof-of-concept attacks on image classifiers, where an object shaped like, say, a turtle is wrapped with a special texture, at which point it is misclassified as a rifle, despite neither the object nor the texture looking anything like a rifle to a human. Such PoCs call into question the security and reliability of self-driving cars: what would happen if one were to print a similar fooling example onto a sticker, and place the sticker onto a stop sign so that the stop sign is no longer recognized? As approaches based on AI continue to see adoption across various industries, practitioners need to be vigilant in defending against known vulnerabilities, and protecting against potential unknown ones.
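Many of the fooling attacks described above follow a “fast gradient sign” recipe: nudge each input feature slightly in the direction that increases the model's loss. A minimal sketch of that idea against a toy logistic-regression model (all weights and inputs below are made up; real attacks of this kind target deep image classifiers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "classifier": predicts class 1 if sigmoid(w @ x + b) > 0.5.
# Weights and bias are arbitrary values chosen for this illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

# A "clean" input the model assigns to class 1 (w @ x + b = 0.75 > 0).
x = np.array([0.9, 0.2, 0.3])

# For logistic regression with true label y = 1, the gradient of the loss
# with respect to the input is (p - y) * w; its sign tells the attacker
# which direction to nudge each feature to make the model more wrong.
p = sigmoid(w @ x + b)
grad = (p - 1.0) * w

# Fast gradient sign step: a small, bounded perturbation of each feature.
eps = 0.6
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # the small nudge flips the prediction
```

The perturbation is bounded per-feature by `eps`, which is what makes such attacks hard to spot: each input changes only slightly, yet the model's decision flips.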