The future of machine learning in cybersecurity: What can CISOs expect?

August saw the Defense Advanced Research Projects Agency (DARPA) host its first Cyber Grand Challenge – the first hacking competition with no human players. During the event, teams left their systems to find, diagnose and fix software flaws in real time, entirely on their own.

Elsewhere, researchers at MIT are developing machine learning systems that automatically mine dark web marketplaces for vulnerabilities and zero-day exploits and report back on what they find, software that automatically fixes buggy code, and a platform that can predict 85% of cyber-attacks.

Machine learning, deep learning, and Artificial Intelligence (AI) are hot topics at the moment, and while there’s plenty of research going on, there are also practical applications that can be deployed right now to make life easier for cybersecurity professionals.

A glut of new start-ups, from the likes of Darktrace, Cylance, Deep Instinct, and HackerOne, plus established players such as FireEye, IBM, and Forcepoint, are all working on bringing self-learning systems into the world of security.


Why is machine learning starting to make its way into cybersecurity?

There are plenty of reasons. Reports of skills shortages in tech are rife, but the problem is especially acute in the security world. Too many tools, too many false positives, and an adversary that is often better equipped and funded mean security workers are overworked and unable to keep the cyber-criminals from breaching the gates.

Hackers have the advantage of the unknown, and attacks often go unnoticed for months. OPM, Sony, Ashley Madison: all of these breaches were massive, creating headlines across the world. Machine learning offers a way for organisations to parse real-time information from across their network and, in theory, see attacks as they happen – or even prevent them before they start.

“The cybersecurity industry is always changing as hackers frequently change tactics, reuse old tricks that haven’t been used in years, or launch completely new, never-seen-before malicious campaigns. As a result, cybersecurity professionals are always trying to keep up,” says Jeb Linton, Watson Chief Security Architect at IBM. “Machine learning offers the industry a strong tool for staying up to date on these changing tactics; it can help reduce the time to understand that there is potential malicious activity in a network, or even uncover where a network is most vulnerable.

“According to the Ponemon Institute, security analysts waste nearly 21,000 hours a year in chasing false positives within their network. If machine learning helps reduce these wasted hours, then security professionals can spend more time focusing on true threats and taking the right actions to remediate them.”

Wherever there are large amounts of data that need crunching and acting on, machine learning and AI have a role to play. And in the security world, where there are millions of new vulnerabilities, security events, malware strains, downloads, and users doing all kinds of odd things that systems don’t like, the space is ripe for a revolution. UK start-up Darktrace, for example, uses machine learning to understand the individual behaviour of each network and will flag up unusual activity in real time. Acuity’s BluVector product hunts Advanced Persistent Threats (again, learning as it goes), while Israel’s Deep Instinct uses deep learning (machine learning designed to mimic the neural patterns of our own brains, the same approach used by IBM’s Watson) to create a file-scanning tool that can detect unknown variants of malware more successfully than traditional vendors.
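None of these vendors publish their models, but the core idea behind behaviour-based tools like Darktrace’s is straightforward: learn a baseline of “normal” activity, then score deviations from it. Below is a minimal, hypothetical sketch of that approach using scikit-learn’s IsolationForest on made-up network-flow features; it illustrates the general anomaly-detection technique, not any vendor’s actual method.

```python
# Minimal anomaly-detection sketch: learn "normal" network behaviour,
# then flag deviations. Features are invented; not any vendor's method.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assume each row is one network flow: [bytes_sent, bytes_received,
# duration_seconds, distinct_ports_contacted] - entirely hypothetical.
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[500, 2000, 30, 3],
                            scale=[100, 400, 10, 1],
                            size=(5000, 4))

# Train on a window of known-good traffic to model the baseline.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new flows as they arrive: -1 means anomalous, 1 means normal.
new_flows = np.array([
    [520, 2100, 28, 3],      # looks like the baseline
    [50, 90000, 600, 200],   # exfiltration-like outlier
])
print(model.predict(new_flows))   # e.g. [ 1 -1]
```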

While their specific tasks all differ, they share some common traits: everything is real-time, and everything is designed to simplify complex tasks to make the CISO’s job easier. Identity analytics, malware detection, and eventually incident response are just three key areas within security where machine learning could bring major benefits.

“I’ve no doubt at all that every single facet of the cyber security industry is going to be changed by the developments in machine learning and, in future, AI. What we’ll see is resources moving more into dealing with risks to the business, rather than a million lines of log files in a day, or trying to guess how the salesforce should operate and what exactly any individual person should and shouldn’t be doing at a detailed level,” says Darktrace’s Director of Technology Dave Palmer, when asked if we could one day end up with something akin to a unified AI security stack that handles every aspect of a business’ defences.

“So really focusing on what the risks are to the business, telling the systems what to prioritise and what the key services are in the business, rather than sitting there waiting for the firewalls and the log-in systems and the SIEMs to start giving them alerts that they have to respond to in human time.”

IBM is one of the companies that want to move into more proactive security. According to Linton’s vision, Watson will eventually be able to take the intelligence it has gathered – including automatically ingesting newly published security research papers – and “apply it to a network it monitors and help security analysts plug potential vulnerabilities that a new threat may take advantage of before it even happens.”

For the majority of these systems, current deployments are generally within larger enterprises: financial institutions, healthcare companies, mobile network operators, government agencies, your Fortune 500 types. Almost everyone we speak to feels that wider adoption – down to SMBs and even the consumer market – is around three to five years away.

“We’re at the beginning part of the hype cycle,” says John Lucker, Deloitte’s Global Advanced Analytics & Modelling Market Leader. “In order for it to spread to smaller enterprises, and ultimately to the consumer market for home use, it’s going to have to get a lot easier, a lot friendlier, and a lot more foolproof.”

Lucker also warns that these new security companies’ rush to market is ultimately harming the prospects for wider adoption. “These companies have so much rush to market that they tend to introduce buggy, sloppy designs, and sloppily-coded products.”

“They really need to focus on quality control, because what happens is they take a step forward, then they take five steps backward when they introduce things that end up being buggy or spotty in various design issues, and they create kind of a gag reflex in the marketplace.”


Are CISOs worried about this trend?

A survey of over 500 software developers by Evans Data Corp found nearly a third were worried about automation taking their jobs – a fear greater than failing platforms, pensions, bad management or out-of-date skillsets. So does the prospect of HAL 9000 taking care of security threats on their behalf scare the bejesus out of CISOs and other cyber-pros?

“If you go to Reuters or Disney or Deutsche Bank or any other huge company, you could face somewhere between 20 and 40 defence solutions,” says Nadav Manan, co-founder and VP of R&D at Deep Instinct. “Just imagine the amount of logs and alerts that you are getting from each and every one of them – it’s not good enough. Plus all the maintenance and the costs and all the effort that you are putting in; it’s endless.

“We are meeting those engineers and CISOs face-to-face, and they are super excited because, for the first time, the machine can make the most important decisions by itself.”

While most of the people we’ve spoken to for this piece agree there may be fewer jobs in the future, no one is currently under threat. Kris Lovejoy, President and Chief Executive of Acuity Solutions Corporation, believes there will be a ‘change in the dynamics’ of how security is run. The lower-level, ‘eyes on glass’ analysts, who are there merely to log incidents and discard the false positives, may find their services required less and less as the systems learn what’s right and wrong. The people above them, who provide the initial analysis of what kind of threat the business is facing, will become more valuable.

“People don’t realise how much horrible manual labour and just waiting around is involved in these kinds of investigations,” she says. “A lot of the tools that are coming out today are automating and orchestrating key processes within the security operations centre that just haven’t been automated before.”

While their jobs might be safe in the long run, the CISOs of the future could look quite different.

“The type of people we need to be able to leverage machine learning are significantly different from the traditional engineer who had to know how to configure, manage and administer systems,” says Adnan Amjad, Vigilant Cyber Threat Management practice leader at Deloitte. “You need a whole lot of people who understand advanced analytics, who understand data. But you don’t have a lot of people who understand security, risk, data, and analytics together, so what we’re seeing a lot of companies do is take the data scientists and the analysts who correlate data and teach them cyber and risk.”


Could this all be a case of snake oil?

Eugene Kaspersky, cyber expert and founder of his eponymous security company, recently warned of vendors peddling AI snake oil: many make grand promises, but few actually explain how their technology works or provide much in the way of verified results.

“I agree 100%,” says Acuity’s Lovejoy. “Two years ago, everybody was using the concept of security intelligence; now, if you went to RSA, everything is machine learning.” She says that asking a vendor whether the system actually learns on-site, or whether data has to be shipped back to HQ for processing, is a good test of the “self-learning” tag – but ultimately these new vendors need to be more open.

“Not a lot of organisations are putting their money where their mouth is,” she says. “I think that we've really got to evolve to a place where we're validating the marketing hype with the actual practical lab testing results.”

Deloitte’s Lucker is also wary that the market is bloated with peddlers of false promises. “I’m personally a bit of a sceptic around some of the tools I’ve been seeing,” he says. “They are often a fairly generic or public domain suite of algorithms that are available either through open source mechanisms or out in the academic world, but wrapped inside of a pretty software wrapper and then hyped to be something akin to rocket science.”

According to CB Insights, funding for cybersecurity companies has risen 235% in the last five years, totalling $3.8 billion. Given the rising cost of cyber-crime, there’s no shortage of investment going around and plenty of start-ups bigging up their machine learning credentials.

“Security is over-capitalised,” Lovejoy says. “I know that’s weird to say as the leader of a start-up, but there are way too many organisations out there with a lot of money that are selling snake oil.” Instead, she would like to see venture capitalists ignore the marketing hype and do more technical due diligence before they invest, and more companies seek third-party evaluation or even government certification, such as the DHS anti-terrorism technology designation.

“I am hoping, hoping, that instead of the thousands of start-ups all pre-revenue with the greatest solution known to man coming out once a week, we can get to some number of security companies out there that actually provide value.”

Darktrace, one of the most prominent start-ups in the AI security space, recently raised $65 million in its latest funding round and has taken in well over $100 million to date. Palmer, the company’s Director of Technology, agrees the market is due for upheaval. “Not necessarily in cyber security, but across the IT industry, we’ve already seen people bring in AI skills by hoovering up smaller AI and machine learning companies.

“Certainly there is a prediction from a lot of places, particularly loudly from Morgan Stanley, that the cyber security industry is ripe for a huge amount of consolidation.”

Not only will we see some of the start-ups die off, but the incumbents may find themselves in difficult situations if they don’t adapt. “Some of the things we've done in the past really lend themselves to machine learning and AI approaches; some organisations will be able to pivot what they do to take advantage of machine learning and AI quite readily. But some things might just die off.

“The way that we’ve done antivirus in the past might not be relevant anymore in five years’ time. The way that we have been sending threat intelligence, I’m sure, will get automated, triaged and maintained in a more effective, entirely automated fashion.”


What about AI hackers attacking AI networks?

It is, of course, possible to create your own machine learning programs. Speaking at this year’s InfoSecurity Europe, Forcepoint’s Neil Thacker explained that there is no shortage of open source AI frameworks – TensorFlow, Caffe, MXNet, CNTK, Torch, Theano, and Chainer, to name a few – while graphics cards and cloud computing are relatively cheap. Developing an understanding of how the frameworks work, and what you need them to do, is harder to get to grips with, but there are courses available to help you get started. He did, however, reaffirm that he feels most AI in security will remain in the analytics area and be designed mainly to help human workers with decision making.
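To give a sense of how low the barrier to entry is, the sketch below trains a toy classifier in TensorFlow’s Keras API on entirely synthetic “security event” features. Everything here – the features, the labels, the architecture – is made up for illustration; a real detector would need real labelled data, careful validation and far more domain engineering.

```python
# Toy TensorFlow/Keras classifier on synthetic "security event" data.
# Purely illustrative: features, labels and architecture are all invented.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8)).astype("float32")  # 8 invented features per event
y = (X.sum(axis=1) > 0).astype("float32")         # synthetic benign/suspicious label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(event is suspicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[:3], verbose=0))  # probabilities for three sample events
```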

But of course, anything you can do, cyber-criminals can do too. If he were criminally inclined, Darktrace’s Palmer says, he would infect a computer through whatever traditional avenue he could, then use natural language processing across the entire device to learn not only how the victim talks to individual people, but which channel they use to communicate with each contact – Facebook, email, WhatsApp and so on – and then tailor a super-specific-but-still-automated spear-phishing communiqué to each person over their preferred channel. An example would be taking an existing meeting from a calendar, teaching the software to send automated messages about rebooking to an alternative location, and including a phony link to the venue or a map.

“We could develop a neural network that we can pre-train to do all of those different activities with today’s technologies,” he says. “If we couldn’t do it today, I bet we could do it within the next 12 months, and we would have the perfect, semi-autonomous piece of malware that, once we’ve got one person, will spread like wildfire through all their contacts, and then all of their contacts’ contacts.”

When asked if we could eventually have AI hackers attacking AI networks, Palmer says it is a “really realistic prospect.” However, while nation-state versus nation-state hacking will remain a high-level game of cat and mouse, the rest of us should benefit.

“Most attacks simply work because they do something that defenders weren't capable of thinking about in advance, and when the AIs can see those unusual things happening and block them on our behalf, then the ability to be an attacker is much reduced.”

IBM’s Linton is also optimistic: “Cybercriminals generally lack the deep research capabilities needed to build a system at the same level as what’s being done today with advanced machine learning and cognitive systems. This is part of what makes this arena so exciting – that we’ll be providing security analysts with access to a technology that will truly give them a leg up over cybercriminals.”

Deloitte, however, take a different stance. “We may have an advantage for a defined period of time,” says Amjad. “But I don’t think that the ability to leverage machine learning will be the exclusive domain of large enterprises and companies.

“The criminal element, the people who wish to do us harm, they’ll find easier, faster and cheaper ways to be able to get there.”

Both he and Lucker agree that the availability of open source frameworks, cheap computing power, plus the large amounts of previously stolen data that could be used for training purposes, mean there’s little to stop a well-organised cybercrime operation from benefitting from advances in machine learning.

“I also think that some of the bad actors are, in some cases, linked to state actors,” he says. “And when you pool those types of resources, it means if I’m at a large agency in a European or US government, I would think my adversary probably has access to machine learning infrastructure and tools.

“If it's some guy sitting around drinking caffeine drinks in his apartment, maybe it's a different issue to a state sponsored bad guy.”

