Is Artificial Intelligence helping or hindering IT defenders?

If 2017 taught us anything, it’s that you can’t be complacent about your cybersecurity strategy. And as the driving force behind McAfee’s security research and development, you’d expect Chief Technology Officer Steve Grobman to have more to worry about than most.

“You could spend all day being concerned about almost anything,” he laughs, when I ask him what threats people should be looking out for.

But there is one big issue facing all companies today. How do you deal with the fast-changing threat landscape whilst continuing to protect yourself against the threats you were worried about yesterday?


“That’s creating a lot of new challenges for an IT defender to comprehend in order to protect their environment,” Grobman says.

Artificial Intelligence has become the latest buzzword in cybersecurity spaces and McAfee is now integrating these capabilities into its latest product offerings.

And it’s easy to see why. When technology has been trained properly, it can be very good at processing massive quantities of data and seeing patterns. Unfortunately, what technology is not so good at is using intuition to spot a new attack pattern or recognize an evasion tactic.

“One of our observations is that there are certain types of things machines are good at and there are things that humans are good at, that machines aren't,” Grobman explains. “Where a defense strategy can be most effective is when you have a strategy that has the best elements of both working together.”

To those versed in cybersecurity, it’s well known that the attacker has an inherent advantage over the defender, in part because the attacker can move faster.


“When we want to deploy it [a new product] to our customers we have to develop it, put it through our internal quality assurance cycle, have our customers acquire it. They have to put it through their quality assurance cycle then they have to go through a deployment cycle,” Grobman tells me. “All of this can take weeks, possibly months. If you're an adversary, you can build yesterday and deploy today. Time is very much on the side of the attacker.”

This scenario doesn’t change when AI becomes involved. In fact, AI brings its own unique set of challenges. However, Grobman’s concerns aren’t from the Elon Musk school of thought.

“I am more worried about overly trusting the outcomes from AI as opposed to it going rogue, per se,” he explains. “There can sometimes be an overconfidence in the ability of AI to do things that it’s not really doing. With AI or machine learning, you can actually have a model that looks very good but is actually worthless.”

To demonstrate this problem, Grobman built his own machine learning model that he claimed could predict the winner of the Super Bowl. On the surface, the model worked, correctly predicting the outcome nine out of 10 years. However, Grobman had intentionally overtrained the model, having it learn the noise of the games he knew he would be testing it on, rather than developing it to understand anything about American football.
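Grobman’s Super Bowl model isn’t described in code, but the failure mode he’s pointing at — a model that memorizes its training data instead of learning anything real — can be sketched in a few lines. This is an illustrative toy, not McAfee’s model: the “features” are pure random noise, so there is nothing to learn, yet a model that simply memorizes its training set still scores perfectly on it.

```python
import random

random.seed(0)

# "Train" on random noise: labels are coin flips, so the
# features carry no real signal at all.
train = [([random.random() for _ in range(5)], random.randint(0, 1))
         for _ in range(200)]

# The "model" is a lookup table that memorizes every example.
memorized = {tuple(x): y for x, y in train}

def predict(x):
    # Exact-match recall: perfect on data it has seen,
    # just guessing on anything new.
    return memorized.get(tuple(x), 0)

train_acc = sum(predict(x) == y for x, y in train) / len(train)

# Fresh, unseen data drawn the same way.
test = [([random.random() for _ in range(5)], random.randint(0, 1))
        for _ in range(200)]
test_acc = sum(predict(x) == y for x, y in test) / len(test)

print(f"train accuracy: {train_acc:.2f}")  # looks impressive
print(f"test accuracy:  {test_acc:.2f}")   # roughly a coin flip
```

Evaluated only on data it was trained on, the model looks flawless; on anything genuinely new it is no better than chance — which is exactly why, as Grobman argues next, the question to ask a vendor is what their model was tested on.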

“The point is, when you apply it to cybersecurity there's a lot of companies that are saying 'here's how amazing our machine learning model is. Look how effective it is!' and you really just have to understand some of the nuance of how it’s being positioned. Is it being trained? Is it being tested on things that are very similar to what it was trained on? Those are the things you need to worry about.

“Most of these models don't really know what an attack is. It's not like a person who's saying, ‘there's bad things happening - this is an attack’. It's based off the attack looking similar enough to things that it's been conditioned or trained on so it's able to classify it correctly. Which is a real risk.”

This very real problem only reinforces Grobman’s belief that man and machine need to work together in order to tackle these emerging security issues. Unfortunately, a widening cybersecurity skills gap is threatening to undermine that working model and seriously impact how companies deal with security.


“All organizations will need a combination of technology and people and different types of organizations have different levels of ability to pay for individuals of varying talent. So, you're going to see cyber security issues impacting organizations that haven't traditionally had major issues.”

And in some ways, this is set to be the biggest security challenge facing companies in the coming years. How do companies develop and deploy technology that is not only successful from a cybersecurity point of view, but can also improve the efficiency of people, helping to mitigate the labor shortage?

So, what can we expect to see more of in the future? Grobman believes we're going to see cloud breaches that will have catastrophic impacts on organizations or people.

“I think we saw the beginnings of that with the Yahoo breach. You know, one single breach impacted three billion accounts. That's a scale unlike anything we've previously seen. I think we'll see more breaches related to non-traditional devices.”

Grobman ends our conversation on a relatively somber note, keen to acknowledge that while companies like McAfee are continuing to fight the good fight against these emerging threats, the battle is nowhere near won.

“I think the sophistication of attacks will grow,” he concludes. “All the great technology that defenders are using today is going to be used to make attacks more effective and we need to get ready for it.”

Charlotte Trueman

Charlotte is Staff Writer at IDG Connect
