AI arms race: LogMeIn on the rise of intelligent cyberattacks

The impact of Artificial Intelligence (AI) on the enterprise has been a topic of widespread discussion among business professionals in recent times. Since the end of the last ‘AI winter' in the mid-to-late 1990s, hype around AI-driven technology has steadily grown, and its business use cases have been thoroughly debated as new technologies come into play. While AI may, at this stage, be partially misused and in some instances overhyped, it brings a wealth of interesting and influential innovation across many different areas.

Yet just as it promises innovation, AI also has the potential to harm organisations in the form of AI-based cyber-attacks. The use of AI systems by cyber criminals has been on the rise over the last couple of years, with uses ranging from machine learning (ML) employed to study patterns of normal user behaviour within a company's network, to botnets carrying out advanced Distributed Denial-of-Service (DDoS) attacks and other malicious activities.

The truth is that the use of AI by malicious actors may just be starting to ramp up. According to LogMeIn Chief Information Security Officer Gerald Beuchelt, the use of AI to fuel cyberattacks is best understood as an arms race between bad actors and cyber security professionals. Beuchelt says that AI has the potential to give threat actors serious advantages over security experts, creating situations that promise to overwhelm enterprise systems and networks. We sat down with Beuchelt to talk about this phenomenon, as well as how organisations can more generally improve their approach to developing an all-encompassing cybersecurity profile.


AI-driven cyber-attacks have picked up steam in recent times. Can you provide a snapshot of how you view this development?

One thing I would say is that this is a true arms race. It is a matter of who is faster to deploy (AI technology) in meaningful ways. We've already seen botnets and threat actors deploying platforms that are not necessarily fully AI-enabled or machine learning enabled, but they are very agile and easily configurable for different payloads and very adaptable to different environments.

Like every technology, AI can either be used for good or bad, but I think particularly in this space of cyber security - and security in general - it is very important to monitor because the potential advantages that you could get from a truly successful machine learning environment, or artificial narrow intelligence system, are phenomenal. These sorts of attacks could easily overwhelm any kind of defensive measures if used offensively.

Recently, we saw a router botnet that was capable of taking over home routers and then deploying standard DDoS botnets, stealing credentials, and sending emails. It essentially turned those devices into platforms, similar to what we saw years ago in the context of nation-state-level attacks. These kinds of attacks have become capabilities that regular criminal threat actors and 'script kiddies' can start to deploy. That is obviously concerning because our defences have not necessarily kept up with these technologies.

What are some of the other technologies that actors are ‘arming' themselves with?

Once you start to get into highly advanced technologies, you see things like deep-fake videos, deep-fake pictures, and deep-fake voices capable of fooling voice recognition, all of which can be used to bypass existing systems. You really get to the point where many of our traditional ways of identifying humans are being called into question. We have to think critically about the types of authentication factors and security controls we put in place - for example around access control - in order to better take into account the capabilities that malicious actors have these days.

One thing that does concern me as a CISO for a relatively large enterprise is the ability of threat actors to orchestrate and optimise the delivery of TTPs (tactics, techniques, and procedures) against us through coordinated machine learning, or some basic AI capabilities. These sorts of things could easily overwhelm our current defences.

We have analytics platforms and other smart ways of identifying signals when it comes to alerting and monitoring. However, if there was an ability for a smart bot to pass through social graphs/networks, like Facebook and LinkedIn, and leverage that information to formulate targeted phishing campaigns against our users at scale, it could actually overwhelm our capabilities to defend ourselves today. 

We haven't seen that yet and we will rely a lot on our partners for tools like phishing prevention and intrusion detection to keep up with those kinds of developments. Otherwise, our security teams are going to be completely overwhelmed. 


In talking about a 'cyberwar' between the 'good guys' and malicious threat actors, what are some of the best uses of AI in defensive cybersecurity? 

One of the most helpful developments currently available is the set of advances we're seeing in anomaly detection. That includes the ability to establish a baseline for any signals coming from systems that access sensitive data across the board, and then having intelligent systems detect anomalies within that baseline and raise the appropriate alerts to humans.
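The baseline-and-deviation idea described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the signal names, event counts, and the three-sigma threshold are all invented assumptions.

```python
# Minimal sketch of baseline-and-deviation anomaly detection: flag any
# signal whose current value strays too far from its historical baseline.
# Signal names, counts, and the threshold are hypothetical examples.
from statistics import mean, stdev

def detect_anomalies(baseline, current, threshold=3.0):
    """Return (signal, value) pairs deviating from the baseline mean
    by more than `threshold` standard deviations."""
    alerts = []
    for signal, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        value = current.get(signal, 0)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            alerts.append((signal, value))
    return alerts

# Hourly counts of sensitive-data reads per service account (hypothetical).
baseline = {
    "svc-reporting": [40, 42, 39, 41, 40, 43, 38, 41],
    "svc-backup":    [5, 6, 5, 4, 6, 5, 5, 6],
}
current = {"svc-reporting": 41, "svc-backup": 250}  # backup account spikes

alerts = detect_anomalies(baseline, current)  # the spike is surfaced to a human
```

In practice the baseline would be learned continuously by an ML model over far richer telemetry, but the principle - model normal, alert on deviation - is the same.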

Once that is working well - and it's starting to work well now in certain areas - it will allow security teams to automate responses. SOC automation has been a topic for a couple of years, and now that the quality of alerting and monitoring is improving - becoming more useful for everyday situations and organisations - automating responsive measures, like shutting down ports or blocking networks, is increasingly beneficial.
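The SOC-automation pattern described above - act automatically on high-confidence alerts, queue the rest for analysts - can be sketched as follows. The severity scale, field names, and the block action are illustrative assumptions, not a real product's API.

```python
# Hedged sketch of SOC response automation: alerts above a severity cutoff
# trigger a containment action; everything else goes to a human analyst.
# The severity scale and action format are invented for illustration.
def respond(alerts, auto_threshold=8):
    """Split alerts into automated containment actions and a review queue."""
    actions, for_review = [], []
    for alert in alerts:
        if alert["severity"] >= auto_threshold:
            # In a real SOC this would call a firewall or EDR API.
            actions.append(f"block {alert['source_ip']}")
        else:
            for_review.append(alert)
    return actions, for_review

alerts = [
    {"source_ip": "203.0.113.7", "severity": 9},   # active credential stuffing
    {"source_ip": "198.51.100.4", "severity": 4},  # unusual but low-risk login
]
actions, review = respond(alerts)
```

The design choice here mirrors the interview's point: automation handles the clear-cut cases so that analysts are not overwhelmed, while ambiguous signals still get human judgement.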

Those kinds of things are going to help us defend ourselves appropriately. It's about having comprehensive sensors and sensor data and the necessary analytics systems deployed over those sensor grids, with autonomous systems operating in conjunction with alerts to deploy basic measures. That might not incorporate every aspect of a business, but over time the quality of the detection and response mechanisms will only increase, allowing us to scale up as necessary. 



Who's winning in this battle between threat actors and security professionals?

It's still a little too early to tell. My gut instinct is that we're seeing a bifurcation in the maturity of organisations' security profiles. Some organisations are still struggling with patching and basic configuration management; if they don't get those things under control now, they'll be in permanent catch-up mode and eventually be overwhelmed. These basic problems remain some of the most common things that threat actors exploit to execute their attacks.

On the other side, there are organisations that have been taking security seriously for a long time and have already solved those more basic dilemmas. These organisations now have the time to truly think about what advanced defences can look like. Taking that into consideration, I would say that as the capabilities on the adversarial side are increasing, you're going to see a bifurcation, with one side of companies seeing more frequent intrusions and the other more advanced companies experiencing less overall exposure. 

When looking at the threats, 10 or 15 years ago you could easily argue that it was nation-states and their intelligence agencies that had by far the most comprehensive toolset for offensive work, although I think this is changing. We are starting to see extremely advanced attacks from digital or criminal underground actors. As such, the threat-actor landscape is separated out a little bit, between professionals who are industrialising their capabilities and making them available - to governments or criminals - as-a-service (i.e. hacking-as-a-service) and on the other side, there are those that are only moderately successful.  


So the underground black-hat hackers are becoming more advanced than direct government entities - like intelligence agencies - in your view?

Well as an example, Zerodium are buying zero-day vulnerabilities for huge amounts of money, like $2 million (USD) for a zero-click iOS remote exploit. You have to ask yourself who are the buyers for this? Who can actually afford this? It would either be the large digital/criminal underground or nation-states who have that capability. 

As far as we know, it's a marketplace. There are other offerings like it elsewhere on the web, promising interesting returns on zero-days and similar attacks, so I think that whole enterprise is shifting out of the public sector and becoming industrialised and commercialised, just like everything else.


Thinking about AI-based attacks, what are some of the ways that organisations can get prepared and employ a successful approach to defensive measures? 

It depends on the size of the company and how they've structured their overall systems environment. Obviously very large organisations have huge amounts of resources to craft custom defensive measures. If we look at the advanced defensive industrial base of the finance sector, they're deploying anomaly detection systems and large SOCs in order to be able to track and target respective adversaries. 

SMBs are much more challenged. They have far fewer resources, and they don't necessarily have the awareness that they could be targets. We've found that most financial losses actually occur at SMBs rather than at larger businesses, as larger companies generally have more mature security processes.

What I would argue, particularly for SMBs but also to some extent for larger enterprises, is for organisations to truly adopt cloud as a model. If you have a large organisation like Amazon, which can provide a huge IaaS offering, there are certain things you just don't need to worry about. As an example, you don't need to think about whether your data centre has 24/7 power protection from an uninterruptible power supply (UPS). These sorts of things are guaranteed by the attractive service level agreements you have with those providers.

It's the same with SaaS providers like ourselves, Microsoft, Google, and Salesforce. All of these companies offer a huge degree of platform security and have large amounts of resources to put into customer security offerings that small businesses with a dozen employees would never be able to afford. It's about leveraging the utility-driven model that cloud service providers offer and taking advantage of the extra security posture that comes with it.

10 or 15 years ago, you could have argued about whether cloud was more secure or not, and in some ways it was less secure than on-prem. That has changed dramatically over the last 7 to 8 years. Today, if you run your own mail server, you're very likely less secure than if you were using Office 365 or Gmail; it's just a reality of life.


Are there any other common mistakes you see organisations making in terms of developing their security profiles? 

What is sometimes lost on people is that we shouldn't be doing security for security's sake. We do security in the first place because we have valuable assets that need to be protected. You need to assess how anything you do fits in with the larger business strategy. Any organisation has valuable assets that could be at risk of being exploited. Making sure that they keep their IP, ensure uptime for their systems, or whatever security issue they're worried about, is vital for them to have true continuity of operations.

Security is implicitly present everywhere. Making it explicit and identifying key risk indicators and key performance indicators across your different environments is when security starts to become managed appropriately as part of business functions. It should be managed in exactly the same kind of way that HR, finances, and legal exposure are managed. The more it is viewed holistically, and not in a siloed way or just as part of IT, the better your outcomes will be. 

Security must be all-encompassing, and it starts with people. You need to do background checks, define processes, and make sure that everyone in your organisation is aware of what's going on first. Only once you're finished with people and processes do you look at technology. It has to be a holistic and comprehensive approach, otherwise there will invariably be mistakes.


Pat Martlew

Patrick Martlew is a technology enthusiast and editorial guru who works the digital enterprise beat in London. After making his tech writing debut in Sydney, he has now made his way to the UK, where he covers the very latest trends and provides top-grade expert analysis.
