AI arms race: LogMeIn on the rise of intelligent cyberattacks

With AI-based attacks on the rise, we speak with LogMeIn's CISO Gerald Beuchelt to find out who the threat actors are and what organisations can do to prepare.

The impact of Artificial Intelligence (AI) on the enterprise has been a recurring topic of discussion among business professionals in recent times. Since the end of the last ‘AI winter’ in the mid-to-late 90s, hype for AI-driven technology has steadily increased, and its use cases in business have been thoroughly debated as new technologies come into play. While AI may, at this stage, be partially misused and in some instances overhyped, it brings a wealth of interesting and influential innovation across many different areas.

But just as it promises innovation, AI also has the potential to be used to harm organisations in the form of AI-based cyber-attacks. The use of artificial intelligence by cyber criminals has been on the rise over the last couple of years, with uses ranging from machine learning employed to study patterns of normal user behaviour within a company's network, to botnets carrying out advanced DDoS attacks and other malicious activities.

The truth is, the use of AI by malicious actors may just be starting to ramp up. According to LogMeIn Chief Information Security Officer Gerald Beuchelt, the use of AI to fuel cyberattacks is best understood as an arms race between bad actors and cyber security professionals. Beuchelt says that AI has the potential to give threat actors serious advantages over security experts, creating situations that could overwhelm enterprise systems and networks. We sat down with Beuchelt to talk about this phenomenon, as well as how organisations can more generally improve their approach to developing an all-encompassing cybersecurity profile.


AI-driven cyber-attacks have picked up steam in recent times. Can you provide a snapshot of how you view this development?

One thing I would say is that this is a true arms race. It is a matter of who is faster to deploy [AI technology] in meaningful ways. We've already seen botnets and threat actors deploying platforms that are not necessarily fully AI-enabled or machine learning enabled, but they are very agile, easily configurable for different payloads and very adaptable to different environments.

Like every technology, AI can be used for either good or bad, but particularly in this space of cyber security - and security in general - it is very important to monitor, because the potential advantages that you could get from a truly successful machine learning environment, or artificial narrow intelligence system, are phenomenal. These sorts of attacks could easily overwhelm any kind of defensive measures if used offensively.
