Deepfakes and deep fraud: The new security challenge of misinformation and impersonation

With the significant improvements in deepfake technology, how prepared are governments and organisations to mitigate the security threats?

Deepfakes, until recently, were just an amusing part of the internet. Videos emerged of various celebrities spliced into the wrong movie or interview; some were quite poorly made, but others looked almost like the real thing. They were entertaining and funny, not given much thought and left to a corner of the internet. It was not long, however, before politicians became the next target, with videos emerging of significant figures like Barack Obama, Nancy Pelosi and Donald Trump.

It was at this point that some serious concerns started to develop over the security implications of this technology.

So, what are deepfakes and how do they work? A deepfake is a video or audio clip in which someone's face or voice has been replaced with another person's likeness using artificial intelligence. The name combines "deep learning" and "fake", as machine learning methods are used to create these videos. Most commonly, the process relies on deep learning models called generative adversarial networks (GANs), in which a generator network learns to produce fakes while a discriminator network learns to spot them, each improving against the other until the output becomes convincing.

The reason deepfakes have mostly been reserved for public figures is that these AI models require large volumes of images and video frames to convincingly overlay the target face. But the technology is developing rapidly, and the final products are becoming seamless. With these improvements, it is clear that seeing is no longer believing.
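
To make the mechanism concrete, the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to produce fakes while a discriminator learns to tell them apart from real data. It is a minimal, illustrative example on simple one-dimensional data, not a face-swapping pipeline; real deepfake systems apply the same idea to images with far larger networks.

```python
# Minimal sketch of the GAN idea behind deepfakes (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a distribution the generator must learn to imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should resemble the "real" distribution.
print(generator(torch.randn(256, 8)).mean().item())  # roughly 4.0
```

The same adversarial pressure, scaled up to networks trained on thousands of frames of a target's face, is what drives the quality of modern deepfakes, which is also why public figures, with abundant footage available, make the easiest targets.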

The age of fabrication

In an age of misinformation and fake news, it is more important than ever to separate fact from fiction. The internet has become a breeding ground for alternative facts and conspiracy theories, and companies like Facebook, YouTube and Twitter are under fire for not controlling the spread of misinformation. Deepfakes pose a new kind of threat - videos that can now manufacture false statements by political leaders and key figures.

A perfect example is MIT's recently launched "In Event of Moon Disaster" project, in which a synthetic Nixon announces the failure of NASA's Apollo 11 mission. The video has Nixon read out the contingency speech prepared in case Apollo 11 was unsuccessful; both his face and voice were synthetically produced using AI. The purpose of the project was to highlight the damage a video like this can do to unsuspecting audiences, and how easy it is becoming to undermine our trust. Ironically, the same video could be circulated as evidence for an alternative version of history and be believed.

Deepfakes fundamentally target a person's trust, in particular the belief that video is a reliable form of content. Public understanding plays a large role in the risk deepfakes pose, and Dr Mike Lloyd, CTO at RedSeal, compares the moment to when photographs were considered authentic until society adjusted to the idea of photo manipulation.

"People are not ready for the point that we can now fake videos well enough to trick people, and there will be a dangerous transition process as the broad population slowly cottons on," Dr Lloyd says.

"Just as we saw with the Twitter hack, where famous accounts were taken over, people need to realize that anything they see in purely digital form can be faked."

Deepfakes can have severe ramifications in a wide range of areas: they can manipulate and distort political campaigns, damage the reputation of public figures, be misused as evidence in court cases, undermine journalistic standards, and serve as the perfect tool for cybercrime and theft.

The deepfake cybersecurity threat

NISOS, a security consulting company, recently published a report on an attempted deepfake fraud and shared the audio clip with Motherboard. The clip, which impersonated a company's CEO, was sent as a voicemail to an employee; the CEO is heard asking for "immediate assistance to finalise an urgent business deal." Fortunately, the employee flagged it to the company because the audio sounded suspicious. This was not, however, the first attempt at deepfake fraud.

The most notorious case of a successful deepfake scam occurred last year, when a UK-based energy firm transferred €220,000 ($243,000) after its CEO believed he was speaking with the chief executive of the firm's German parent company. These incidents highlight how criminals are now weaponising deepfakes for financial gain, and while the technology is not yet perfect, it is improving significantly every day. Deepfakes may become as common as phishing emails targeting unsuspecting employees.

Companies have to do more than recognise the threat of deepfakes; they have to educate and empower their employees against potential attacks. Employees are likely to be the first point of contact for deepfake scams, but they are also the first line of defence. Daniel Cohen, head of anti-fraud products and strategy at RSA Security, explains to IDG Connect that the aim of a social-engineering scheme is "to falsely trigger a person into an action, typically to hand over data, transfer funds, etc."

Cohen believes education and awareness are two critical factors in tackling these schemes. He states that organisations should be "helping employees identify the kinds of tricks and manipulations fraudsters use, and teach them to think twice before taking action. It is also advisable for organisations to put in place processes such as double-approval or mutually-exclusive approval, especially when it comes to the finance team or anyone else with access privileges that might be targeted by fraudsters."

An approach similar to zero trust will likely be the best solution for organisations, with employees building a habit of verifying and double-checking requests. In the scenario of a deepfake voicemail or call, that could mean following up with an email to the same person to confirm the details of the request.
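
As a rough illustration of what such a control might look like in practice, the sketch below models a payment request that can only be executed once two distinct approvers have signed off and the request has been confirmed through a separate channel. The class and function names are hypothetical; a real implementation would live inside an organisation's payment or workflow systems.

```python
# Hypothetical sketch of a double-approval control for payment requests.
# Names and fields are illustrative, not any specific product's API.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    requester: str                         # who asked for the transfer (e.g. a "CEO" voicemail)
    amount: float
    beneficiary: str
    out_of_band_confirmed: bool = False    # confirmed via a separate channel (e.g. email or call-back)
    approvals: set = field(default_factory=set)

def approve(request: PaymentRequest, approver: str) -> None:
    # The requester can never approve their own request.
    if approver == request.requester:
        raise ValueError("requester cannot approve their own transfer")
    request.approvals.add(approver)

def can_execute(request: PaymentRequest) -> bool:
    # Require out-of-band confirmation plus at least two distinct approvers.
    return request.out_of_band_confirmed and len(request.approvals) >= 2

# Example: a voicemail "from the CEO" asks for an urgent transfer.
req = PaymentRequest(requester="ceo", amount=243000.0, beneficiary="unknown-supplier")
approve(req, "finance_manager")
approve(req, "financial_controller")
print(can_execute(req))   # False until the request is confirmed out of band
req.out_of_band_confirmed = True
print(can_execute(req))   # True
```

The point is not the code itself but the process: no single voice, however convincing, should be able to move funds on its own.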

There is also the question of whether the technology will be regulated. It seems unlikely. Cohen believes that introducing regulation will not stop criminals from using the technology for malicious purposes.

"As with any nefarious activity, regulation itself will do very little to prevent deepfake attacks from taking place. I do believe though, that regulation should completely forbid deepfakes being used as a tool to ‘stand in' for the human, for example a deepfake of a sanctioned message from a president to their nation," he comments.

While there are legitimate uses for deepfakes, it will be good practice for governments and enterprises to start preparing for the eventual cybersecurity threat. For now, the most reasonable tactics for mitigating deepfake scams are good judgement and a healthy dose of scepticism.