Online fraud: Facing up to the challenge of spoofing

This is a contributed article by Andrius Sutas, co-founder and CEO of AimBrain


A recent report on cybercrime has revealed that European digital businesses faced 1 billion bot attacks and 80 million fraud attempts in the first quarter of this year alone – with a significant year-on-year uptick in identity spoofing. The knock-on effect is that vast swathes of stolen personal data are now available on the dark web; by one recent estimate, nearly 30 per cent of dark web activity relates to leaked data.

Meanwhile, there is no shortage of new identity authentication technologies coming to the market to replace the traditional ‘password’. So why are organizations not getting ahead of identity fraud? Are the technologies not fit for the task, or are fraudsters matching security developments as fast as they appear, with ever more sophisticated ways of spoofing?


Social overload

Much of the blame is laid on social media channels, and it’s true - we are only now realizing the perils of sharing our photos, videos and details with audiences of millions. But we are also being monitored continually - CCTV, location tracking, calls recorded for training purposes - making it virtually impossible to exist anonymously. So, given that our faces, voices and gestures are on permanent display, it is the technology that needs to adapt to stop our identities being used nefariously.

Developments in onboarding are thankfully curbing the creation of false accounts by combining selfies or voice challenges with liveness checks: simple steps that include a randomized challenge such as blinking or saying a phrase or number. The advent of open APIs and open-source technology means that organizations can integrate biometric steps such as facial recognition or voice recognition on a 1:1 basis (matching against a user’s template, or using OCR to scan identity documents and matching the images) or on a 1:many basis where connectivity exists, such as matching an image against a database of images.


Strength in numbers

Yet with growing investment in large-scale fraud rings, what is strong enough today may not be strong enough tomorrow if deployed in isolation. As we have always said, there is no one silver bullet when it comes to user authentication.

A simple way to bolster security is to use not just one, but a combination of multiple biometric modalities, in configurations that respect the customer and the transaction itself.

There are myriad solutions that act invisibly and can help protect users. Location tracking or permanent device identity checks, anomaly detection (using generic or existing fraud data to recognize signals that may indicate breaches) and behavioral monitoring (continually assessing interactions with devices using keyboards, keypads, touchscreens or a mouse); any combination of these can be used passively to keep a user safe.
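To make the idea of passive combination concrete, here is a minimal sketch of fusing several invisible signals into a single risk score. The signal names, weights and scoring scheme are purely illustrative assumptions, not any vendor's actual model.

```python
# Hypothetical sketch: fuse passive signals into one 0..1 risk score.
# Signal names and weights are illustrative only.

PASSIVE_WEIGHTS = {
    "device_known": 0.4,       # permanent device identity check
    "location_usual": 0.3,     # location tracking
    "behaviour_typical": 0.3,  # keystroke/touchscreen behavioural monitoring
}

def passive_risk(signals: dict) -> float:
    """Return a risk score where 0.0 means every passive check looks normal."""
    risk = 0.0
    for name, weight in PASSIVE_WEIGHTS.items():
        # A failed (or missing) signal contributes its weight to the risk.
        if not signals.get(name, False):
            risk += weight
    return round(risk, 2)

print(passive_risk({"device_known": True, "location_usual": True,
                    "behaviour_typical": True}))   # 0.0 - all checks pass
print(passive_risk({"device_known": True, "location_usual": False,
                    "behaviour_typical": True}))   # 0.3 - unusual location
```

Any weighting could be used; the point is that the user sees nothing unless the combined score crosses a threshold.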

Active modules, such as a voice or facial recognition step with integrated liveness detection, can then be invoked for extra assurance that it is a) a real user and b) the correct user. Furthermore, these can be deployed with various weightings or sequences, depending on the nature of the task at hand and the impact on the customer journey.

Repeat or low-value transaction? Keep the monitoring as passive as possible. Unusual or suspicious behavior? Step up to a user authentication and liveness check such as voice or face verification, eliminating the need for additional hardware or a secondary device and ultimately keeping it simple for the user.
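That step-up logic can be sketched as a simple policy function. The thresholds below (a low-value limit and a risk cut-off) are invented for illustration; a real deployment would tune them per transaction type.

```python
# Hypothetical step-up policy: low-value, low-risk activity stays passive;
# anything else triggers an active face/voice + liveness challenge.
# Thresholds are illustrative assumptions.

def authentication_step(amount: float, passive_risk: float,
                        low_value_limit: float = 30.0,
                        risk_threshold: float = 0.5) -> str:
    """Return 'passive' or 'active' for a given transaction."""
    if amount <= low_value_limit and passive_risk < risk_threshold:
        return "passive"   # keep monitoring invisible to the user
    return "active"        # step up to an explicit liveness-checked challenge

print(authentication_step(9.99, 0.1))    # passive - routine small payment
print(authentication_step(500.0, 0.1))   # active - high value
print(authentication_step(9.99, 0.7))    # active - suspicious behaviour
```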

The challenge, however, is striking the right balance between keeping security watertight and creating minimal impact on the end user - something that dated and inconvenient concepts like hard tokens or SMS 2FA are failing to do. So how do you use today's technology to protect customers from spiraling fraud, conveniently, when two-thirds of customers themselves think “there are simply too many security measures nowadays”?


Multi-device, multi-modal

We believe that the answer lies in smart combinations of biometric authentication steps using everyday technology - for example, a system based on facial recognition with the additional step of saying a randomized number to the camera.

At the customer level this should be as simple as FaceTime or Snapchat, but behind the scenes the voice and face data are combined within an artificial neural network designed specifically for audio-visual synchronization detection. Because the step is time-sensitive, today’s spoofing technology has no practical way to conjure a response that ticks all three boxes of visual, audio and lip movement within the requisite timeframe.
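The challenge-and-verify flow described above can be sketched as follows. The neural-network checks themselves are out of scope, so the face-match and lip-sync results are passed in as booleans; the function names, the four-digit challenge and the ten-second window are all illustrative assumptions.

```python
# Hypothetical sketch of a time-windowed audio-visual challenge.
# The actual face/voice/lip-sync models are stubbed out as boolean inputs.
import random
import time

def issue_challenge(window_s: float = 10.0) -> dict:
    """Issue a random 4-digit number the user must speak to the camera."""
    return {"digits": f"{random.randint(0, 9999):04d}",
            "expires_at": time.monotonic() + window_s}

def verify(challenge: dict, spoken_digits: str,
           face_ok: bool, lips_synced: bool) -> bool:
    """All three checks must pass, inside the time window."""
    in_time = time.monotonic() <= challenge["expires_at"]
    return (in_time and face_ok and lips_synced
            and spoken_digits == challenge["digits"])

c = issue_challenge()
# A live user repeating the right digits on camera passes:
print(verify(c, c["digits"], face_ok=True, lips_synced=True))   # True
# A replayed video speaks the wrong digits, so it fails:
wrong = "0000" if c["digits"] != "0000" else "1111"
print(verify(c, wrong, face_ok=True, lips_synced=True))         # False
```

The time window is what defeats replay: even a convincing pre-recorded face cannot speak a number that was only just generated.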


Regulation and beyond

We are seeing many financial organizations using the Strong Customer Authentication requirements of PSD2 as an opportunity to refresh their entire authentication approach - a smart move. Patch fixes won’t work against sophisticated fraud, so it really pays to put in place something that not only satisfies the regulators and keeps customers happy, but also protects against all manner of spoofing fraud today.

Not that your customers will thank you for it; they remain frustratingly oblivious to your extraordinary efforts to keep them safe. Let’s be quietly victorious, together.


IDG Connect