Identity Crisis: How to Prepare for a Surge in AI-Driven Fraud

By Gadi Mazor, CEO, BioCatch

Since its public release at the end of 2022, ChatGPT has enchanted, impressed and delighted people all around the world, with countless think-pieces expounding on the near-endless potential of artificial intelligence (AI). But while AI’s future (and present) applications and capabilities are undoubtedly exciting, they also present immediate new risks and challenges for fraud prevention and broader cybersecurity.

In this new era, passwords, device IDs, one-time passcodes and similar legacy authentication tools no longer suffice to give us online peace of mind. Long gone are the days when every scammer used some version of the same poorly written email purportedly from a Nigerian prince promising to make us rich beyond our wildest dreams. With AI-powered cyberattacks, that email, text or phone call now appears to come from a real-life family member and isn’t so obviously fraudulent. It looks real. And—perhaps more disturbingly—it feels real.

The scale of fraud in the UK

Today, fraud is the most prevalent crime in the United Kingdom, constituting as much as 40 percent of all crimes across England and Wales. The Global Anti-Scam Alliance (GASA) has estimated fraudsters worldwide steal a combined $1 trillion yearly. Brits lost £580 million to fraud in the first half of 2023 alone. UK banks were able to prevent a further £651 million of fraud over that same timespan, thanks to their use of fraud controls, but those legacy systems will struggle to keep up with the scams of the future.

We’re already seeing a rise in more intricate scams, thanks to the widespread use of AI. In 2024, I expect these kinds of scams to intensify in both number and sophistication. Many of these new scams leverage AI’s ability to mimic individuals’ appearances and voices—your friend’s face or your mother’s voice—manipulating trust and fear to deceive people into parting with their hard-earned money. At the heart of this trend are so-called deepfakes: near-perfect audio and video forgeries generated by AI algorithms.

Deepfake scams play on emotions and impart a sense of urgency, leading victims to make snap decisions. These scams often involve AI-generated voice impersonations, with scammers pretending to be loved ones needing critical financial help as soon as possible. UK Finance data showed 45,367 cases of impersonation scams across the UK in 2022, costing a total of £177.6 million. The organisation also discovered that only around half (51 percent) of people checked to see whether a request for money or personal information was legitimate, falling to just 38 percent for 18-to-34-year-olds.

With a sizeable chunk of the UK population at risk, businesses should—and, increasingly, do—shoulder the responsibility of educating their customers and introducing stronger security tools that minimise the risk at its source. But first, what exactly are these new threats?

Today’s top AI-driven fraud risks

As discussed, deepfakes use artificial intelligence to manipulate images, videos or audio of real people, making them appear to say or do things they never said or did, or never would. Scammers can use deepfakes to create fake news, fake interviews, fake endorsements and fake evidence. The internet is packed with artificial (but believable!) renderings of celebrities, such as Tom Cruise and Scarlett Johansson, and politicians, such as Nancy Pelosi and Barack Obama, falsely making inflammatory statements or behaving out of character.

Some voice-based scams have already grown incredibly intricate, such as the “Hey Mom, Hey Dad” scam, in which criminals use AI to mimic the voices of loved ones. Scammers then employ the technology to contact and trick family members into believing their relative is in crisis, convincing loved ones to send money that fraudsters intercept and steal.

Large language models (LLMs) generate natural-language text in response to a given input or prompt. They can produce coherent, fluent text that mimics the style, tone or content of a specific domain, person or genre. Fraudsters can use LLMs to create malicious chatbots as well as fake reviews, posts, profiles, emails, images and videos.
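
To make that mechanism concrete, here is a minimal Python sketch of prompt-driven text generation. It uses the open-source Hugging Face transformers library and the small public gpt2 model, both chosen purely for illustration and not referenced anywhere in this article; the point is simply how little effort fluent, human-sounding text now takes to produce.

```python
# Minimal sketch of prompt-driven text generation with an open model.
# The Hugging Face "transformers" library and the small public gpt2 model
# are illustrative choices only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The committee would like to remind all members that"
result = generator(prompt, max_new_tokens=40)

# The model continues the prompt in fluent, human-sounding prose.
print(result[0]["generated_text"])
```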

Chatbots, in particular, present massive fraud risks. We find legitimate versions of these automated messaging platforms everywhere businesses offer customer service, as they provide a cost-effective way to interact with consumers quickly and easily. A compromised chatbot, however, can use deceptive messages to convince people to hand over sensitive information, such as passwords, bank details and credit-card numbers. Some LLMs designed specifically for fraudulent activities—such as FraudGPT and WormGPT—are now for sale on the dark web. These tools can identify fraud targets using defined criteria, list third parties with which potential victims have personal relationships and research the links between a victim and a trusted third party. Their capabilities only continue to grow. Soon, LLMs like these may gain the ability to create deepfake videos to solicit funds from victims and build matching websites to collect payment credentials.

How can businesses protect themselves and their customers?

From malware attacks to phishing scams, generative AI (GenAI) is powering a new wave of scams, hacks and identity thefts. The technology makes it incredibly difficult for financial institutions and other entities to detect patterns of money-laundering activities, especially when they’re hidden in a conversation generated by a GPT (Generative Pre-trained Transformer), an artificial-intelligence language model.

As a result, 86 percent of financial-crime-prevention leaders have invested in new technologies, including AI security tools. Of these leaders, 66 percent have highlighted AI-powered scams as a growing threat. As we navigate this ever-changing world, banks must add defences immediately to bolster their protection against deepfake and AI-driven fraud.

Customer-facing indicators—such as official watermarks, timestamps, labels, warnings or ratings—can help customers distinguish legitimate content from a scam by signalling a piece of content’s source, authenticity and reliability. People can also use tools or apps that help them verify or analyse content—reverse image searches, fact-checking websites and deepfake-detection software are good examples.
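
As a rough illustration of how such an indicator could work behind the scenes, the sketch below signs a piece of content together with its timestamp using a keyed hash, so a verifier can later confirm that neither has been altered. It relies only on Python’s standard hmac and hashlib modules; the key handling and message format are assumptions made for illustration, not a description of any particular provider’s scheme.

```python
# Toy sketch of a verifiable authenticity indicator: the publisher signs content
# plus a timestamp with a secret key, and a verifier recomputes the tag to
# confirm nothing has been altered. Key handling and message format are
# illustrative assumptions, not any specific product's design.
import hashlib
import hmac
import time

SECRET_KEY = b"publisher-signing-key"  # hypothetical key, held server-side

def sign_content(content: bytes, timestamp: int) -> str:
    message = content + str(timestamp).encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_content(content: bytes, timestamp: int, tag: str) -> bool:
    return hmac.compare_digest(sign_content(content, timestamp), tag)

ts = int(time.time())
tag = sign_content(b"Official statement from Example Bank.", ts)
print(verify_content(b"Official statement from Example Bank.", ts, tag))  # True
print(verify_content(b"Tampered statement.", ts, tag))                    # False
```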

The kind of behavioural biometric intelligence we offer at BioCatch examines and monitors how people interact with devices and apps, providing insights into digital sessions that help banks detect unusual behaviour deviating from historical patterns. By analysing physical behavioural biometric patterns (for example, mouse movements, typing speed, pressure on touchscreens) in combination with cognitive intent signals (e.g., user hesitation, distraction, disjointed typing, long-term memory recall, familiarity with the user interface), our behavioural biometric intelligence can detect and prevent activity indicative of a manipulated account holder or customer.
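
BioCatch’s production models are proprietary and far richer than anything shown here, but the toy sketch below illustrates the underlying idea: compare a session’s behavioural features with the user’s historical baseline and flag large deviations. The feature names, values and threshold are assumptions made purely for illustration.

```python
# Toy illustration of baseline-versus-session comparison only; this is NOT
# BioCatch's actual model. Feature names, values and the threshold are
# assumptions for illustration.
from statistics import mean, stdev

# Hypothetical historical sessions for one user: feature -> past observations.
history = {
    "typing_speed_cpm":  [210, 198, 225, 205, 215],  # characters per minute
    "mouse_velocity_px": [480, 510, 495, 470, 505],  # average pixels per second
}

def session_anomaly_score(session: dict) -> float:
    """Return the largest absolute z-score across the session's features."""
    scores = []
    for feature, observed in session.items():
        past = history[feature]
        sigma = stdev(past) or 1.0  # avoid division by zero
        scores.append(abs(observed - mean(past)) / sigma)
    return max(scores)

# A session where typing is far slower and more hesitant than usual.
suspect_session = {"typing_speed_cpm": 90, "mouse_velocity_px": 300}

if session_anomaly_score(suspect_session) > 3.0:  # illustrative threshold
    print("Flag session for step-up verification or manual review")
```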

It can also detect mule accounts and help visualise the extensive mule networks fraudsters use to launder money generated through a relentless and ever-evolving onslaught of scams. Often, that money funds global crime rings, which traffic in drugs, arms or human beings. Behavioural biometric intelligence provides real-time visibility of these mule networks and is (and will continue to be) essential in fighting, deterring and stopping cybercriminals who use AI.
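
As a rough sketch of the graph-based view this enables, the example below models transfers as a directed graph using the open-source networkx library and flags accounts that receive funds from several distinct senders and pass them onward. The data and thresholds are invented for illustration; real mule detection combines many more signals, behavioural ones included.

```python
# Toy sketch of spotting a funnel-shaped transfer pattern often associated with
# mule accounts. Data and thresholds are invented for illustration; real
# detection systems use far richer signals.
import networkx as nx

G = nx.DiGraph()
transfers = [
    ("victim_a", "mule_1"), ("victim_b", "mule_1"), ("victim_c", "mule_1"),
    ("mule_1", "offshore_x"),
    ("customer_d", "merchant_1"),
]
G.add_edges_from(transfers)

for account in G.nodes:
    # Many distinct inbound senders whose funds are funnelled onward.
    if G.in_degree(account) >= 3 and G.out_degree(account) >= 1:
        print(f"Review account for possible mule activity: {account}")
```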

Boost your behavioural barricades

From banking to shopping to working to socialising, digital services now form the backbone of our daily lives as we merge our virtual and physical existences. As providers grow their risk appetites and harness technological developments, such as AI, to offer new services, they must prepare for the latest fraud vectors and threats.

Here, every business offering an online experience shoulders a huge responsibility to provide a digital service that customers both trust and enjoy. If an online experience grows unpleasant, unreliable or unsafe, customers will start searching for an alternative. Fortunately, measures exist to ensure a secure, seamless experience online—even among today’s (and tomorrow’s!) many new hazards. With behaviour fast becoming the only online differentiator between humans and bots, new defences can create a sanctuary of safety and trust in an increasingly fraudulent world.
