Why learning to combat AI fraud has never been more important



Artificial intelligence is not new. But rapid innovation over the last year means consumers and businesses alike are more aware of the technology’s capabilities than ever before, and most are likely using it themselves in some shape or form.

The AI revolution also has a downside: it empowers fraudsters. Rather than simply bringing more productivity and creativity to the workplace, one of the most significant effects we’re witnessing is that the evolution of large language models and generative AI is giving fraudsters new tactics to explore for their attacks – attacks that now have a quality, depth and scale with the potential for increasingly disastrous consequences.

This increased risk is felt by consumers and businesses alike. Experian’s 2023 Identity and Fraud Report found that just over half (52%) of UK consumers feel they are more of a target for online fraud now than they were a year ago, while over 50% of businesses report a high level of concern about fraud risk. It is vital that both businesses and consumers educate themselves on the types of attack taking place, and on what they can do to combat them.

Getting familiar with the new types of fraud attack

There are two key trends emerging in the AI fraud space: the hyper-personalization of attacks, and the subsequent increase of biometric attacks. Hyper-personalization means that unsuspecting consumers are increasingly being scammed by targeted attacks that trick them into making instant transfers and real-time payments.

For businesses, email compromise attacks can now use generative AI to copy the voice or writing style of a particular company, producing genuine-looking requests that encourage employees to carry out financial transactions or share confidential information.


Generative AI makes it easier for anyone to launch these attacks, by allowing them to create and manage many fake bank, ecommerce, healthcare, government, and social media accounts and apps that look real.

These attacks are only set to increase. Historically, generative AI wasn’t powerful enough to be used at scale to create a believable representation of someone else’s voice or face. Now, it is all but impossible for the human eye or ear to distinguish a deepfake face or voice from a genuine one.

As businesses increasingly adopt additional layers of identity verification controls, fraudsters will increasingly turn to these types of attacks to get around them.

Types of attack to look out for

These include:

Mimicking a human voice: There has been substantial growth in AI-generated voices that mimic real people. These schemes mean consumers can be tricked into thinking they’re speaking to someone they know, while businesses that use voice verification for functions such as customer support can be misled.

Fake video or images: AI models can be trained, using deep-learning techniques, on very large sets of digital assets such as photos and videos to produce high-quality, authentic-looking images or videos that are virtually indiscernible from real ones. Once trained, these models can blend and superimpose images onto other images and into video content at alarming speed.

Chatbots: Friendly, convincing AI chatbots can be used to build relationships with victims to convince them to send money or share personal information. Following a prescribed script, these chatbots can extend a human-like conversation with a victim over long periods of time to deepen an emotional connection.


Text messages: Generative AI enables fraudsters to replicate personal exchanges with someone a victim knows, using well-written scripts that appear authentic. They can then conduct multi-pronged attacks via text-based conversations with multiple victims at once, manipulating them into carrying out actions that can involve transfers of money, goods, or other fraudulent gains.

Combating AI by embracing AI

To fight AI-enabled fraud, businesses will need to use AI and machine learning themselves to ensure they stay one step ahead of criminals.

Key steps to take include:

Identifying fraud with generative AI: Using generative AI for fraudulent transaction screening or identity theft checks is proving more accurate at spotting fraud than previous generations of AI models.

Increasing use of verified biometrics data: Generative AI cannot currently replicate an individual’s retina, fingerprint, or the way someone uses their computer mouse, making verified biometrics a valuable additional layer of defense.

Consolidating fraud-prevention and identity-protection processes: All data and controls must feed systems and teams that can analyze signals and build models that are continuously trained on good and bad traffic. Indeed, knowing what a good actor looks like will help businesses prevent impersonation attempts against genuine customers.

Educating customers and consumers: Proactively educating consumers in personalized ways, across numerous communication channels, helps ensure they are aware of the latest fraud attacks and of their own role in preventing them. This enables a seamless, personalized experience for authentic consumers, while blocking attempts from AI-enabled attackers.

Use customer vulnerability data to spot signs of social engineering: Vulnerable customers are much more likely to fall for deepfake scams. Processing and using this data for education and victim protection will enable the industry to help those most at risk.


Why now?

The best companies use a multi-layered approach to fraud prevention – there is no single silver bullet – minimizing as far as possible the gaps that fraudsters look to exploit. For example, by participating in fraud data-sharing consortiums and data exchanges, fraud teams can share knowledge of new and emerging attacks.

A well-layered strategy that incorporates device, behavioral, consortia, document and ID verification checks drastically reduces the weaknesses in the system.
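As an illustration only, the layered approach described above can be sketched as a simple risk score that combines independent signals before a decision is made. The layer names, weights, and thresholds below are entirely hypothetical; real fraud engines use far richer, continuously retrained models.

```python
# Hypothetical sketch of a multi-layered fraud risk score.
# Each layer (device, behavior, consortium data, document/ID checks)
# contributes an independent signal; no single check is a silver bullet.

def risk_score(signals: dict) -> float:
    """Combine per-layer risk signals (0.0 = clean, 1.0 = suspicious)
    into one overall score using illustrative weights."""
    weights = {
        "device": 0.2,       # unrecognized device or emulator
        "behavior": 0.3,     # typing/mouse patterns unlike the real user
        "consortium": 0.3,   # identity flagged by a data-sharing network
        "document_id": 0.2,  # document or ID verification failed
    }
    return sum(weights[layer] * signals.get(layer, 0.0) for layer in weights)

def decision(score: float) -> str:
    # Thresholds are made up for illustration.
    if score >= 0.6:
        return "block"
    if score >= 0.3:
        return "step-up"  # request extra verification, e.g. verified biometrics
    return "allow"

# A session that fails several independent layers is blocked even though
# no single check is conclusive on its own.
suspicious = {"device": 1.0, "behavior": 0.8, "consortium": 1.0, "document_id": 0.0}
print(decision(risk_score(suspicious)))  # block
```

The point of the sketch is that weaknesses in any one layer (say, a deepfaked voice beating a single biometric check) are compensated for by the others, which is exactly the gap-minimizing property a layered strategy is meant to provide.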

Combating AI fraud will now be part of that strategy for all businesses that take fraud prevention seriously. The attacks will become more frequent and sophisticated, requiring a long-term protection strategy – one that covers every step in the fraud prevention process, from consumer to attacker – to be implemented. This is the only way for companies to protect themselves and their customers from the growing threat of AI-powered attacks.


This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

