Deepfake scams hit 29% of New Zealanders, fuelling fears of AI-driven fraud

Insurers and brokers could share the report with clients to highlight the importance of cyber insurance

By Roxanne Libatique

Nearly 29% of New Zealanders and 18% of businesses have been targeted by deepfake scams in the past year, according to research commissioned by Mastercard.

The report highlighted the growing threat posed by generative artificial intelligence (AI), which can impersonate people through manipulated audio, video, and images, deceiving victims into providing money or personal information.

Financial impact of deepfake scams in New Zealand

The financial toll from these scams is significant, with losses across New Zealand estimated in the tens of millions of dollars over the past 12 months.

“Given many victims of these scams are not aware that they have been targeted, this is potentially only the tip of the iceberg,” Sathi said, as reported by Security Brief.

The research found that 10% of those targeted by deepfake scams suffered financial losses, while 27% experienced non-financial impacts such as identity theft or compromised data. Despite these risks, 25% of New Zealanders have yet to implement any preventive measures.

Sathi noted that while generative AI holds promise for positive uses, it is increasingly being exploited by scammers.

“Generative AI technology, while offering incredible potential, can be harnessed in both beneficial and concerning ways. Increasingly we see it is being used to manipulate consumers and businesses out of money in the form of scams involving deepfakes,” she said.

Demographics more vulnerable to deepfake scams

Certain demographics appear more vulnerable to deepfake scams.

According to the survey, 26% of respondents identified grandparents as the most likely victims, followed by mothers at 18%. However, only 21% of people have actively sought out information or taken steps to educate their families on this growing threat.

Public confidence in detecting deepfakes remains low, with just 12% of respondents confident in their ability to recognise them.

At the same time, trust in digital communications is eroding. The survey found that 61% of New Zealanders are less trusting of social media platforms, 40% are less trusting of emails, and 37% are more cautious when receiving phone calls. Of the scams reported, 13% were delivered via email, making it the most common medium for deepfake fraud.

Impact of deepfake scams on businesses

Businesses have also been impacted by deepfake scams, with 18% of New Zealand firms reporting incidents. Nearly half (47%) of those companies fell victim to fraudulent schemes, often involving impersonations of customer service agents, clients, or suppliers.

Some businesses are taking steps to address these risks, with 43% using identity verification to control access to sensitive information, 38% offering cybersecurity training, and 29% running training on financial transactions. Nonetheless, 26% of businesses have not yet implemented any safeguards.

New Zealanders urged to be cautious when sharing information

Sathi advised both individuals and businesses to be cautious when sharing personal or financial information.

“Never give out your personal information or account data without verifying the identity of who you are talking to,” she said.

She also encouraged people to regularly monitor their financial accounts for any signs of fraudulent activity and to report any suspicious transactions to their financial institutions immediately.

Cybercriminal use of AI on the rise

A separate report from cybersecurity firm Trend Micro warns of the increasing use of generative AI in cybercrime.

The report highlighted that some chatbots and large language models (LLMs) being sold in criminal marketplaces are programmed specifically for malicious activities. These tools offer privacy and anonymity to users and are trained on data designed for fraudulent purposes.
