The Battle Against Deepfake Threats

By Max Dorfman, Research Writer, Triple-I

Some good news on the deepfake front: Computer scientists at the University of California, Riverside have developed a method that detects manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods.

Deepfakes are intricate forgeries of an image, video, or audio recording. They’ve existed for several years, and benign versions appear in social media apps like Snapchat, with its face-changing filters. But cybercriminals have begun using them to impersonate celebrities and executives, creating the potential for greater damage from fraudulent claims and other forms of manipulation.

Deepfakes also have the dangerous potential to be used in phishing attempts to manipulate employees into granting access to sensitive documents or passwords. As we previously reported, deepfakes present a real challenge for businesses, including insurers.

Are we prepared?

A recent study by Attestiv, which uses artificial intelligence and blockchain technology to detect and prevent fraud, surveyed U.S.-based business professionals about the risks their businesses face from synthetic or manipulated digital media. More than 80 percent of respondents recognized that deepfakes pose a threat to their organizations, with the top three concerns being reputational threats, IT threats, and fraud threats.

Another study, conducted by CyberCube, a cybersecurity and technology firm that specializes in insurance, found that the melding of domestic and business IT systems created by the pandemic, combined with the increasing use of online platforms, is making social engineering easier for criminals.

“As the availability of personal information increases online, criminals are investing in technology to exploit this trend,” said Darren Thomson, CyberCube’s head of cyber security strategy. “New and emerging social engineering techniques like deepfake video and audio will fundamentally change the cyber threat landscape and are becoming both technically feasible and economically viable for criminal organizations of all sizes.”

What insurers are doing

Deepfakes could facilitate the filing of fraudulent claims, the creation of counterfeit inspection reports, and even the faking of assets, or of damage to assets, that don’t exist. For example, a deepfake could conjure images of damage from a nearby hurricane or tornado, or create a non-existent luxury watch that was insured and then lost. For an industry that already suffers $80 billion in fraudulent claims, the threat looms large.

Insurers could use automated deepfake detection as a potential safeguard against this novel mechanism for fraud. Yet questions remain about how it can be integrated into existing claims-filing procedures. Self-service insurance, in which policyholders submit their own photos and documents, is particularly vulnerable to manipulated or fake media. Insurers also need to consider the potential for deepfake technology to create large losses if it were used to destabilize political systems or financial markets.

AI and rules-based models that identify deepfakes across all digital media remain a potential solution, as does digital authentication of photos and videos at the point of capture, “tamper-proofing” the media so the insured cannot later substitute their own images. A blockchain or other unalterable ledger might also help.
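
To make the point-of-capture idea concrete, here is a minimal sketch in Python. It is illustrative only: the in-memory `ledger` list stands in for a blockchain or other write-once store, and `register_capture` and `verify_upload` are hypothetical helpers, not any vendor’s API.

```python
import hashlib
import time

# In-memory stand-in for a blockchain or other write-once store
# (illustration only; a real system would anchor records externally).
ledger: list[dict] = []

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of the media, computed at the point of capture."""
    return hashlib.sha256(media_bytes).hexdigest()

def register_capture(media_bytes: bytes, claim_id: str) -> dict:
    """Record the fingerprint and capture time in the append-only ledger."""
    record = {
        "claim_id": claim_id,
        "sha256": fingerprint(media_bytes),
        "captured_at": time.time(),
    }
    ledger.append(record)
    return record

def verify_upload(media_bytes: bytes, claim_id: str) -> bool:
    """Check an upload against the fingerprint registered at capture;
    any post-capture edit, deepfake or otherwise, changes the hash."""
    digest = fingerprint(media_bytes)
    return any(r["claim_id"] == claim_id and r["sha256"] == digest
               for r in ledger)

# A photo registered at capture verifies; a doctored copy does not.
original = b"...raw photo bytes..."
register_capture(original, claim_id="CLM-001")
assert verify_upload(original, "CLM-001")
assert not verify_upload(original + b"edited", "CLM-001")
```

Because the fingerprint is fixed the moment the photo is taken, the insurer never has to judge whether an upload “looks” authentic; anything that doesn’t match the ledger is rejected outright.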

As Michael Lewis, CEO at Claim Technology, states, “Running anti-virus on incoming attachments is non-negotiable. Shouldn’t the same apply to running counter-fraud checks on every image and document?”
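
Lewis’s analogy suggests a screening hook in the claims-intake pipeline. The sketch below is hypothetical: `Detector` stands in for whatever AI model or rules engine an insurer actually deploys, and the 0.8 threshold is an arbitrary placeholder.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Attachment:
    filename: str
    data: bytes

# A detector is any callable mapping media bytes to a manipulation
# score in [0, 1]; real deployments would plug in an AI model or
# rules engine here.
Detector = Callable[[bytes], float]

def screen_claim(attachments: list[Attachment],
                 detect: Detector,
                 threshold: float = 0.8) -> list[str]:
    """Run a counter-fraud check on every attachment, the same way
    anti-virus runs on every incoming file; return flagged filenames."""
    return [a.filename for a in attachments if detect(a.data) >= threshold]

# Usage with a dummy detector that flags nothing (illustration only):
claim_media = [Attachment("roof_damage.jpg", b"...")]
print(screen_claim(claim_media, detect=lambda data: 0.0))  # -> []
```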

The research results from UC Riverside may offer the beginnings of a solution, but as Amit Roy-Chowdhury, one of the paper’s co-authors, put it: “What makes the deepfake research area more challenging is the competition between the creation and detection and prevention of deepfakes which will become increasingly fierce in the future. With more advances in generative models, deepfakes will be easier to synthesize and harder to distinguish from real.”