Insurance fraud and the importance of AI governance

As the rising cost of living places more people in difficult financial situations, authorities have warned that more will turn to fraud out of desperation. In the UK alone, cases of insurance fraud increased by more than 60% between 2022 and 2023, a jump estimated to have cost insurers £1.1 billion ($1.4 billion).

With its unrivalled data-processing power, artificial intelligence (AI) has become an essential tool for helping insurers detect, triage, and investigate fraudulent claims. However, adopting AI without effective governance exposes insurers to numerous risks that can damage both the customer experience and the brand's reputation.

What is AI governance? 
AI governance is a collection of frameworks, policies, and practices that serve as guardrails to ensure that AI technologies are developed and used in a way that minimizes potential risks, such as bias, and maximizes intended benefits for end-users, customers, and the organization as a whole. This must be ensured both at the point of deployment and on an ongoing basis. AI that is properly governed is built and deployed responsibly, ethically, and in alignment with the business's strategy, while also remaining compliant with regulations.

The need for resilient models
When it comes to fraud detection in insurance, there are certain factors that make AI governance a necessity. 

Firstly, insurers must be confident that their fraud models are sufficiently efficient and accurate. AI governance is vital in fraud detection because it helps keep models resilient to changes in their data or the outside world. Without this resilience, there is the danger of 'data drift', where shifts in the underlying data cause a model's accuracy to decline. This can lead to more false positives being flagged, which not only wastes valuable time and resources but also risks more genuine instances of fraud going undetected and insurers paying out substantial amounts on false claims. Organized fraud syndicates will undoubtedly take any opportunity to exploit underperforming fraud detection models and weakened insurer defenses.
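
As an illustration only, a minimal drift check might compare the distribution of a key claim feature in recent live data against the distribution seen at training time. The sketch below uses a two-sample Kolmogorov-Smirnov test; the feature, figures, and threshold are hypothetical assumptions, not any insurer's actual pipeline.

```python
# A minimal sketch of one common drift check: comparing the distribution of a
# model feature between training data and recent live claims. The feature name,
# data, and threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def feature_has_drifted(train_values: np.ndarray,
                        live_values: np.ndarray,
                        p_threshold: float = 0.01) -> bool:
    """Flag drift when the live distribution differs significantly from training."""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < p_threshold


# Hypothetical usage: claim amounts seen during training vs. last month's claims.
rng = np.random.default_rng(42)
train_claim_amounts = rng.lognormal(mean=8.0, sigma=1.0, size=5000)
live_claim_amounts = rng.lognormal(mean=8.4, sigma=1.2, size=1000)  # shifted distribution

if feature_has_drifted(train_claim_amounts, live_claim_amounts):
    print("Data drift detected: retraining or recalibration may be needed.")
```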

Secondly, governance is important because the stakes are extremely high when a claim is suspected of being fraudulent. If fraud is proven, the customer will not only have their policy cancelled but could also face criminal prosecution, with a maximum prison sentence of 10 years in the UK under the Fraud Act 2006. Conversely, if an insurer makes a false allegation, it can face significant reputational damage, loss of customers, and even regulatory fines. These considerable legal consequences make it vital that insurers are completely confident in both the accuracy and the explainability of their AI models.

In order to communicate the reasoning behind a decision, the outputs of fraud detection models need to be sufficiently explainable. This matters not only for the customer who has been suspected of fraud, but also in a legal setting if criminal proceedings take place. AI governance can help insurers align their fraud models with these legal requirements and ensure there are clear and unambiguous grounds for investigating a claim that appears suspicious.
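
As a rough illustration of what per-claim explainability can look like in practice, the sketch below uses the open-source SHAP library with a tree-based classifier. The model, feature names, and data are hypothetical placeholders rather than a real fraud model, and SHAP is only one of several possible explainability techniques.

```python
# A minimal sketch of per-claim explainability, assuming a tree-based fraud
# classifier and the open-source SHAP library. Features, data, and model are
# hypothetical placeholders for illustration.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["claim_amount", "days_since_policy_start", "prior_claims"]

# Hypothetical training data standing in for historical claims.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] + rng.normal(size=500) > 1).astype(int)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Explain why one flagged claim received its score: per-feature contributions.
flagged_claim = X_train[:1]
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(flagged_claim)[0]

for name, contribution in sorted(zip(feature_names, contributions),
                                 key=lambda pair: -abs(pair[1])):
    print(f"{name}: {contribution:+.3f}")
```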

Automated AI governance is the only way insurers can truly understand the performance of their fraud detection models. Continuous, automated monitoring and updating helps them keep pace with evolving fraud, ensuring their models maintain their performance levels post-deployment, adapt to new fraud methods and modus operandi, and improve over time.
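
One simple form such monitoring can take, sketched here with hypothetical figures and thresholds, is tracking the precision of fraud flags against investigators' confirmed outcomes and alerting when it slips below the level measured at deployment.

```python
# A minimal sketch of post-deployment performance monitoring, assuming that
# investigators' confirmed outcomes are fed back for each flagged claim.
# The baseline precision and alert threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class WeeklyOutcome:
    flagged: int          # claims the model flagged as suspicious this week
    confirmed_fraud: int  # flagged claims investigators confirmed as fraud


BASELINE_PRECISION = 0.60   # hypothetical precision measured at deployment
ALERT_DROP = 0.10           # alert if precision falls 10 points below baseline


def check_precision(outcome: WeeklyOutcome) -> None:
    precision = outcome.confirmed_fraud / max(outcome.flagged, 1)
    if precision < BASELINE_PRECISION - ALERT_DROP:
        print(f"ALERT: precision {precision:.2f} below baseline; review or retrain the model.")
    else:
        print(f"Precision {precision:.2f} within the expected range.")


# Hypothetical weekly figures.
check_precision(WeeklyOutcome(flagged=120, confirmed_fraud=55))
```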

Scrutiny from AI regulations
A pressing concern for insurers is the growing number of AI regulations in different jurisdictions around the world, such as the newly approved EU AI Act and, in the UK, the Financial Conduct Authority's (FCA) Consumer Duty, which comes into effect on July 31. The latter will require insurers to provide evidence of how their AI has reached its decisions. The way organizations build and adopt AI systems is coming under greater scrutiny, and as a result insurers will be held to greater levels of accountability for their AI-assisted decisions.

Simply detecting fraud is no longer enough. It must be done in an explainable and transparent manner to comply with regulations, and it must be done continuously, in line with the evolving industry, rather than solely at initial deployment. This is where AI governance comes into play. Implementing robust AI governance ensures that all of an insurer's AI systems, including their fraud detection models, are aligned with business objectives, regulations, strategy, and the expectations of customers.

What is next for AI in fraud detection?
As technology in the insurance industry continues to advance, so too do the strategies and sophistication of fraudsters, and insurers must remain constantly vigilant to these evolving techniques. For instance, in the last two years criminals have begun to use generative AI to create synthetic identities and produce fake images and documents to support fraudulent claims.

AI holds promise in enabling insurers to stay one step ahead of fraudsters, but this will only be realized if proper AI governance practices are implemented and maintained. Without them, insurers will struggle to deliver the levels of explainability and transparency needed across their business to protect themselves from damage, and declining model performance will go unnoticed for too long, allowing fraudulent claims to evade investigators. With effective governance, models can be made resilient to changes in their data and the world around them, helping insurers stay on the front foot in the fight against fraudsters.

For more on this topic: AI in the Insurance Industry: The good, the bad and the unknown
