Smarter than before: Can artificial intelligence amp up fraud detection?

ChatGPT recently passed its one-year anniversary and has sparked a flood of public interest in generative AI. However, artificial intelligence (AI) and machine learning have been used for fraud detection and predictive analytics in the insurance industry for at least 20 years. As generative AI continues to improve, so does its value to the industry.

Claims fraud totals $308.6 billion per year, according to the Coalition Against Insurance Fraud, and insurers continue to look for better ways to fight it. AI can extend insurer oversight to far more claims and locations than human review alone. Additionally, AI can examine more data points in a claim or a claims system and consolidate observations and analysis, using predictive modeling to help point claims and investigative professionals to potential fraud.
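
To make the idea concrete, here is a minimal sketch of the kind of predictive scoring involved. The claim features, training data and referral threshold below are invented for illustration and do not represent any carrier's actual model.

```python
# Minimal sketch of predictive fraud scoring; the features, data and the
# 0.7 referral threshold are hypothetical illustrations.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical historical claims: [claim_amount, days_since_policy_start, prior_claims]
X_train = np.array([
    [2_500,  400, 0],
    [18_000,  15, 3],
    [4_200,  900, 1],
    [22_000,  10, 4],
    [1_100, 1200, 0],
    [15_500,  30, 2],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = investigation confirmed fraud

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score a new claim; a high score only routes it to a human investigator.
new_claim = np.array([[19_000, 12, 3]])
score = model.predict_proba(new_claim)[0, 1]
if score > 0.7:
    print(f"Refer for review (fraud score {score:.2f})")
else:
    print(f"Handle normally (fraud score {score:.2f})")
```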

Avoiding ‘hallucinations’
Even after a company has trained an AI model, the model can produce “hallucinations,” meaning false information the AI has fabricated. Human review of AI outputs to identify errors and potential bias is critical, and claims professionals should not become totally dependent on AI.

Remember that AI is different from predictive analytics, which are based on the probability that a claim could be fraudulent. Claims organizations must make sure that they’re providing appropriate data and taking into account the potential for false positives. Like any data-driven tool, AI is generally only as smart as the company or person teaching it, so claims professionals should still review the data and results to uncover unintentional or hidden bias.
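
One simple way to keep false positives in view is to compare the model's referrals against what investigators ultimately confirmed. The outcome labels in this rough sketch are placeholders, not real results.

```python
# Sketch of measuring false positives; the outcome labels are made up.
from sklearn.metrics import confusion_matrix

y_confirmed = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]  # 1 = fraud confirmed after investigation
y_referred  = [0, 1, 1, 0, 1, 0, 1, 1, 0, 0]  # 1 = model referred the claim for review

tn, fp, fn, tp = confusion_matrix(y_confirmed, y_referred).ravel()
print(f"False positive rate: {fp / (fp + tn):.0%}")  # legitimate claims flagged in error
```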

Some have claimed that AI could perpetuate implicit bias, which may happen when the data set is based on economic factors, such as property values in a specific ZIP code. Anytime an organization sets up an AI model, that model should be blind to factors that could introduce such errors. The data set must be based strictly on unbiased historical information and the probabilities of fraud in a given situation.
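
In practice, keeping a model “blind” to those factors can be as simple as excluding them from the training data. The column names in this sketch are hypothetical proxies, not a prescribed list.

```python
# Sketch of dropping potential proxy variables before training; the column
# names are hypothetical examples of the economic factors described above.
import pandas as pd

claims = pd.DataFrame({
    "claim_amount":   [2_500, 18_000, 4_200],
    "prior_claims":   [0, 3, 1],
    "zip_code":       ["33432", "10001", "60601"],
    "property_value": [450_000, 900_000, 300_000],
})

proxy_columns = ["zip_code", "property_value"]        # potential sources of implicit bias
model_features = claims.drop(columns=proxy_columns)   # the model never sees these columns
print(list(model_features.columns))                   # ['claim_amount', 'prior_claims']
```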

Continual monitoring
Even with precautions, implicit bias could creep into a data set. When bias is identified, it has to be corrected promptly.

Insurers need to have an ethical posture as they use any type of AI — with the understanding that when they find a problem, they must address it immediately. When a company creates a fraud detection program that uses AI, it can’t just “set it and forget it.” The program should continually be reviewed and analyzed to make sure that the AI models are working correctly, the scenarios are right, and the results are accurate. Only then can claims professionals use the AI model to help determine whether a specific claim is valid and should be paid or whether the claim requires further investigation.
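
A recurring review might look something like the sketch below, where the model's referrals are compared with investigation outcomes on a set schedule. The cadence, sample outcomes and 0.80 precision floor are assumptions for illustration only.

```python
# Sketch of a recurring model check rather than "set it and forget it";
# the 0.80 precision floor and the sample outcomes are illustrative only.
from sklearn.metrics import precision_score, recall_score

def quarterly_review(y_confirmed, y_referred, precision_floor=0.80):
    """Compare model referrals with investigation outcomes and flag drift."""
    precision = precision_score(y_confirmed, y_referred)
    recall = recall_score(y_confirmed, y_referred)
    return precision, recall, precision < precision_floor

precision, recall, needs_retraining = quarterly_review(
    y_confirmed=[1, 0, 1, 1, 0, 1, 0, 1],   # investigation outcomes
    y_referred=[1, 1, 1, 1, 0, 1, 1, 1],    # model referrals
)
print(f"precision={precision:.2f} recall={recall:.2f} retrain={needs_retraining}")
```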

The fact that an AI model flags a particular claim for additional scrutiny is not, in itself, enough to deny the claim; the model has merely alerted the claims professional to take a closer look. AI can make rapid connections, for example, flagging an insured whose claims were investigated under an old policy and who now has a similar claim under a new policy. That connection would be much more difficult for a claims professional to make unless the same person were investigating both claims.
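
The connection-making itself can be as straightforward as matching identities across policy records, as in this simplified sketch. The record layout and matching rule are assumptions for illustration.

```python
# Simplified sketch of surfacing a cross-policy connection; the record
# structure and matching key are hypothetical.
prior_investigations = {
    ("jane doe", "1985-03-12"): ["water-damage claim under policy A-1001 (investigated)"],
}

new_claim = {"insured": "Jane Doe", "dob": "1985-03-12",
             "policy": "B-2044", "type": "water damage"}

key = (new_claim["insured"].lower(), new_claim["dob"])
history = prior_investigations.get(key, [])
if history:
    # A match only prompts a closer look; it is never grounds to deny on its own.
    print(f"Suggest review of policy {new_claim['policy']}: prior history {history}")
```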

While AI can be useful in flagging certain claims for further investigation, it can also enable the claims professional to review a claim more quickly, discount factors the AI model identified, and decide that the claim should be paid. As a result, AI can help streamline and improve the efficiency of the claims-handling process.

AI may be better at detecting fraud within certain product lines than others, based on the type and amount of data available. Each company should determine whether it has the right kind of data to drive efficiencies and reliable results using AI models in applicable product lines.

Carriers shouldn’t rely entirely on either AI or people to identify questionable claims. AI may help insurers see a more complete picture, but it is the claims professional who has the necessary training, experience and common sense to say that a particular output is a false positive. AI might have made a connection that wasn’t there or a connection that wasn’t a strong enough indication of malfeasance for the carrier to be concerned about. Human judgment is important, and that isn’t likely to change anytime soon.

Investigate vendors carefully
As with all vendor relationships, carriers need to investigate carefully when engaging an AI company. Here are some factors to consider:

· What is the carrier’s overall AI budget?
· What are the carrier’s available IT resources?
· Who from the carrier’s legal team will work with the vendor and IT?
· Who will be tasked to continually update the data sets and scenarios?
· Who will test and review outputs to screen out potential bias?
· How will the use of AI fit into the carrier’s claims-handling process?
· Who will be responsible for training staff?

Regulators are looking at the insurance industry’s use of AI, especially when it comes to fighting fraud, and they have many concerns. The National Association of Insurance Commissioners is developing best practices for the use of AI in predictive analytics and fraud detection. The Coalition Against Insurance Fraud has offered guidance on the proper and ethical usage of AI models for fraud detection. AI is here to stay, and insurers will have to navigate some rough seas ahead as regulators rethink and reshape compliance requirements in this new AI age.

For more information on this subject, join us at DIGIN 2024 at The Boca Raton on June 27-28.