Navigating the AI insurance landscape

Experts discuss opportunities and ethical imperatives


By Mika Pangilinan

Artificial intelligence (AI) has long been a steady force in insurance, utilized by many companies to streamline different aspects of their business.

But recent advancements in the technology, including the growing prominence of generative AI platforms like ChatGPT and Google’s Bard, have made room for further discussions about the opportunities and risks associated with its use.

In a recent roundtable discussion with Insurance Business Canada, senior representatives from Definity and Google offered their insights on the role of AI in the industry.

Neil Bunn, director of client engineering for Google’s strategic verticals in Canada, emphasized how AI has driven efficiencies across claims, compliance, fraud detection, underwriting, and customer experience.

At Definity, AI technology has been used to identify potential fraud or suspicious activity, as well as opportunities for consultative risk services and building inspections for commercial property customers.

The company has also leveraged machine learning (ML) for first notification of loss (FNOL) benefits, assigning specialized adjusters and providing personalized recommendations for repair shops and medical providers.

“Investing in innovative technologies like generative AI is one of many avenues Definity is pursuing to achieve its goal of becoming a top-five P&C insurer in Canada,” said Jeffrey Baer, VP, enterprise analytics and data office at Definity.

Addressing the risk of bias amplification in AI models

As AI becomes more pervasive in the insurance sector, ethical concerns surrounding bias have also come to the forefront.


“The number one critique of any sort of analytical model is its ability to amplify any sort of systemic bias that’s already prevalent in our society,” said Elizabeth Bellefleur-MacCaul, senior actuarial analyst at Definity.

To navigate this challenge, Bellefleur-MacCaul stressed the importance of understanding the potential for bias in analytic models, even those that have been programmed to “de-bias” certain processes.

“This would include the underlying assumptions in the data that we’re using, the tools that we are selecting, and the methodology related to predictive modelling, but also ensuring that we have a framework in place to ensure that once something has gone live, it’s not then doing the opposite of what we’re intending,” she said.
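The roundtable did not describe the technical shape of such a framework. As a minimal illustrative sketch of one piece of it, a post-deployment monitor might periodically compare model outcome rates across groups and flag any gap beyond a set tolerance; the group labels, threshold, and function below are assumptions for illustration, not Definity’s implementation.

# Illustrative only: a hypothetical post-deployment bias check of the kind
# described above, not an actual insurer framework.
from collections import defaultdict

def approval_rate_gap(decisions):
    """Return (gap, rates) for positive-outcome rates across groups.

    `decisions` is an iterable of (group_label, approved) pairs,
    where `approved` is True or False.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical batch of live model decisions collected after go-live.
live_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

TOLERANCE = 0.10  # assumed threshold; a real framework would set this deliberately
gap, rates = approval_rate_gap(live_decisions)
if gap > TOLERANCE:
    print(f"Bias alert: approval-rate gap {gap:.2f} exceeds tolerance ({rates})")
else:
    print(f"Within tolerance: gap {gap:.2f} ({rates})")

In practice, a check like this would sit alongside the reviews of data assumptions, tooling, and modelling methodology she describes, rather than replace them.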
