Insurtechs launch Ethical AI in Insurance Consortium

A group of insurtechs has launched the Ethical AI in Insurance Consortium to collaborate on industry-wide standards for fairness and transparency in the use of artificial intelligence in insurance.

The founding members include Cloverleaf Analytics, Exavalu and Socotra. The consortium hopes to increase knowledge sharing among insurance companies, AI developers, regulators and consumer advocacy groups. The organization is working to develop ethical guidelines for the use of AI in underwriting, claims processing and pricing.

Abby Hosseini, chief digital officer of Exavalu, said in an emailed response that each of the founding members is focused on a specific area within the industry.

“We each believe that our unique focus, background and perspectives can add value to carriers and brokers that are racing to leverage the advancements in AI,” Hosseini said. “The first objective is to raise awareness of the impact of algorithmic bias on the insurance industry.”

Hosseini added that it is important to acknowledge that algorithmic bias is a real problem.

“Perhaps the most important initial impact is to raise awareness of the issue of ethics in algorithmic decision making. Our hope is that by making the issue front and center for the industry, we can help accelerate the use of AI with proper guard rails and best practices rather than try to control AI’s increased adoption,” Hosseini said. “The second impact we hope for is that the ethical standard brings a higher level of transparency to the industry and hopes to undo decades of mistrust between the insured and the carrier. The success of the consortium can only be measured by its ability to scale the use of AI with sound ethical standards that are followed by the members.”

Robert Clark, CEO and founder of Cloverleaf Analytics, said in an email that the consortium is looking to establish a code of ethics.

“This is to help consumers have confidence that their insurance company is applying a code of ethics to implementations of AI that could impact them and to help insurance companies avoid bias towards gender, race, age, etc. that could lead to regulatory action, penalties, or even worse class action suits,” Clark said.

The code of ethics will include guidelines on avoiding bias and ensuring fairness, including transparency in the use of algorithmic models and analytical decision making.

For example, Clark expects to see a focus on auditing. “We believe the consortium will define how to implement continuous auditing to ensure when the first signs of a potential bias begin to surface in AI that it is quickly addressed.”
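
The consortium has not published an auditing framework, so the sketch below is purely illustrative of what continuous bias auditing can look like: a hypothetical Python monitor recomputes a disparate impact ratio (the lowest group approval rate divided by the highest) over a rolling window of recent decisions and raises an alert when the ratio falls below a commonly cited 0.8 heuristic. The class name, window size and threshold are assumptions, not consortium standards.

```python
from collections import deque, defaultdict

class BiasAuditMonitor:
    """Illustrative rolling audit of decision outcomes by group.

    Hypothetical sketch only: the window size and the 0.8 threshold
    (a heuristic borrowed from the employment-law "four-fifths" rule)
    are assumptions, not consortium requirements.
    """

    def __init__(self, window_size=1000, threshold=0.8):
        self.decisions = deque(maxlen=window_size)  # recent (group, approved) pairs
        self.threshold = threshold

    def record(self, group, approved):
        """Log one decision; return an alert string if the ratio dips below threshold."""
        self.decisions.append((group, bool(approved)))
        ratio = self.disparate_impact_ratio()
        if ratio is not None and ratio < self.threshold:
            return f"ALERT: disparate impact ratio {ratio:.2f} is below {self.threshold}"
        return None

    def disparate_impact_ratio(self):
        """Lowest group approval rate divided by the highest (None until two groups are seen)."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in self.decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        rates = [approvals[g] / totals[g] for g in totals]
        if len(rates) < 2 or max(rates) == 0:
            return None
        return min(rates) / max(rates)

# Example: feed a small stream of synthetic decisions and print any alerts.
monitor = BiasAuditMonitor(window_size=500, threshold=0.8)
for group, approved in [("A", True), ("A", True), ("B", False), ("B", True), ("B", False)]:
    alert = monitor.record(group, approved)
    if alert:
        print(alert)
```

In practice, a statistical check like this would be only one input into the kind of continuous auditing Clark describes, alongside model documentation and human review of flagged cases.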

Criteria for selecting new members have not been fully determined, but the consortium is targeting insurance companies and insurance solution vendors. It is not yet formally engaged with regulatory bodies, though it has started conversations with several on the topic.