Insurer handbook explores AI bias avoidance

A new guide has been created to help insurers avoid breaching anti-discrimination laws when using artificial intelligence (AI), warning that underwriters must ensure their assumptions are based on reasonable evidence.

While AI promises “faster and smarter” decision making, the Actuaries Institute and the Australian Human Rights Commission say that without adequate safeguards, algorithmic bias may lead to discrimination based on age, race, disability, gender and other characteristics.

“With AI increasingly being used by businesses to make decisions that may affect people’s basic rights, it is essential that we have rigorous protections in place to ensure the integrity of our anti-discrimination laws,” Human Rights Commissioner Lorraine Finlay said.

The joint publication is designed to help actuaries and insurers comply with various laws when AI is used in pricing or underwriting insurance products. It provides practical guidance and case studies to help proactively address the risk.

Actuaries Institute CEO Elayne Grace says there is an urgent need for guidance to assist actuaries, and that the handbook should also reassure consumers that their rights are protected.

“There is limited guidance and case law available to practitioners,” Ms Grace said. “The complexity arising from differing anti-discrimination legislation in Australia at the federal, state and territory levels compounds the challenges facing actuaries, and may reflect an opportunity for reform.”

The “explosive” growth of big data is increasing the use and power of AI and algorithmic decision-making, she said.

“Actuaries seek to responsibly leverage the potential benefits of these digital megatrends. To do so with confidence, however, requires authoritative guidance to make the law clear,” Ms Grace said.

The guide offers practical tips for insurers to help minimise the risks of a successful discrimination claim arising from the use of AI in pricing risk. It lists some strategies for insurers to address algorithmic bias and avoid discriminatory outcomes, including rigorous design, regular testing and monitoring of AI systems.
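
As an illustration of what “regular testing and monitoring” of AI systems might look like in practice, the sketch below checks whether a pricing model’s outputs differ materially across bands of a protected attribute. This is a minimal example only: the column names (age_band, quoted_premium, declined), the tolerance band and the toy data are hypothetical assumptions, not material from the handbook.

```python
import pandas as pd

def check_group_disparity(quotes: pd.DataFrame,
                          protected_col: str = "age_band",
                          premium_col: str = "quoted_premium",
                          declined_col: str = "declined",
                          tolerance: float = 0.10) -> pd.DataFrame:
    """Compare average premiums and decline rates across groups of a
    protected attribute and flag groups deviating from the portfolio-wide
    figures by more than `tolerance` (a hypothetical 10% band)."""
    overall_premium = quotes[premium_col].mean()
    overall_decline = quotes[declined_col].mean()

    summary = quotes.groupby(protected_col).agg(
        avg_premium=(premium_col, "mean"),
        decline_rate=(declined_col, "mean"),
        n=(premium_col, "size"),
    )
    # Relative deviation of each group from the overall portfolio figures.
    summary["premium_gap"] = summary["avg_premium"] / overall_premium - 1
    summary["decline_gap"] = summary["decline_rate"] - overall_decline
    summary["flagged"] = (summary["premium_gap"].abs() > tolerance) | (
        summary["decline_gap"].abs() > tolerance
    )
    return summary

# Example usage with a toy portfolio (fabricated numbers, for illustration only).
quotes = pd.DataFrame({
    "age_band": ["18-25", "18-25", "26-60", "26-60", "61+", "61+"],
    "quoted_premium": [950.0, 1020.0, 780.0, 810.0, 1400.0, 1350.0],
    "declined": [0, 0, 0, 0, 1, 0],
})
print(check_group_disparity(quotes))
```

A flagged group is a prompt for further review, not proof of unlawful discrimination; whether a differential is lawful will depend on the actuarial evidence supporting it and any applicable exemptions.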

AI can aid pricing, underwriting, marketing, claims management and internal operations.

The guide says that where data is limited, some approaches to price setting may be more discriminatory than others, and at greater risk of constituting unlawful discrimination. Insurers should carefully consider the options available, it says, and whether a more discriminatory option can be justified when less discriminatory options exist.

“If including a cut-off based on a customer’s age, the level of age threshold is a matter of judgement for the insurer. Similar considerations may apply to other protected attributes in other situations.

“An insurer should carefully consider all relevant factors, including the availability and impact of a less discriminatory option on the whole population, in order to justify the threshold selected,” the guide said.
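
To make the point about weighing “the availability and impact of a less discriminatory option on the whole population” concrete, the following sketch compares two candidate age cut-offs and counts how many customers each would exclude. The thresholds and the customer data are illustrative assumptions, not figures from the handbook.

```python
import pandas as pd

def cutoff_impact(customers: pd.DataFrame, age_col: str,
                  thresholds: list[int]) -> pd.DataFrame:
    """For each candidate age cut-off, report how many customers would be
    excluded and what share of the whole population that represents."""
    total = len(customers)
    rows = []
    for t in thresholds:
        excluded = int((customers[age_col] >= t).sum())
        rows.append({"cutoff": t,
                     "excluded": excluded,
                     "share_excluded": excluded / total})
    return pd.DataFrame(rows)

# Illustrative comparison of a stricter (70) versus looser (80) cut-off.
customers = pd.DataFrame({"age": [34, 45, 52, 67, 71, 74, 79, 83, 88]})
print(cutoff_impact(customers, "age", thresholds=[70, 80]))
```

A cut-off that excludes a smaller share of the population while still managing the underlying risk would point toward the less discriminatory option; the insurer would still need reasonable actuarial or statistical evidence to justify whichever threshold it selects.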