Ethical AI starts with the data

Insurance companies are moving to adopt artificial intelligence despite concerns over data quality, bias and cost, according to the 2024 Ethical AI in Insurance Consortium (EAIC) Survey. Eighty percent of companies are already using AI or plan to do so within the year.

However, 69% of respondents say they are dissatisfied with current approaches to addressing and reporting AI model biases. The survey indicates respondents are interested in educating employees about AI biases and implementing ethics training.

Robert Clark, founder and CEO of Cloverleaf Analytics and a member of the EAIC, said data profiling is necessary to discover what data is available across an organization. 

“It’s a project to get into machine learning and AI. You need to have the dedication to get the data cleaned up first and then to maintain and keep that data clean. Then, once you start AI, you can’t just set it and forget it. You have to actually go in and if you’re finding anomalies in the data, you have to clean them up. … I think a lot of insurance executives don’t realize when you take on a project like this it’s not a one-time budget item and it’s done. It’s ‘What is my annual budget?’ because it’s going to continue year after year.”
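As a rough illustration of the kind of profiling pass Clark describes, a minimal sketch in Python might look like the following; the file, column names and thresholds are hypothetical, not a reference to any insurer's actual data.

```python
# Minimal data-profiling sketch; "policies.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("policies.csv")

# Summarize each field: type, share of missing values, distinct values.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "pct_missing": (df.isna().mean() * 100).round(1),
    "n_unique": df.nunique(),
})
print(profile.sort_values("pct_missing", ascending=False))

# Flag simple anomalies worth cleaning before any modeling.
print("duplicate rows:", df.duplicated().sum())
if "annual_mileage" in df.columns:
    suspect = df[(df["annual_mileage"] < 0) | (df["annual_mileage"] > 100_000)]
    print("out-of-range annual_mileage rows:", len(suspect))
```

Profiling like this is not a one-off step; rerunning it on a schedule is how anomalies that creep in later get caught, which is the ongoing budget Clark warns about.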

The Ethical AI in Insurance Consortium also released an AI Code of Ethics and a diagram on how to use AI without bias. 

Varun Mandalapu, senior data scientist at Mutual of Omaha, told Digital Insurance in an emailed response that one of the first steps in data hygiene is standardizing the format of the data.

“For example, it’s important to normalize textual data to follow a consistent style or template. This uniformity is especially important when extracting and interpreting complex medical conditions from unstructured clinical notes, as it helps in the use of advanced natural language processing (NLP) techniques,” Mandalapu said. “Building AI models with substandard data can severely undermine their reliability, accuracy and fairness, encapsulating the ‘garbage in, garbage out’ principle where poor input data leads to flawed outputs. Prioritizing high-quality data in AI development is crucial to avoiding these negative outcomes, ensuring the models’ operational efficacy, and maintaining the ethical and reputational standing of the organization.”
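A minimal sketch of that kind of text standardization is shown below; the field contents, abbreviation mapping and cleaning rules are illustrative assumptions, not Mutual of Omaha's actual pipeline.

```python
# Minimal sketch of normalizing free-text notes to a consistent style
# before NLP extraction; the abbreviation mapping is illustrative only.
import re
import unicodedata

def normalize_note(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)   # unify unicode forms
    text = text.lower()                          # consistent casing
    text = re.sub(r"[^\w\s]", " ", text)         # drop stray punctuation
    text = re.sub(r"\s+", " ", text).strip()     # collapse whitespace
    abbreviations = {"htn": "hypertension", "dm2": "type 2 diabetes"}
    return " ".join(abbreviations.get(tok, tok) for tok in text.split(" "))

print(normalize_note("Pt has HTN,  DM2 \u2013 stable."))
# -> "pt has hypertension type 2 diabetes stable"
```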

According to recent Arizent research, the top business concerns for insurers related to implementing AI include loss of personal touch with customers and clients; the introduction of new ethical concerns and biases; and loss of customer or client trust. 

Jessica Leong, CEO of Octagram Analytics and former president of the Casualty Actuarial Society, said that when building a model, examining the accuracy of the data is necessary to make sure the model will pass tests for unfair discrimination.

“The first thing that we would recommend is testing outcomes based on inferred race. From the very outset. If your goal is to make sure you have a model that passes a test like that, then from the outset, have a look at your data by inferred race,” Leong said. “I will tell you now that would make every insurance company incredibly nervous. There are many that would not be willing to do that. Which, I would understand but that is one approach you can take. Have a look at the quality of your data by inferred race. There is always missing data and there is always data that is wrong but does that differ by inferred race?”
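One way to run the check Leong describes is sketched below, under the assumption that an inferred-race label already exists (for instance from a proxy method such as BISG); the file and column names are hypothetical.

```python
# Minimal sketch: compare data quality by an inferred group label.
# The "inferred_race" column is assumed to exist already (e.g. from a
# proxy method such as BISG); file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("policies_with_inferred_race.csv")

# Share of missing values in each field, broken out by inferred group.
missing_by_group = (
    df.drop(columns=["inferred_race"])
      .isna()
      .groupby(df["inferred_race"])
      .mean()
)
print((missing_by_group * 100).round(1))

# A large gap between groups on any field is a signal that imputation
# and modeling choices downstream may affect those groups differently.
```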

Leong said the mortgage and lending industry has been testing models for unfair discrimination for some time, and there are best practices that can be transferred.

“I think a lot of people, especially actuaries, I’m an actuary, think of the data as the data and if it says this, then it’s right. Actually, there’s a reason why it takes a year to build a model and 80% of that is spent on data because the data is not just the data. We do a lot to that data, we fill in missing values, for example. So, let’s say that 10% of the time, the last annual mileage is missing. And if you were to infer race, it’s missing for some races more than others. And you decide that we’re going to assume the worst if that happens. Then you might be putting bias into a model versus if you assume the average. Often you make that decision without thinking, let’s just assume something and we’ll move on. But you could slowly and surely bake in bias into your model by doing that.”
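Leong's mileage example can be simulated directly. The sketch below uses made-up data, group labels and missingness rates to show how an "assume the worst" imputation rule shifts the group with more missing values far more than mean imputation does.

```python
# Simulated version of Leong's example: missing annual mileage is more
# common for one inferred group, so the imputation rule matters.
# All data, group labels and rates here are made up for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "inferred_group": rng.choice(["A", "B"], size=n),
    "annual_mileage": rng.normal(12_000, 3_000, size=n),
})
# Mileage goes missing 5% of the time for group A, 15% for group B.
miss_rate = df["inferred_group"].map({"A": 0.05, "B": 0.15})
df.loc[rng.random(n) < miss_rate, "annual_mileage"] = np.nan

worst_case = df["annual_mileage"].fillna(df["annual_mileage"].max())
mean_fill = df["annual_mileage"].fillna(df["annual_mileage"].mean())

# "Assume the worst" pushes the group with more missing data up much
# further than mean imputation does, quietly baking bias into the model.
comparison = pd.DataFrame({"worst_case": worst_case, "mean_fill": mean_fill})
print(comparison.groupby(df["inferred_group"]).mean().round(0))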

Mandalapu explained these considerations are even more important as the industry deploys more generative AI.

“As we integrate AI – especially generative AI and large language models – into various sectors, we must be aware of the societal biases embedded within the data. That’s why it’s crucial to ensure that these models are trained on diverse and representative datasets to mitigate the risk of perpetuating existing societal biases. It’s important to note that individual behavioral data can be a more appropriate predictor than demographic-based factors, as it is closer to the individual’s actual behavior rather than the generalized characteristics of a group. 

“For example, mobile phone sensors are used to analyze driver patterns, such as acceleration, braking habits and cornering style, to assess driving behavior and risk, rather than relying on demographic factors like gender or ZIP code-specific data. Working proactively in this direction is essential for developing AI that serves all sections of society equitably.”
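As a rough sketch of the behavior-based approach Mandalapu describes, the features below are derived from simulated phone-sensor readings rather than demographic fields; the thresholds and the generated trip are illustrative assumptions only.

```python
# Sketch of behavior-based features from simulated phone-sensor readings;
# the thresholds and the generated trip are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# One ten-minute trip sampled at 1 Hz: speed (m/s) and lateral acceleration (m/s^2).
trip = pd.DataFrame({
    "speed_mps": np.clip(rng.normal(15, 5, 600), 0, None),
    "lateral_accel": rng.normal(0, 1.5, 600),
})
accel = trip["speed_mps"].diff()  # longitudinal acceleration at 1 Hz (m/s^2)
hours = len(trip) / 3600

features = {
    "harsh_braking_per_hour": float((accel < -3.0).sum() / hours),
    "harsh_accel_per_hour": float((accel > 3.0).sum() / hours),
    "hard_cornering_per_hour": float((trip["lateral_accel"].abs() > 4.0).sum() / hours),
    "mean_speed_mps": float(trip["speed_mps"].mean()),
}
print(features)  # inputs describe driving behavior, not gender or ZIP code
```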