Insurers deploy AI governance for themselves

Aside from following regulations and ethics standards for using AI, insurers are also putting their own governance measures in place.

AI advances such as Gen AI have been arriving faster than insurers can react, and adoption raises concerns about biased handling of data and about whether a carrier’s operations can handle the technology’s demands, according to tech executives from Nationwide, Falcon Risk Services and Société Générale Americas.

“The role of chief data officer, governance, is a huge essential part of using AI responsibly, and using data responsibly, both in and outside of AI,” said Doris Brophy, chief data officer, Société Générale Americas, the U.S. arm of the European financial services firm.

Gopika Shah, senior vice president and technology officer at Falcon Risk Services.

Financial services and insurance tech professionals tend to focus on how to improve daily processes or how to innovate, according to Gopika Shah, senior vice president and technology officer at Falcon Risk Services, a management, professional and cyber liability insurance services organization. “But when we think about responsible AI, we also have to think about how we are going to impact human rights and how we are going to impact our society as well,” she said. “In those areas, there are certain things that go beyond traditional data governance. That is transparency, knowing where your data is getting used and how it’s getting used.”

AI technology is not a completely unknown quantity, according to Brophy, but when it comes to governance, AI intensifies existing considerations and accelerates the manner and pace of change.

“The big difference is pretty much all the existing frameworks expand,” she said. “Intellectual property. That expands. What data was the model trained on? Did you have a right to train your models on that data? That expands. Transparency about the outputs.”

Jim Grafmeyer, chief enterprise architect at Nationwide.

Nationwide, with its split-team approach to Gen AI, has one team of professionals focused on governance and compliance, “poking holes” in the other team’s work where needed, according to Jim Grafmeyer, chief enterprise architect at Nationwide. The technology-focused team asks the compliance team what needs to be true to proceed with using the AI, he added.

This helped Nationwide “develop some quick guardrails that still allow us to say yes to some of the new technology choices,” Grafmeyer said. “We’re in a position right now where we have general-purpose tools rolled out to every associate at Nationwide, and those on the developer side get a copilot too. These associates have a chat-with-your-documents, chat-with-your-data interface. It’s been really helpful to democratize this across all of Nationwide.”

Insurers trying to figure out how to use AI responsibly for data management should begin by asking a few key questions, according to Shah from Falcon Risk. “What is critical for your data is: who is the owner of the data? Who’s responsible for maintaining the data? Do we trust our data or not, and how much?” she said.

In any case, insurers should have personnel dedicated solely to AI governance, Nationwide’s Grafmeyer said. “We found pretty quickly that we need dedicated roles in the Gen AI space,” he said. “We were trying to shoehorn it into the existing structure and it did not work. Specialized roles were important, and we still probably have gaps there, but that’s been key – to not make it people’s part-time jobs.”