Insurance companies urged to step up AI governance amid increasing regulation
Insurance companies should be increasingly engaged in the governance of AI systems in the face of growing regulatory pressure. Every organization should have an AI governance program in place to avoid violating privacy and data protection laws, facing accusations of discrimination or bias, or engaging in unfair practices.
“As soon as a similar regulation or legislation is passed, organizations are placed in a precarious position because [lack of governance] can lead to fines, loss of market share, and bad press. Every business that uses AI needs to have this on their radar,” said Marcus Daley, technical co-founder of NeuralMetrics.
NeuralMetrics is an insurtech data provider that aids in commercial underwriting for property and casualty (P&C) insurers. The Colorado-based firm’s proprietary AI technology also serves financial services companies and banks.
“If carriers are using artificial intelligence to process personally identifiable information, they should be tracking that very closely and understanding precisely how that’s being used, because it is an area of liability that they may not be aware of,” Daley told Insurance Business.
How could AI regulations impact the insurance industry?
The Council of the European Union last month officially adopted its common position on the Artificial Intelligence Act, becoming the first major body to establish standards for regulating or banning certain uses of AI.
The law assigns AI to three risk categories: unacceptable risk, high-risk applications, and other applications not specifically banned or considered high-risk. Insurance AI tools, such as those used for the risk assessment and pricing of health and life insurance, have been deemed high-risk under the AI Act and must be subject to more stringent requirements.
What’s noteworthy about the EU’s AI Act is that it sets a benchmark for other countries seeking to regulate AI technologies more effectively. There is currently no comprehensive federal legislation on AI in the US. But in October 2022, the Biden administration published a blueprint for an AI “bill of rights” that includes guidelines on how to protect data, minimize bias, and reduce the use of surveillance.
The blueprint contains five principles:
Safe and effective systems – individuals must be protected from unsafe or ineffective systems
Algorithmic discrimination protections – individuals must not face discrimination from AI systems, which should be used and designed in an equitable way
Data privacy – individuals should be protected from abusive data practices and have agency over how their data is used
Notice and explanation – users should be informed when an automated system is being used
Alternative options – users must be able to opt out when they want to and access a person who can remedy problems
The Blueprint for an #AIBillofRights is for all of us:
– Project managers designing a new product
– Parents seeking protections for kids
– Workers advocating for better conditions
– Policymakers looking to protect constituents https://t.co/2wIjyAKEmy
— White House Office of Science & Technology Policy (@WHOSTP) October 6, 2022
The “bill of rights” is regarded as a first step towards establishing accountability for AI and tech companies, many of which call the US their home. However, some critics say the blueprint lacks teeth and are calling for tougher AI regulation.
How should insurance companies prepare for stricter AI regulations?
Daley suggested insurance companies need to step up the governance of AI technologies within their operations, embedding several key attributes in their AI governance plans.
Daley stressed that carriers must be able to answer questions about their AI decisions, explain outcomes, and ensure AI models stay accurate over time. This transparency has the added benefit of supporting compliance by providing proof of data provenance.
When it comes to working with third-party AI technology providers, companies must do their due diligence.
“Many carriers don’t have the in-house talent to do the work. So, they’re going to have to go out and seek aid from an outside commercial entity. They should have a list of things that they require from that entity before they choose to engage; otherwise, it could create a massive amount of liability,” Daley said.
To stay on top of regulatory changes and advances in AI technologies, insurance companies must consistently monitor, review, and evaluate their systems, making changes as needed.
Rigorous testing will also help ensure that biases are eliminated from algorithms. “Governance is just a way to measure risk and opportunities, and the best way to manage risk is through automation,” Daley said. Automating inputs and testing the outputs produced creates consistent, reliable results.
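The kind of automated output testing Daley describes can be illustrated with a minimal sketch. The check below applies the four-fifths (80%) rule, a common heuristic in disparate-impact testing; the group names, sample outcomes, and threshold are illustrative assumptions, not a method attributed to NeuralMetrics or any carrier.

```python
# Illustrative sketch of an automated fairness check on model outputs.
# The 80% threshold reflects the "four-fifths rule" heuristic used in
# disparate-impact testing; groups and outcomes here are hypothetical.

def approval_rate(decisions):
    """Share of positive (approve) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def passes_four_fifths(decisions_by_group, threshold=0.8):
    """True if every group's approval rate is at least `threshold`
    times the highest group's approval rate."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Hypothetical model outputs (1 = approved, 0 = declined):
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # approval rate 0.75
    "group_b": [1, 0, 1, 1, 0, 1, 1, 0],  # approval rate 0.625
}
print(passes_four_fifths(outcomes))  # prints True (0.625 >= 0.8 * 0.75)
```

Run on every new batch of model decisions, a check like this turns a governance policy into a repeatable, automated gate rather than a one-off manual review.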
To nurture trust with clients, regulators and other stakeholders, insurance companies must ensure that their AI processes remain accurate and free from bias.
Carriers should also watch the sources of their data and whether those sources are compliant. “As time goes on, you see that sometimes the source of the data is AI. The more you use AI, the more data it generates,” Daley explained.
“But under what circumstances can that data be used or not used? What’s the nature of the source? What are the terms of service [of the data provider]? Ensuring you understand where the data came from is as crucial as understanding how the AI generates the results.”
Do you have any thoughts about AI regulation? Share them in the comments.