Artificial Intelligence Regulation in the Insurance Industry – 2023: A Year in Review

2023 was a productive year for regulators working to understand rapidly developing technologies, including artificial intelligence, predictive models, and algorithms, and to consider whether and how to regulate them. Because existing insurance laws and regulations often are broad enough to sweep in these new technologies, both the industry and regulators want a better understanding of how the technologies are being used and how to approach their regulation. In 2023, many initiatives took root and spurred further efforts to protect consumers as new technological innovations reached the insurance industry. To put the stakes in perspective, McKinsey estimates that generative artificial intelligence’s impact on productivity “could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases” that McKinsey analyzed, potentially exceeding the United Kingdom’s 2021 GDP of $3.1 trillion.[1] Moreover, three-fourths of that value “falls across four areas: Customer operations, marketing and sales, software engineering, and R&D.”[2] The insurance industry is thus poised to significantly increase its profitability by deploying new innovation and artificial intelligence in its operations and distribution systems. Locke Lord is prepared to partner with its clients in these endeavors and to assist them in navigating this dynamically changing regulatory landscape.

The following 2023 regulatory initiatives are important for the insurance industry to understand and monitor going into 2024.

NAIC

In December 2023, at the Fall NAIC Meeting, the Innovation, Cybersecurity, and Technology (H) Committee adopted the Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers (as amended during the meeting). The Model Bulletin is a template communication that insurance regulators can use to guide insurers in employing AI Systems consistent with existing market conduct, corporate governance, and unfair and deceptive trade practice laws. It is intended to strike a balance between encouraging innovation and protecting the insurance-buying public from potential harms associated with the use of AI Systems, such as unlawful bias or discriminatory practices.
During 2023, the NAIC Big Data and Artificial Intelligence (H) Working Group conducted separate surveys of life insurers and home insurers and issued reports on these insurers’ use of AI in areas such as claims, underwriting, marketing, fraud detection, and loss prevention. These surveys followed a similar survey of private passenger auto insurers completed in 2022.

Colorado

Effective November 14, 2023, the Colorado Department of Regulatory Agencies (“DORA”) promulgated a regulation entitled “Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models,” implementing aspects of Colo. Rev. Stat. § 10-3-1104.9 (Concerning Protecting Consumers from Unfair Discrimination in Insurance Practices). DORA has clarified that the regulation applies only to individually issued insurance policies and does not apply to group life insurance policies or annuity contracts. Colo. Div. of Ins., Bulletin B-10.002, Concerning Applicability of Colorado Insurance Regulation 10-1-1 (December 4, 2023). DORA also clarified that it is not prescribing a specific format for insurers to use in attesting that they do not use external consumer data and information sources (“ECDIS”), or algorithms or predictive models that use ECDIS, but that such attestation must be signed by an officer of the insurer and “unambiguously state the insurer does not use ECDIS, or any algorithms or predictive model that uses ECDIS, with any insurance practice, as defined in Colorado Insurance Regulation 10-1-1.” Colo. Div. of Ins., Bulletin B-10.001, Concerning Attestations for Life Insurers that Do Not Use External Data and Information Sources (December 4, 2023).
DORA also has issued a draft proposed regulation entitled “Concerning Quantitative Testing of External Consumer Data and Information Sources, Algorithms, and Predictive Models Used for Life Insurance Underwriting for Unfairly Discriminatory Outcomes.” This proposed regulation addresses quantitative testing requirements for life insurers that use ECDIS, to ensure that such use is not unfairly discriminatory based on race or ethnicity. The proposed regulation was exposed for informal comment on September 28, 2023, and a stakeholder meeting was held on October 19, 2023.
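To make the concept of quantitative testing concrete, the sketch below shows one common form such testing can take: comparing underwriting approval rates across demographic groups. This is an illustrative example only, not the methodology the draft regulation prescribes; the function name, sample data, and 0.9 review threshold are all hypothetical.

```python
# Illustrative sketch only -- not the Colorado draft regulation's
# prescribed methodology. It compares underwriting approval rates
# across demographic groups against a reference group.
from collections import defaultdict

def approval_rate_disparity(decisions, reference_group):
    """decisions: iterable of (group, approved) pairs, e.g. ("Group A", True).
    Returns each group's approval rate relative to the reference group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    reference_rate = rates[reference_group]
    # Ratios well below 1.0 flag groups approved less often than the
    # reference group and may warrant deeper statistical analysis.
    return {g: rate / reference_rate for g, rate in rates.items()}

# Hypothetical usage: flag any group whose relative approval rate falls
# below an internally chosen review threshold (0.9 here is arbitrary).
sample = [
    ("Group A", True), ("Group A", True), ("Group A", False),
    ("Group B", True), ("Group B", False), ("Group B", False),
]
for group, ratio in approval_rate_disparity(sample, "Group A").items():
    if ratio < 0.9:
        print(f"{group}: relative approval rate {ratio:.2f} -- flagged for review")
```

Real-world testing programs would typically layer statistical significance analysis and documentation requirements on top of simple ratios like these.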
Additionally, DORA hosted stakeholder meetings concerning unfair discrimination in private passenger auto insurance, with respect to both underwriting and governance. To date, however, no draft proposed regulations tailored to private passenger auto insurance have been exposed.

Federal Initiatives

In June 2023, Senate Majority Leader Chuck Schumer announced his SAFE Innovation Framework, setting policy objectives to address artificial intelligence. He has since hosted a series of AI Insight Forums bringing together leaders to discuss issues presented by artificial intelligence; nine forums have been held to date on various topics. While these discussions have not directly impacted insurers, insurers will want to monitor these developments in 2024 for any indirect impact.
In July 2023, the Securities and Exchange Commission (“SEC”) proposed rules “that would require broker-dealers and investment advisers (“firms”) to take certain measures to address conflicts of interest associated with their use of predictive data analytics and similar technologies to interact with investors to prevent firms from placing their interests ahead of investors’ interests.”[3] The proposed rule would apply whenever firms use, or reasonably foreseeably may use, covered technology in an investor interaction, and it is intended to supplement existing rules, including Regulation Best Interest. Under the proposal, disclosure and informed investor consent would not suffice; the conflict of interest must be eliminated or its effect neutralized. The SEC’s Division of Examinations has already begun collecting information on the use of artificial intelligence by investment advisers on topics such as “AI-related marketing documents, algorithmic models used to manage client portfolios, third-party providers and compliance training.”[4]

“The SEC’s requests in the sweep letter, which cover 26 broad topics, reflect known agency concerns. The letter, for example, demands that firms turn over documents on the management of potential AI-linked conflicts of interest. The letter also asks firms to provide information on their contingency plans for system failure, reports on AI systems causing regulatory or legal issues, and recent examples of advertising that mentioned AI.”[5]

On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which focuses on the “extraordinary potential for both promise and peril” of artificial intelligence. The Executive Order sets forth certain defined terms and implementation deadlines, as well as requirements for NIST and other federal agencies to coordinate in developing “best practices” and guidelines “to help ensure the development of safe, secure, and trustworthy AI systems.”[6]
On November 15, 2023, U.S. Senators Amy Klobuchar (D-MN), John Thune (R-SD), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM) introduced in the Senate the bipartisan Artificial Intelligence Research, Innovation, and Accountability Act, which would establish “a framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI.”[7]
On November 21, 2023, the FTC authorized the use of compulsory process in investigations involving AI-related products and services, enhancing the FTC’s ability to issue civil investigative demands relating to artificial intelligence.[8] “Although AI, including generative AI, offers many beneficial uses, it can also be used to engage in fraud, deception, infringements on privacy, and other unfair practices, which may violate the FTC Act and other laws. At the same time, AI can raise competition issues in a variety of ways, including if one or just a few companies control the essential inputs or technologies that underpin AI.”[9]
On December 15, 2023, Rep. Lisa Blunt Rochester (D-Del.) and Rep. Larry Bucshon, M.D. (R-Ind.) introduced the bipartisan House Artificial Intelligence Literacy Act, which would amend the Digital Equity Act to include AI literacy as part of digital literacy.[10]

European Union

On December 8, 2023, European Union policymakers reached agreement on a law called the A.I. Act, although certain approval formalities still need to be completed. The A.I. Act “set[s] a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation.”[11]

Throughout 2023, Locke Lord has assisted its insurance industry clients with the development and implementation of “AI Best Practices,” including providing artificial intelligence training and education to our clients. As AI regulation evolves in 2024, we will continue to advise our clients on compliance and legal issues arising from artificial intelligence, predictive models, and algorithms.

2023 “Best Practices” for AI Use and Development

Identify the problem(s) you want to solve
Confirm AI is the right solution: Consider the risks and challenges
Do not wait for regulatory regimes, laws, and rules to take effect before acting
Make a plan

Begin with established compliance infrastructure
NIST’s AI Risk Management Framework (AI RMF 1.0)
Layer in the concepts and concerns from regulatory initiatives, bulletins, guidance, etc.

Prepare for change and scrutiny (must be nimble)
Get buy-in from the top down
Build an inter-disciplinary AI Governance Team
Consider appointing an AI Chief Risk Officer
Report to the Board or Board Committee
Expand your existing compliance program (there are no AI exceptions)
Create and implement an AI Use Policy
Implement vendor management and ensure transparency and visibility
Test regularly
Ensure there is a human in the loop to validate test results
Hire or retool resources to support AI Systems and related legal and compliance
Provide AI Training and Education to employees and agents
Use pilot programs
Protect your IP
Ensure documentation, including policies and procedures, and maintain recordkeeping

Create and maintain an inventory of predictive tools and identify the controls (a minimal sketch of an inventory record follows this list)

Consider purchasing insurance as a risk management tool
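As the inventory item above anticipates, even a lightweight structured record per tool makes governance reporting, and gaps in testing, visible. The following is a minimal sketch assuming a homegrown Python structure; every field name is hypothetical and not drawn from any statute, regulation, or bulletin.

```python
# Illustrative sketch of a predictive-tool inventory record.
# All field names are hypothetical, not regulatory requirements.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class PredictiveToolRecord:
    name: str                           # e.g., "Accelerated underwriting triage"
    owner: str                          # accountable business unit or officer
    vendor: Optional[str]               # third-party provider, if any
    uses_ecdis: bool                    # whether external consumer data feeds the tool
    insurance_practice: str             # e.g., "underwriting", "claims", "marketing"
    controls: list = field(default_factory=list)   # governance controls mapped to the tool
    last_tested: Optional[date] = None  # date of most recent fairness/performance test
    human_in_loop: bool = True          # whether a person reviews the tool's outputs

# Hypothetical usage: surface ECDIS-dependent tools with no documented test.
inventory = [
    PredictiveToolRecord(
        name="Accelerated underwriting triage",
        owner="Chief Underwriting Officer",
        vendor="ExampleVendor Inc.",
        uses_ecdis=True,
        insurance_practice="underwriting",
        controls=["annual bias test", "vendor audit clause"],
    ),
]
needs_testing = [r.name for r in inventory if r.uses_ecdis and r.last_tested is None]
print(needs_testing)  # -> ['Accelerated underwriting triage']
```

A record in this form lets a compliance team answer examiner-style questions mechanically, such as which ECDIS-dependent tools lack a documented test date.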

2024 AI Trends

McKinsey lists generative AI first among eight CEO priorities for 2024.[12] Consistent with those priorities, in 2024 we anticipate the following AI legal and compliance trends:

State insurance departments will begin issuing guidance consistent with either the NAIC Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers or the Colorado Regulations Governing the Use of ECDIS;
State insurance departments will continue to hire, train and educate more staff and data scientists to address regulation of artificial intelligence, predictive modeling and algorithms;
Colorado will expand its promulgation of AI regulations to address the use of ECDIS in all lines of insurance;
Market conduct exams may be expanded to include a review of artificial intelligence, predictive modeling and algorithm governance issues;
Class action litigation will increase in connection with the insurance industry’s use of artificial intelligence and other technological innovations, alleging either that such practices unfairly discriminate against the insurance-buying public or that AI used in claims handling resulted in unfair claims settlement practices;
State insurance departments will exercise greater scrutiny over insurance company practices and filings related to the use of new technological innovations, including artificial intelligence, predictive models and algorithms; and
The SEC will continue to conduct artificial intelligence sweep exams impacting broker-dealers and investment advisers.

In light of the foregoing, the insurance industry should be prepared to address new and developing regulatory challenges arising out of the use and deployment of AI Systems. 2024 will be a big, and perhaps watershed, year on this score. Please reach out to your Locke Lord attorney for further information and consultation.

[1] The economic potential of generative AI: The next productivity frontier

[2] Id.

[3] SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers

[4] SEC Probes Investment Advisers’ Use of AI

[5] SEC Probes Investment Advisers’ Use of AI

[6] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

[7] Klobuchar, Thune, Commerce Committee Colleagues Introduce Bipartisan AI Bill to Strengthen Accountability and Boost Innovation

[8] FTC Authorizes Compulsory Process for AI-related Products and Services

[9] Id.

[10] The Artificial Intelligence (AI) Literacy Act

[11] E.U. Agrees on Landmark Artificial Intelligence Rules

[12] What matters most? Eight CEO priorities for 2024