Insurance News

New York DFS adopts new AI guidance to curb underwriting biases

Department expects a risk-based approach from carriers

By Kenneth Araullo

The New York Department of Financial Services (NYDFS) has adopted guidance aimed at preventing unintended discrimination resulting from the use of artificial intelligence (AI) in underwriting and pricing processes, according to a circular letter issued by the department.

According to a report from AM Best, the new provisions apply to all insurers authorized to write policies in New York. The NYDFS noted that while AI and external consumer data sources can streamline underwriting and pricing, it is essential to implement safeguards to protect consumers from potential harm.

The guidance defines AI systems and external consumer data sources and clarifies that the terms “unlawful” and “unfair” carry their meanings under state and federal law. The department deliberately omitted a definition of “traditional underwriting” so that the term cannot be read to cover the AI systems or external consumer data sources addressed in the guidance.

According to the circular letter, insurers should evaluate how their data sources might correlate with protected classes and potentially lead to discrimination. If a correlation is found, insurers should assess whether using that data source is necessary.

These provisions apply only to protected classes for which data is available, and insurers are not required to collect additional data to perform these analyses.
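As an illustration only, and not a method prescribed in the circular letter, a carrier's analytics team might screen an external data element for correlation with a protected class along the following lines. The column names, review threshold, and data are hypothetical; a real analysis would use the carrier's own data, statistical standards, and legal review.

```python
# Minimal sketch of a proxy-correlation screen, assuming a hypothetical
# applicant dataset with one external data feature ("credit_utilization")
# and a protected-class indicator ("protected_class_member").
# All names and the 0.3 threshold are illustrative assumptions.
import pandas as pd


def flag_potential_proxy(df: pd.DataFrame,
                         feature: str,
                         protected_flag: str,
                         threshold: float = 0.3) -> bool:
    """Return True if the feature's correlation with the protected-class
    indicator meets or exceeds the illustrative review threshold."""
    corr = df[feature].corr(df[protected_flag].astype(float))
    return abs(corr) >= threshold


# Example usage with synthetic data
applicants = pd.DataFrame({
    "credit_utilization": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "protected_class_member": [0, 1, 0, 1, 0, 1],
})

if flag_potential_proxy(applicants, "credit_utilization", "protected_class_member"):
    print("Correlation above threshold: assess whether this data source is necessary.")
```

A flag from a screen like this would not itself establish unfair discrimination; per the guidance, it would prompt the insurer to assess whether the data source is necessary for the underwriting or pricing purpose.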

The guidance emphasizes that senior management and boards are accountable for the outcomes of AI use but not for the day-to-day development and implementation of these systems.


The NYDFS also expects insurance carriers to adopt a risk-based approach when using AI. Each carrier must set its own sufficiency thresholds and standards of proof according to how the technology is applied and the product involved.

Additionally, insurance companies are responsible for overseeing third-party vendors. While carriers are not expected to fully understand the intricacies of a vendor’s AI system, they should perform due diligence and provide oversight proportional to the risk presented by the vendor’s AI use.

