DFS adopts insurance guidance to combat discrimination in artificial intelligence

Adrienne Harris, superintendent of the New York State Department of Financial Services (DFS). Photo credit: New York State DFS website


ALBANY — It was a step meant to protect consumers from unfair or unlawful discrimination by insurers using artificial intelligence (AI). The New York State Department of Financial Services (DFS), a financial regulatory agency, adopted guidance on the topic a few months ago.

“New York has a strong track record of supporting responsible innovation while protecting consumers from financial harm,” Adrienne Harris, DFS superintendent, said in the July 11 announcement. “[The] guidance builds on that legacy, ensuring that the implementation of AI in insurance does not perpetuate or amplify systemic biases that have resulted in unlawful or unfair discrimination, while safeguarding the stability of the marketplace.”

The use of external consumer data and information sources (ECDIS) and artificial intelligence systems (AIS) can benefit insurers and consumers by simplifying and expediting insurance underwriting and pricing processes. However, insurers that use such technologies must establish a proper governance and risk-management framework to mitigate the potential harm to consumers, the DFS said.

The guidance outlines DFS’s expectations for how all insurers authorized to write insurance in New York state develop and manage the integration of ECDIS, AIS, and other predictive models. As outlined in the guidance, insurers are expected to analyze ECDIS and AIS for unfair and unlawful discrimination, as defined in state and federal laws. They’re also expected to demonstrate the actuarial validity of ECDIS and AIS. In addition, insurers are required to maintain a corporate-governance framework that provides “appropriate oversight” of the outcomes of the insurer’s overall use of ECDIS and AIS. DFS also expects insurers to maintain appropriate transparency, risk management, and internal controls, including over third-party vendors and consumer disclosures.

DFS said it finalized the guidance after considering the feedback it received from the companies it regulates and other key stakeholders, including trade associations, advisory firms, universities, and the broader public.