California Attorney General Rob Bonta sent a letter to hospital CEOs on potential bias in artificial intelligence, part of a growing trend of attorneys general and lawmakers scrutinizing bias in machine learning.

On August 31, California Attorney General (AG) Rob Bonta sent a letter to hospital CEOs across California, opening an inquiry into potential bias in their use of technology. In his letter, Attorney General Bonta requested information from health care facilities and other providers on how they identify and address racial and ethnic disparities in commercial decision-making tools. The health care industry, like others, has expanded its use of algorithms, applying them to tasks ranging from administrative work to determining whether patients need a referral to specialty care.

Specifically, AG Bonta requested lists of commercially available decision-making tools, products, software, or algorithmic methods that hospitals currently use. The organizations are also asked to explain the purpose of these tools and how they inform hospital decision-making, including providing the AG with any policies, procedures, and training used in applying them. You can read a copy of the letter from AG Bonta here. We will monitor for further developments.

This California inquiry reflects growing interest among AGs and lawmakers in how companies implement artificial intelligence and machine learning. Prior examples include:

  • In May 2022, the National Association of Attorneys General (NAAG) announced the NAAG Center on Cyber and Technology (CyTech), which will develop resources to support state AGs in understanding emerging technologies, including machine learning, artificial intelligence, and potential bias and discrimination that may result.
  • In April 2022, the U.S. Department of Commerce appointed 27 experts to the National Artificial Intelligence Advisory Committee (NAIAC) to provide recommendations regarding the United States’ competitiveness surrounding artificial intelligence. NAIAC was also directed to establish a subcommittee to advise the president on artificial intelligence bias, data security, and the adoptability of artificial intelligence for security or law enforcement.
  • In December 2021, Washington, DC Attorney General Karl Racine introduced legislation that would prohibit companies and institutions from using algorithms that produce biased or discriminatory results.
  • In March 2020, then-Vermont AG Thomas Donovan filed a lawsuit against Clearview AI regarding the company’s use of facial recognition technology. The AG alleged that Clearview used facial recognition technology to map the faces of individuals, including children, and sold the data to businesses and law enforcement in violation of the Vermont Consumer Protection Act.

These inquiries and activities demonstrate that companies must deploy technology carefully and inclusively to avoid potential exposure for alleged bias. Routinely reviewing technology usage, and remaining nimble enough to shift course and correct any concerns, is paramount to avoiding such exposure.
