Representatives of federal enforcement and regulatory agencies, including the Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ), Federal Trade Commission (FTC), and the Equal Employment Opportunity Commission (EEOC), are warning that the emergence of artificial intelligence (AI) technology does not give license to break existing laws pertaining to civil rights, fair competition, consumer protection and equal opportunity.
In a joint statement, the Civil Rights Division of the DOJ, the CFPB, the FTC, and the EEOC committed to enforcing existing laws and regulations even though no AI-specific regulatory framework is currently in place.
“Private and public entities use these [emerging AI] systems to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services,” the joint statement notes. “These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”
Potentially discriminatory outcomes in the CFPB’s areas of focus are a chief concern, according to Rohit Chopra, the agency’s director.
“Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability,” Chopra said. “Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making.”
These agencies have all had to address the rise of AI recently.
Last year, the CFPB published a circular confirming that consumer protection laws remain in place for its covered industries — regardless of the technology being used to serve consumers.
The DOJ’s Civil Rights Division in January published a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services, after a Massachusetts lawsuit alleged that an algorithm-based scoring system used to screen tenants discriminated against Black and Hispanic rental applicants.
Last year, the EEOC published a technical assistance document that detailed how the Americans with Disabilities Act (ADA) applies to the use of software and algorithms, including AI, to make employment-related decisions about job applicants and employees.
The FTC published a report last June warning about harms that could come from AI platforms, including inaccuracy, bias, discrimination, and “commercial surveillance creep.”
In prepared remarks during the interagency announcement, Chopra cited the potential harm AI systems could pose in the mortgage space.
“While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening,” Chopra said. “Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80% more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds.”