WASHINGTON – The Federal Trade Commission (FTC) published guidance to businesses Monday stating definitively that a business using a discriminatory algorithm to make automated decisions may be violating federal law. The U.S. Senate Commerce Committee is conducting an FTC oversight hearing Tuesday. The following is a statement from David Brody, senior counsel and senior fellow for Privacy and Technology at the Lawyers’ Committee for Civil Rights Under Law:
“This is a big and positive step forward by the FTC. We need the FTC and other federal agencies to take enforcement actions to stop discriminatory uses of artificial intelligence that deny equal opportunity to jobs, housing, education, or banking. Machine-learning algorithms are not blind or neutral. If the societal data used to train a model is the product of generations of unjust discrimination, that model will learn to repeat and amplify discrimination.”
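To make the mechanism Brody describes concrete, here is a minimal, hypothetical sketch (all data synthetic, all names invented; it assumes NumPy and scikit-learn) showing that a model trained on historically biased labels reproduces the disparity even when the protected attribute itself is withheld from training:

```python
# Hypothetical illustration: a model trained on historically biased labels
# reproduces that bias, even without seeing the protected attribute directly.
# All data is synthetic; feature names and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A protected attribute (0/1) and a correlated proxy feature (e.g., ZIP code).
group = rng.integers(0, 2, size=n)
zip_proxy = group + rng.normal(0, 0.5, size=n)   # proxy leaks group membership
skill = rng.normal(0, 1, size=n)                 # genuinely decision-relevant signal

# Biased historical labels: past decisions favored group 1 regardless of skill.
hist_label = (skill + 1.0 * group + rng.normal(0, 1, size=n) > 0.5).astype(int)

# Train WITHOUT the protected attribute -- only the skill signal and the proxy.
X = np.column_stack([skill, zip_proxy])
model = LogisticRegression().fit(X, hist_label)
pred = model.predict(X)

# The model's selection rate still differs sharply by group.
for g in (0, 1):
    print(f"group {g}: selection rate = {pred[group == g].mean():.2f}")
```

Dropping the protected column is not enough: the proxy feature carries the group signal, so the model learns the historical disparity anyway.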
Background:
The FTC highlighted how superficially ‘neutral’ AI can produce and reinforce discrimination on the basis of protected characteristics such as race, religion, or sex. The FTC described such AI models as “digital redlining” and said the agency “can challenge the use of that model as unfair.” The FTC also said that an AI developer could be liable for deceptive trade practices if they build their product with data that lacks racial or gender diversity, yet claim it produces unbiased outcomes. “The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms,” the FTC wrote.
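One common screen for the kind of disparate outcome the FTC describes is the “four-fifths rule.” The sketch below is hypothetical: the 80% threshold comes from EEOC employment-selection guidance, not from the FTC post, and the counts and group names are invented for illustration:

```python
# Hypothetical disparate-impact screen using the "four-fifths rule":
# flag a model if any group's selection rate falls below 80% of the
# highest group's rate. The 80% threshold comes from EEOC employment
# guidance; the FTC post does not prescribe a specific test.
def adverse_impact_ratios(selected_by_group: dict[str, int],
                          total_by_group: dict[str, int]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selected_by_group[g] / total_by_group[g] for g in total_by_group}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented example counts.
selected = {"group_a": 120, "group_b": 60}
totals = {"group_a": 400, "group_b": 400}
for g, ratio in adverse_impact_ratios(selected, totals).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{g}: impact ratio = {ratio:.2f} [{flag}]")
```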