This week, the Federal Election Commission (FEC) released a draft of a proposed "interpretive rule" in response to Public Citizen's petition for rulemaking on artificial intelligence and campaign ads.
Robert Weissman, co-president of Public Citizen, issued the following statement in response:
“With deepfakes impacting elections around the world and increasingly popping up in the U.S. election cycle, the FEC should be working actively to deter deceptive political deepfakes. Instead, the FEC is prepared to punt on a simple request to issue a rule clarifying that existing law prohibits the use of fraudulent deepfakes.
“The FEC’s ‘fraudulent misrepresentation’ authority applies exactly to deepfakes that show candidates doing or saying things they did not, as Public Citizen explained in detail in comments to the FEC. By falsely putting words into another candidate’s mouth, or showing the candidate taking actions they did not take, the deceptive deepfaker fraudulently speaks or acts ‘for’ that candidate in a way deliberately intended to damage him or her. This is precisely what the statute aims to proscribe.
“Equipped with this authority, the FEC should issue a rule to send a clear message to all candidates and campaigns that deceptive and fraudulent deepfakes violate the law.
“But the anemic FEC seems to have forgotten its purpose and mission, or perhaps its spine. The FEC’s new proposed ‘interpretive rule’ simply says that the fraudulent misrepresentation law applies no matter what technology is used. That resolves a question that was never in doubt.
“All that said, political deepfakes distributed by candidates and campaigns do violate the statutory prohibition on fraudulent misrepresentation. And the compromise proposal from the FEC at least leaves the question open. When appropriate cases arise, Public Citizen will ask the FEC to do what it should already have done: find them in violation of the law.”
This post was originally published on Common Dreams.