A bogus video of President Biden reinstating the draft and sending America’s finest to aid Ukraine’s war effort.
Fake photos of former president Donald Trump being arrested in New York.
An ad rolled out by the Republican National Committee composed entirely of synthetic media.
As AI image generators and other tools have proliferated, the technology has quickly become an instrument of political messaging, mischief and misinformation. Meanwhile, the technology’s rapid development has outpaced U.S. regulation.
Some in Congress say that’s a problem. On Tuesday, Rep. Yvette D. Clarke (D-N.Y.) introduced legislation that would require disclosure of AI-generated content in political ads — part of an effort, she said, to “get the Congress going on addressing many of the challenges that we’re facing with AI.”
“Our current laws don’t begin to scratch the surface with respect to protecting the American people from what the rapid deployment of AI can mean in disrupting society,” Clarke said in an interview.
The immediate impetus for her bill, she said, was an ad released last week by the RNC that used AI-generated images to paint a dystopian picture of a potential second term for Biden. Designed to answer Biden’s announcement that he was running for reelection, the 30-second spot included fake visuals of China invading Taiwan and immigrants overwhelming the Southern border, among other scenarios. The ad included a disclaimer in the upper left corner that read, “Built entirely with AI imagery.”
Clarke said that disclosure made the RNC ad a model, in a certain sense, of the transparent deployment of AI. But not all actors will follow that example, she warned, and the consequences could be dire during the 2024 presidential campaign.
“There will be those who will not want to disclose that it’s AI-generated, and we want to protect against that, particularly when we look at the political season before us,” Clarke said.
Clarke’s bill would amend federal campaign finance law to require that political ads include a statement disclosing any use of AI-generated imagery. The Federal Election Commission recently tightened rules on sponsorship disclaimers for digital ads, making clear that the requirement to disclose who paid for ads promoted on websites also applies to advertising on other platforms, such as social media and streaming sites.
Additional reform is made necessary by “revolutionary innovations” in AI technology, as well as the potential “use of generative AI that harms our democracy,” according to Clarke’s bill.
Lawmakers have not acted with urgency on similar measures aimed at curbing AI applications, including facial recognition technology. Those bills have stalled amid broader congressional gridlock that has also thwarted privacy and ad transparency proposals, though Senate Majority Leader Charles E. Schumer (D-N.Y.) recently put forward a framework for AI regulation that could galvanize action.
Last year, Clarke was among several members of Congress behind a proposed measure limiting law enforcement’s use of that technology, but it never moved beyond the two committees to which it was referred. Variations of the legislation have been introduced for years. One version, banning the government from deploying facial recognition technology and other biometric techniques, was recently reintroduced.
Some lawmakers have turned to creative strategies to build momentum for congressional action. Earlier this year, Rep. Ted Lieu (D-Calif.) introduced a measure calling on Congress to regulate the development and deployment of AI. To underline the power of the technology, Lieu created the resolution using the AI language model ChatGPT.
Humans wrote Clarke’s bill. And while she has neither deployed AI-generated images nor been duped by them, she said, the need for controls is evident.
“I think there are really important uses of AI, but there have to be some rules to the road, so that the American people are not deceived or put into harm’s way,” she said.