Two lawmakers have proposed a bipartisan bill, titled the AI Fraud Deterrence Act, that aims to address the growing misuse of artificial intelligence in scams, particularly deepfake schemes targeting federal officials.
The legislation would modernize existing U.S. fraud laws to account for the role AI has played in recent high-profile incidents in which AI-generated audio or video deceived officials or the public.
Details of the Bipartisan Bill
The bill, proposed by Representatives Ted Lieu (D-Calif.) and Neal Dunn (R-Fla.), seeks to expand penalties for AI scams and criminalize the impersonation of federal officials using AI. Both lawmakers have expressed concerns about the ease and audacity with which criminals now employ AI in deceptive practices.
"As AI technology advances at a rapid pace, our laws must keep up," Dunn stated in an announcement for the bill. "The AI Fraud Deterrence Act strengthens penalties for crimes related to fraud committed with the help of AI. I am proud to co-lead this legislation to protect the identities of the public and prevent misuse of this innovative technology."
Lieu echoed these sentiments, telling NBC News that the majority of Americans desire "sensible guardrails on AI," as they do not believe a "complete Wild West is helpful."
Under the proposed law, the maximum fine for fraud would double from $1 million to $2 million in cases where AI is deliberately used to facilitate the crime. The bill would also fold AI-mediated deception into the statutory definitions of mail fraud and wire fraud, allowing prosecutors to charge anyone who uses AI to commit either offense.
In AI-facilitated cases, offenders could face fines of up to $2 million, along with prison terms of up to 20 years for mail fraud and up to 30 years for wire fraud.
The draft legislation also specifically addresses the impersonation of federal officials through AI deepfakes, citing incidents earlier this year in which AI was used in attempts to mimic White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio.
While fraud has existed throughout human history, experts note that AI has significantly enhanced the quality of fraudulent outputs.
In December, the FBI issued a warning stating that "generative AI reduces the time and effort criminals must expend to deceive their targets," and that AI "can correct for human errors that might otherwise serve as warning signs for fraud."
Maura R. Grossman, a research professor of computer science at the University of Waterloo in Ontario and a lawyer, also believes AI ushers in a new era of deception. She noted, "AI presents a scale, a scope, and a speed for fraud that is very, very different from frauds in the past."
Advancements in AI Models and Detection Challenges
Beyond AI's role in facilitating crime, a significant concern is that institutions such as the courts are struggling to keep pace with the rapid development of the AI sector.
"AI years are dog years," remarked Hany Farid, a professor of computer science at the University of California, Berkeley, and co-founder of GetReal Security, regarding the speed of AI progress.
In the past, distinguishing AI-generated content from real content was relatively straightforward, especially for images. Today's models, however, have become so advanced that even experienced users find it difficult to tell whether a piece of media is authentic or generated.
The FBI's December warning advised individuals to identify AI-generated media by checking for discrepancies, suggesting they "Look for subtle imperfections in images and videos, such as distorted hands or feet."
However, Farid disputes this advice, considering it outdated and potentially harmful. He stated, "The multiple hands trick, that’s not true anymore. You can’t look for hands or feet. None of that stuff works."
Lieu and Dunn's proposed bill emphasizes the importance of labeling AI-generated content. It also acknowledges that such media has legitimate uses: content clearly labeled as satire, for instance, signals that it is not authentic and would be exempt from punishment.

