Industry leaders urge Senate to protect against AI deepfakes with No Fakes Act

Tech and music industry leaders testified about the dangers of deepfakes made with artificial intelligence on Wednesday, urging lawmakers to pass legislation that would protect people’s voices and likenesses from being replicated without consent while still allowing responsible use of the technology.

Speaking to members of the Senate Judiciary Committee’s panel on privacy, technology, and the law, executives from YouTube and the Recording Industry Association of America, as well as country music singer Martina McBride, championed the bipartisan No Fakes Act, which seeks to create federal protections for artists’ voices, likenesses and images from unauthorized AI-generated deepfakes.

The group argued that Americans across the board — whether teenagers or high-profile music artists — were at risk of their likenesses being misused. The legislation, reintroduced in the Senate last month, would combat deepfakes by holding individuals or companies liable if they produced an unauthorized digital replica of an individual in a performance.

“AI technology is amazing and can be used for so many wonderful purposes,” McBride told the panel. “But like all great technologies, it can also be abused, in this case by stealing people’s voices and likenesses to scare and defraud families, manipulate the images of young girls in ways that are shocking to say the least, impersonate government officials, or make phony recordings posing as artists like me.”

The No Fakes Act would also hold platforms liable if they knew a replica was not authorized, while excluding certain digital replicas from coverage based on First Amendment protections. It would also establish a notice-and-takedown process so victims of unauthorized deepfakes “have an avenue to get online platforms to take down the deepfake,” the bill’s sponsors said last month.

The bill would address the use of non-consensual digital replicas in audiovisual works, images, or sound recordings.

Nearly 400 artists, actors and performers have signed on in support of the legislation, according to the Human Artistry Campaign, which advocates for responsible AI use, including LeAnn Rimes, Bette Midler, Missy Elliott, Scarlett Johansson and Sean Astin.

The testimony comes two days after President Donald Trump signed the Take It Down Act, bipartisan legislation that enacted stricter penalties for the distribution of non-consensual intimate imagery, sometimes called “revenge porn,” as well as deepfakes created by AI.

Mitch Glazier, CEO of the RIAA, said that the No Fakes Act is “the perfect next step to build on” that law.

“It provides a remedy to victims of invasive harms that go beyond the intimate images addressed by that legislation, protecting artists like Martina from non-consensual deepfakes and voice clones that breach the trust she has built with millions of fans,” he said, adding that it “empowers individuals to have unlawful deepfakes removed as soon as a platform is able without requiring anyone to hire lawyers or go to court.”

Suzana Carlos, head of music policy at YouTube, added that the bill would protect the credibility of online content. AI regulation should not penalize companies for providing tools that can be used for permitted and non-permitted uses, she said in written testimony, prior to addressing the subcommittee.

The legislation offers a workable, tech-neutral and comprehensive legal solution, she said, and would streamline global operations for platforms like YouTube while empowering musicians and rights holders to manage their IP. Platforms have a responsibility to address the challenges posed by AI-generated content, she added.

“YouTube largely supports this bill because we see the incredible opportunity of AI, but we also recognize those harms, and we believe that AI needs to be deployed responsibly,” she said.
