YouTube Expands AI Likeness Detection as Deepfake Legislation Advances

Image Source: Alicia Shapiro with Canva
As Congress continues shaping deepfake legislation, YouTube is expanding its pilot program designed to help creators and public figures detect and remove AI-generated replicas of themselves. The platform’s move comes as new bills gain momentum in Washington, with a renewed focus on deepfake accountability and non-consensual content protections.
YouTube’s “likeness management technology,” launched last year in partnership with Creative Artists Agency (CAA), now includes top creators such as MrBeast, Mark Rober, and Marques Brownlee. The tool allows public figures to identify AI-generated imitations of their likeness and submit formal takedown requests.
YouTube Backs Revised NO FAKES Act
Senators Chris Coons (D-DE) and Marsha Blackburn (R-TN) have reintroduced the NO FAKES Act (short for Nurture Originals, Foster Art, and Keep Entertainment Safe), aimed at standardizing rules around the use of AI to replicate a person’s face, name, or voice. Previously introduced in 2023 and 2024, the bill now has a major new supporter: YouTube.
In a public statement, YouTube said the bill “focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down.” The company joins other supporters including SAG-AFTRA and the Recording Industry Association of America (RIAA), despite ongoing opposition from civil liberties advocates like the Electronic Frontier Foundation (EFF), who argue the bill remains too broad and risks limiting free expression.
Legal Landscape: Deepfake Protections and Free Speech Tensions
The updated version of the bill clarifies that online platforms such as YouTube won't be held liable for hosting unauthorized AI replicas if they remove them promptly after receiving a valid complaint and notify the uploader. However, this immunity does not extend to platforms that are designed for, or marketed as, deepfake creation tools.
At a press conference, Senator Coons emphasized that the latest version of the bill—referred to informally as a "2.0" update—includes provisions to address free speech concerns and limit platform liability. The balancing act between protecting individual rights and preserving expression remains a point of debate.
YouTube Supports Broader Legislation on AI-Generated Harm
YouTube has also backed the Take It Down Act, which would criminalize the publication of non-consensual intimate imagery—including AI-generated deepfakes—and require social platforms to establish rapid removal processes for such content.
Although the bill is aimed at protecting victims of non-consensual intimate imagery (NCII), it has drawn criticism from civil liberties groups and even some anti-NCII organizations, who cite potential overreach and censorship risks. Despite that opposition, the bill has passed the Senate and recently cleared a House committee.
What This Means
YouTube’s expansion of its AI likeness detection tool signals how platforms are preparing for a future where AI-generated identity misuse could become widespread. By including major creators in the pilot, the company is setting a precedent for how content authenticity and individual control could be managed in the AI era.
At the same time, federal lawmakers are refining their approach to deepfake legislation—aiming to offer clear liability guidelines for platforms while safeguarding civil liberties. The legal conversation is increasingly focused on drawing lines between harmful synthetic content and protected creative expression.
As AI-generated media becomes more convincing and accessible, platforms like YouTube are stepping up with tools and public policy support—but the broader legal framework is still evolving.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.