
Microsoft Urges Legal Action Against AI-Generated Deepfake Content

[Illustration: a courtroom with a judge's gavel and balance scales, overlaid with holographic imagery representing AI-generated deepfakes, symbolizing new legal protections for vulnerable groups such as seniors and children.]

Microsoft is urging Congress to regulate AI-generated deepfakes to protect against fraud, abuse, and manipulation. Vice Chair and President Brad Smith emphasizes the need for swift action to safeguard elections and to protect vulnerable groups, such as seniors and children, from AI-driven scams and abuse.

Comprehensive Deepfake Fraud Statute

Smith advocates for a “deepfake fraud statute” that would provide law enforcement with the tools to prosecute AI-generated scams and fraud. He also calls for updating laws on child sexual exploitation and non-consensual intimate imagery to include AI-generated content.

“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith. “One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”

Legislative Efforts and Industry Responsibilities

Recently, the Senate passed a bill allowing victims of non-consensual sexually explicit AI deepfakes to sue the people who created them. This legislation follows incidents involving explicit deepfakes of female students and of celebrities such as Taylor Swift. Microsoft has implemented additional safety controls in its AI products to prevent misuse.

Deepfake Labeling and Provenance Tooling

Microsoft wants posts containing deepfakes to be clearly labeled. Smith suggests that Congress mandate the use of provenance tooling to label synthetic content, helping the public distinguish between real and AI-generated media.

The Role of the Private Sector

The private sector must innovate and implement safeguards against AI misuse. Microsoft outlines a comprehensive approach to combat abusive AI-generated content, focusing on safety architecture, media provenance, service safeguarding, industry collaboration, modernized legislation, and public awareness.

Call to Action

Smith stresses the importance of quick and decisive action from both the public and private sectors to address the challenges posed by AI-generated content. Microsoft has published a 42-page report with policy recommendations to combat these issues, emphasizing that the greatest risk is inaction. The full recommendations are available in Microsoft's blog post and accompanying report.