
Lingo Telecom Fined $1M by FCC Over Deepfake Joe Biden Robocall

Image: a deepfake robocall concept — a smartphone showing an incoming call with a distorted face, surrounded by binary code and digital waveforms, with the U.S. Capitol and a gavel in the background.

Image Source: ChatGPT


Lingo Telecom has been fined $1 million by the Federal Communications Commission (FCC) for transmitting a robocall that used a deepfake of President Joe Biden's voice. The call, placed ahead of New Hampshire's Democratic primary in January, urged voters not to participate in the election. The FCC traced the AI-generated calls to political consultant Steve Kramer, who faces a separate $6 million fine.

Enhanced Compliance Measures for Lingo Telecom

As part of the settlement, Lingo Telecom must comply with stricter FCC requirements, including adherence to caller ID authentication protocols and the adoption of "know your customer" practices to accurately identify its clients and partners. The company must also verify information provided by its customers more rigorously. Lingo has yet to issue a public response to the settlement.

FCC's Commitment to Communication Integrity

FCC Chair Jessica Rosenworcel highlighted the need for trust in communication systems, stating, "Every one of us deserves to know that the voice on the line is exactly who they claim to be. If AI is being used, that should be made clear to any consumer, citizen, and voter who encounters it. The FCC will act when trust in our communications networks is on the line."

New FCC Regulations on AI-Generated Content

In response to incidents like the deepfake robocall, the FCC introduced new regulations earlier this year, prohibiting the use of AI-generated voices in robocalls without the explicit consent of recipients. The agency is also considering requirements that would mandate political advertisers to disclose the use of generative AI in broadcast media, aiming to increase transparency and safeguard the public from deceptive practices.