FTC Refers Complaint Against Snap’s AI Chatbot My AI to DOJ
Image Source: ChatGPT-4o
The Federal Trade Commission (FTC) has referred a complaint regarding Snapchat’s AI-powered chatbot, My AI, to the Department of Justice (DOJ), citing potential risks to young users.
The FTC stated on Jan. 16 that its investigation gave it reason to believe Snap has violated, or is about to violate, the law. The agency described the referral as being in the public interest, a rare move for such complaints.
Allegations Against Snap
The complaint focuses on Snap’s integration of My AI within its Snapchat app and its alleged risks to younger audiences. According to the FTC’s statement, the commission voted 3-0-2 in a closed-door meeting to refer the case. The two absent commissioners later expressed strong opinions about the referral.
Snap’s Response: Snap denied the allegations, defending its transparency about My AI's capabilities and its efforts to ensure user safety.
“Unfortunately, on the last day of this Administration, a divided FTC decided to vote out a proposed complaint that does not consider any of these efforts, is based on inaccuracies, and lacks concrete evidence,” a Snap spokesperson said.
Snap also criticized the complaint for failing to identify tangible harm, citing First Amendment concerns, and warned that it could stifle innovation.
Internal Division at the FTC
The referral has exposed divisions within the FTC:
Commissioner Andrew N. Ferguson, who was absent from the vote, called the referral an “unusual step” and expressed opposition to the complaint.

Commissioner Melissa Holyoak, also absent, objected to convening the meeting at all, arguing that it diverted resources during the transition to the incoming administration. She criticized the decision as having long-term consequences for the agency, potentially affecting future litigation and policy development.
Snap’s AI Background
My AI, launched in February 2023, was designed to assist Snapchat users but came with explicit warnings from Snap about its limitations. Snap acknowledged that the chatbot could produce inaccurate or misleading responses, describing it as “prone to hallucination” and susceptible to being “tricked into saying just about anything.”
Broader Implications
The referral reflects heightened scrutiny of AI applications, particularly in cases where younger users may be affected. The FTC’s decision could signal stricter oversight of AI technologies, especially as they become integrated into popular platforms.
While the case underscores the need for safety and accountability, some industry experts argue that overly aggressive regulatory action could discourage innovation. Snap’s stance highlights the tension between fostering technological growth and ensuring ethical use of AI, a debate likely to continue as the technology evolves.
What This Means
The outcome of this case could have far-reaching implications for AI governance and the regulatory landscape. It may also set a precedent for how companies navigate the balance between innovation and compliance in developing AI tools.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.