AI-Generated Video of Kamala Harris Raises Concerns Before Election
A manipulated video imitating the voice of Vice President Kamala Harris has sparked concerns about the power of artificial intelligence to mislead, especially with Election Day just three months away.
Video Details and Initial Sharing
The video, shared by tech billionaire Elon Musk on his social media platform X, mimics Harris's voice and puts false statements in her mouth. Although the video was originally released as a parody, Musk's post did not make that context clear.
Using visuals from a real campaign ad Harris released last week, the video swaps out the original voice-over with an AI-generated voice convincingly impersonating Harris. “I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the voice says in the video. The fake voice claims Harris is a "diversity hire" and criticizes her ability to run the country. The video retains "Harris for President" branding and includes authentic past clips of Harris.
Response from Harris Campaign
Mia Ehrenberg, a Harris campaign spokesperson, responded via email to The Associated Press: “We believe the American people want the real freedom, opportunity and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump.”
Broader Implications of AI in Politics
The video highlights the growing use of AI-generated images, videos, and audio clips to mislead in politics. As high-quality AI tools become more accessible, there is a lack of significant federal regulation, leaving states and social media platforms to set their own rules.
The original creator, YouTuber Mr. Reagan, disclosed that the video is a parody. However, Musk's post, which was viewed more than 123 million times and captioned simply "This is amazing" with a laughing emoji, did not direct users to that disclosure.
Platform Policies and Community Response
X users familiar with the platform might click through to see the original disclosure, but Musk’s post does not guide them to do so. Some participants in X’s "community note" feature have suggested labeling Musk’s post, but no label had been added as of Sunday.
X’s policy states users “may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” There is an exception for memes and satire as long as they do not cause “significant confusion about the authenticity of the media.”
Expert Opinions
Two experts in AI-generated media confirmed that much of the video's audio was generated using AI technology. Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video demonstrates the power of generative AI and deepfakes, and emphasized that generative AI companies should ensure their tools are not used harmfully. "The AI-generated voice is very good," he said in an email. "Even though most people won't believe it is VP Harris' voice, the video is that much more powerful when the words are in her voice."
Rob Weissman, co-president of the advocacy group Public Citizen, argued that many people could be fooled by the video. He stressed the need for Congress, federal agencies, and states to regulate generative AI to prevent such misleading content.
AI and Misinformation in Politics
This incident is not isolated. Similar generative AI deepfakes have appeared in both the U.S. and other countries, aiming to influence voters with misinformation or humor. For instance, fake audio clips in Slovakia impersonated a candidate discussing election rigging just days before the vote, and a satirical ad in Louisiana used AI to superimpose a candidate’s face onto an actor.
Regulatory Landscape
Congress has yet to pass legislation on AI in politics, and federal agencies have taken limited steps. Most regulation is currently left to states, with over one-third enacting laws on AI use in campaigns and elections, according to the National Conference of State Legislatures.
Beyond X, other social media companies have policies on synthetic and manipulated media. YouTube, for example, requires users to disclose when their videos were created with generative AI, and those who fail to do so risk suspension.