Meta Oversight Board Calls for Deepfake Policy Updates
Meta's Oversight Board has recommended updating the company's policies on non-consensual deepfake images, highlighting the need for clearer wording and a more robust response to reported cases.
Key Findings and Recommendations
The quasi-independent Oversight Board, set up by Meta in 2020, reviewed two cases involving AI-generated explicit depictions of famous women, one Indian and one American, describing each only as a “female public figure.” The board criticized Meta’s handling of these cases, noting failures and delays in removing the offending content.
Case Involving Indian Woman
In one case, an AI-manipulated image depicting a nude woman resembling an Indian public figure was posted on Instagram. Although it was reported as pornography, the image remained online because the report was not reviewed within the 48-hour deadline and was automatically closed. A subsequent appeal was also automatically closed, and only after the user escalated the case to the Oversight Board did Meta acknowledge the error and remove the image. Meta also disabled the account that posted the image and added it to a database used to automatically detect and remove similar violations.
Case Involving American Woman
The second case involved an AI-generated image of a nude American woman being groped, posted to a Facebook group. The image was removed automatically because it already matched an entry in Meta's database. A user appealed the takedown to the board, which upheld Meta's decision to remove the image.
Policy and Database Concerns
The Oversight Board found that Meta’s policies on “derogatory sexualized photoshop” under its bullying and harassment policy were not clear to users. It recommended replacing the term “derogatory” with “non-consensual” and specifying that the rule covers a broad range of editing and media manipulation techniques beyond just “photoshop.” Additionally, the board suggested that deepfake nude images should be categorized under community standards on “adult sexual exploitation” instead of “bullying and harassment.”
The board also raised concerns about Meta’s reliance on media reports to populate its image database, highlighting the issue that many victims of deepfake intimate images are not public figures and may struggle to report violations.
Impact of Auto-Closing Appeals
The board criticized Meta's practice of automatically closing reports and appeals related to image-based sexual abuse when they are not reviewed within 48 hours, warning that the practice could have significant human rights implications.
Meta’s Response
Meta stated it welcomed the board’s recommendations and is reviewing them. The company has committed to enhancing its policies and response mechanisms to better address the issues related to non-consensual deepfake images.