Meta’s Oversight Board Says Deepfake Policy Needs Update, Response to Sensitive Images Is Limited
Meta’s oversight board said Thursday that the company’s rules on non-consensual deepfakes need to be updated, calling their wording “insufficiently clear,” in a decision on cases involving explicit AI-generated images of two high-profile women.
The semi-independent oversight board said that in one case, the social media giant did not remove a deepfake intimate image of a prominent Indian woman, whose identity it did not disclose, until the board itself got involved.
Deepfake nude images of women, including celebrities such as Taylor Swift, have proliferated on social media as the technology used to create them has become more accessible and easier to use. Online platforms are facing pressure to do more to tackle the problem.
The board, which Meta set up in 2020 to act as an arbiter of content on its platforms, including Facebook and Instagram, spent months reviewing two cases involving AI-generated images depicting famous women, one Indian and one American. The panel did not identify either woman, describing each only as “a female public figure.”
Meta said it welcomed the board’s recommendations and was considering them.
One case involved an “AI-edited image” posted on Instagram depicting a naked Indian woman from behind with her face visible, resembling “a female public figure.” The board said a user reported the image as pornographic, but the report was not reviewed within 48 hours, so it was automatically closed. The user filed an appeal with Meta, but the appeal was also automatically closed.
It wasn’t until the user appealed to the Oversight Board that Meta decided that the initial decision not to remove the post was a mistake.
Meta also disabled the account that posted the images and added them to a database used to automatically detect and remove images that violate the rules.
In the second case, an AI-generated image of a naked American woman being groped was posted to a Facebook group. It was automatically removed because it was already in the database. A user appealed the takedown to the board, but Meta’s decision was upheld.
The board said both images violated Meta’s ban on “pornographic photoshopping” under its bullying and harassment policy.
However, the board said the wording of the policy is unclear to users and recommended replacing the word “offensive” with a term such as “non-consensual,” and clarifying that the rule covers a wide range of media editing and manipulation techniques that go beyond “photoshopping.”
The board also said deepfake nude images should fall under the community standard on “adult sexual exploitation” rather than “bullying and harassment.”
When the board asked Meta why the image of the Indian woman was not already in its database, it was alarmed by the company’s response that it had relied on media reports to identify such images.
“This is concerning because many victims of deepfake intimate images are not public figures and are forced to either accept the spread of non-consensual images of themselves or search for and report every instance,” the board said.
The board also said it was concerned about Meta’s practice of automatically closing appeals involving sexual abuse imagery after 48 hours, saying this “could have significant human rights implications.”
Meta, then known as Facebook, created the Oversight Board in 2020 in response to criticism that it was not moving quickly enough to remove misinformation, hate speech and influence campaigns from its platform. The Oversight Board has 21 members, a multinational group that includes legal scholars, human rights experts and journalists.