MENLO PARK, Calif. — Meta has told its Oversight Board that the company relies on “media reports” when deciding to add images to its permanent database of banned content for its platforms, including Instagram and Facebook.
The disclosure came in a statement issued this week by the Oversight Board, which criticized Meta for its inconsistent handling of what the board calls “explicit AI images,” commonly known as deepfakes.
The context of the statement was the company’s present and future handling of deepfakes, one of several types of images — both legal and illegal — that Meta flags as violating its platforms’ terms of service.
In responding to the board’s questions about two specific deepfake cases, involving an Indian public figure and an American public figure respectively, Meta acknowledged the practice of checking explicit images against a Media Matching Service (MMS) bank.
MMS banks “automatically find and remove images that have already been identified by human reviewers as breaking Meta’s rules,” the board explained.
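Meta has not published the internals of its MMS banks, but systems of this kind typically work by computing a compact fingerprint of each human-reviewed violating image and comparing fingerprints of new uploads against the stored set. The following Python sketch is purely illustrative, under that assumption: it uses a simple average hash and a Hamming-distance threshold, and every name in it (MediaMatchingBank, average_hash, the threshold value) is hypothetical rather than anything Meta has described.

```python
# Illustrative sketch only: Meta has not disclosed how its MMS banks work.
# This toy "bank" stores perceptual-hash fingerprints of images a human
# reviewer has flagged, then checks new uploads against them.

from PIL import Image

HASH_SIZE = 8        # 8x8 grayscale grid -> 64-bit fingerprint
MATCH_THRESHOLD = 5  # max differing bits to count as a match (arbitrary)

def average_hash(path: str) -> int:
    """64-bit average hash: grayscale, downscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

class MediaMatchingBank:
    """Hypothetical stand-in for an MMS bank of banned-image fingerprints."""

    def __init__(self) -> None:
        self._fingerprints: set[int] = set()

    def add(self, path: str) -> None:
        """Bank an image that a human reviewer identified as violating."""
        self._fingerprints.add(average_hash(path))

    def matches(self, path: str) -> bool:
        """True if the image is within the Hamming threshold of any banked one."""
        candidate = average_hash(path)
        return any(
            bin(candidate ^ banned).count("1") <= MATCH_THRESHOLD
            for banned in self._fingerprints
        )

# Usage (hypothetical file names):
# bank = MediaMatchingBank()
# bank.add("reviewed_violation.jpg")          # added after human review
# if bank.matches("new_upload.jpg"):
#     print("automatically removed: matches banked content")
```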
When the board noted that the image resembling an Indian public figure “was not added to an MMS bank by Meta until the Board asked why,” Meta responded by saying that it “relied on media reports to add the image resembling the American public figure to the bank, but there were no such media signals in the first case.”
According to the board, this is worrying because “many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance. One of the existing signals of lack of consent under the Adult Sexual Exploitation policy is media reports of leaks of non-consensual intimate images. This can be useful when posts involve public figures but is not helpful for private individuals. Therefore, Meta should not be over-reliant on this signal.”
The board also suggested that “context indicating the nude or sexualized aspects of the content are AI-generated, photoshopped or otherwise manipulated be considered as a signal of non-consent.”
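Read together, these points amount to a change in how signals of non-consent would be combined. As a purely hypothetical illustration of that logic, since Meta’s actual enforcement pipeline is not public, any one of several independent signals might be treated as sufficient evidence that depicted sexual content is non-consensual:

```python
# Hypothetical illustration of the signal logic the board describes;
# Meta's actual enforcement pipeline is not public, and these field
# names are invented for the example.

from dataclasses import dataclass

@dataclass
class PostSignals:
    media_reports_of_leak: bool        # existing signal: press coverage of a leak
    ai_generated_or_manipulated: bool  # the signal the board wants added

def lacks_consent(signals: PostSignals) -> bool:
    """Any single signal suffices. Relying on media reports alone would
    miss private individuals, which is the board's core objection."""
    return signals.media_reports_of_leak or signals.ai_generated_or_manipulated

# A private individual's deepfake with no press coverage:
post = PostSignals(media_reports_of_leak=False, ai_generated_or_manipulated=True)
assert lacks_consent(post)  # flagged under the board's proposed signal
```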
Meta has been repeatedly challenged by sex workers, adult performers and many others to shed light on its widespread shadow-banning policies and practices, but access to the specifics of those processes has been scant. Meta’s answer to its own Oversight Board is a rare instance of lifting the veil of secrecy about its arbitrary and often-confusing moderation practices.
As XBIZ reported, the Oversight Board has previously criticized Meta for its policies regarding content it considers sexual, although its recommendations do not appear to have had a meaningful impact on the still-opaque moderation practices.
The Oversight Board made nonbinding recommendations that Meta add the prohibition on “derogatory sexualized photoshop” to its Adult Sexual Exploitation Community Standard, change the word “derogatory” in that prohibition to “non-consensual,” and replace the word “photoshop” with a more generalized term for manipulated media. It also urged Meta, more generally, to “harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated.”
The board also recommended that AI-generated or -manipulated nonconsensual sexual content should not need to be “non-commercial or produced in a private setting” to be in violation of Meta’s terms of service.