The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that's independent of its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes.
The group's recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines.
After the video was reported to Meta, the company declined to remove it or add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI. The board overturned Meta's decision not to add the "high risk" label and says the case shines a light on several areas where the company's current AI rules are falling short.
"Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what's real and fake," the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged "obvious signals of deception."
One of the board's top recommendations is that Meta create a dedicated rule for AI-generated content that's separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content, as well as information about how Meta penalizes those who break the rule.
The board was also highly critical of how Meta uses its current "AI Info" labels, noting that the way they are applied is "neither robust nor comprehensive enough to deal with the scale and speed of AI-generated content," especially in times of conflict or crisis. "A system overly dependent on self-disclosure of AI usage and escalated review (which occurs occasionally) to properly label this output cannot meet the challenges posed in the current environment."
Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was "concerned" about reports that the company is "inconsistently implementing" digital watermarks on AI content created by its own AI tools.
Meta didn't immediately respond to a request for comment on the Oversight Board's decision. The company has 60 days to formally respond to its recommendations.
The decision isn't the first time the board has been critical of Meta's handling of AI content. The group has described the company's manipulated media rules as "incoherent" on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta's reliance on fact checkers and other "trusted partners" was again raised in this case, with the board saying that it had heard from these groups that Meta "is less responsive to outreach and concerns, partly due to a significant reduction in capacities for Meta's internal teams." Meta, the board writes, "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict."
While the Oversight Board's decision pertains to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East. Since the start of the US and Israel's strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a recommendation that would seem to apply to not just Meta.
"The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output," it wrote.
