Meta under fire as fake AI war video gains over 700K views

HIGHLIGHTS

Meta’s Oversight Board criticised the platform for not clearly labelling a viral AI-generated video.

The fake AI-generated video gained over 700,000 views online.

The board asked Meta to create clearer rules and better tools for AI content.


Meta is under fresh scrutiny over how it handles AI-generated content. The company’s independent Oversight Board has rebuked the social media giant, saying it should develop a dedicated policy for AI content. The recommendation came after a fake video made with AI went viral, falsely showing damaged buildings in Haifa during the 2025 Israel-Iran conflict. The clip gained more than 700,000 views before the board intervened. According to the decision, Meta failed to add a clear warning label or take stronger action even after the content was flagged. The board says the case exposes gaps in Meta’s current policies and underscores the growing challenge of misleading AI content spreading rapidly across social media platforms.


The AI-generated video was posted on Meta’s platform by an account presenting itself as a news outlet. Investigations, however, found that the page was actually run by an individual in the Philippines. Meta decided not to remove the video and declined to apply its ‘high risk’ AI label even after the clip was reported. The ‘high risk’ label is meant to warn users when content has been created or altered using artificial intelligence.


However, Meta’s Oversight Board later overturned that decision, saying the company should have clearly labelled the video. It also pointed to ‘obvious signals of deception’ linked to the account. After the board raised these concerns, Meta disabled three accounts connected to the page.

In its latest recommendations, the Oversight Board urged Meta to create a separate rule specifically for AI-generated content instead of treating it under the broader misinformation policy. According to the board, a dedicated rule should clearly state when users must disclose that content is AI-generated and what penalties they face if they fail to do so.


Meta’s current labelling system, widely known as ‘AI Info’, has also drawn criticism from the Oversight Board. The independent panel said the system relies too heavily on users voluntarily disclosing when they use AI tools. Because such disclosures are rare, the board warned that the approach is not robust enough to deal with the rapid spread of AI media, especially during conflicts or crises.


The board has also urged Meta to invest more in tools that can automatically detect AI-generated images, audio, and video. It further raised concerns that digital watermarks for content created with Meta’s own AI tools are not applied consistently.

Meta is yet to respond to the ruling by its Oversight Board. The company has 60 days to respond to the board’s recommendations.

Bhaskar Sharma

Bhaskar is a senior copy editor at Digit India, where he simplifies complex tech topics across iOS, Android, macOS, Windows, and emerging consumer tech. His work has appeared in iGeeksBlog, GuidingTech, and other publications, and he previously served as an assistant editor at TechBloat and TechReloaded. A B.Tech graduate and full-time tech writer, he is known for clear, practical guides and explainers.
