X Imposes Strict AI Disclosure Rules for Military Content: Violators Face 90-Day Monetization Ban
X Cracks Down on AI-Generated Military Content: New Penalties for Creators Amid Global Conflicts
Nikita Bier, Head of Product at X, has announced a significant update to the platform’s Creator Monetization standards. In light of ongoing global conflicts, X is prioritizing information accuracy and moving to prevent the spread of misleading AI-generated war imagery and videos.
Mandatory Disclosure for AI Warfare Content
Under the new regulations, creators who produce AI-generated videos related to military operations, combat, or high-tension geopolitical events must explicitly disclose the use of AI.
The Penalty: If a creator fails to label AI content and is caught by the platform’s detection systems, their monetization privileges will be suspended for 90 days.
Zero Tolerance for Recurrence: Any repeat violations will result in a permanent ban from the X Creator Ad Revenue Sharing program.
Multi-Layered Detection System
X is employing a robust verification process to enforce these rules. Detection is not limited to automated software; the platform is also leaning heavily on Community Notes, X's crowdsourced fact-checking feature, to identify and flag unlabelled synthetic media in near real time.
Over the past year, many creators have used AI to create overly realistic war scenes to boost engagement and earn ad revenue. This policy aims to break the cycle of monetization from "fake panic," which impacts real-world stability.
X's reliance on Community Notes as an enforcement signal is a double-edged sword: it reflects the reality that AI systems detecting other AI output may be less reliable than human reviewers examining the context of an image or video. At the same time, it creates a decentralized fact-checking layer that keeps enforcement visible to users.
It's anticipated that X may also adopt the C2PA (Coalition for Content Provenance and Authenticity) standard, which embeds cryptographically signed provenance metadata in image and video files produced by major AI providers (e.g., OpenAI, Adobe, Google), allowing content origins to be verified automatically. If AI provenance metadata is detected on content that carries no disclosure label, the system could flag it immediately.
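To make the idea concrete, here is a minimal illustrative sketch of the kind of first-pass check a platform could run. This is not X's actual pipeline: real C2PA verification means parsing the JUMBF box structure and validating cryptographic signatures (e.g., with the official C2PA SDK). The sketch below only scans raw file bytes for the `c2pa` manifest-store label and the JUMBF `jumb` box type, a crude presence heuristic invented here for illustration.

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Crude heuristic: report whether the raw bytes contain the
    C2PA manifest-store label ("c2pa") or a JUMBF box type ("jumb").

    A hit only suggests provenance metadata may be present; genuine
    verification requires parsing the JUMBF boxes and validating the
    embedded signatures, which a real pipeline would delegate to a
    proper C2PA implementation.
    """
    return b"c2pa" in data or b"jumb" in data


# Hypothetical usage: flag an upload that embeds provenance metadata
# but was posted without an AI-disclosure label.
def needs_review(file_bytes: bytes, user_disclosed_ai: bool) -> bool:
    return has_c2pa_marker(file_bytes) and not user_disclosed_ai
```

In a real deployment this byte scan would only be a cheap pre-filter; anything it flags would go to full manifest parsing and signature validation before any enforcement action.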
The 90-day monetization suspension is a severe penalty for news influencers and accounts that track world events, and could push some creators toward other platforms or toward far more cautious use of AI tools.
Source: TechCrunch
