- X now allows AI bots to write Community Notes, pending approval from diverse human reviewers
- AI contributions must earn trust over time and are clearly labeled for transparency
- The move aims to boost fact-checking scale while keeping humans in control of what’s published
It didn’t begin with a spark of human insight or a call for clarity. It began inside a neural network. In a move that could reshape how public discourse is moderated, X, formerly Twitter, has begun allowing artificial intelligence bots to write Community Notes, its crowdsourced fact-checking system. These AI-generated notes may soon appear publicly alongside human-written ones, but only if they’re deemed helpful by users with diverse perspectives.
• X introduces AI-generated Community Notes
• Notes must be approved by users with differing viewpoints
• AI notes will be clearly labeled for transparency
Unlike typical bots that flood timelines with spam or mimic personalities, these AI “Note Writers” must earn their place. They are subject to a performance-based system in which their privileges rise or fall depending on how helpful human users rate their notes. This isn’t a free-for-all: bots begin in a monitored “test mode” and only graduate to public-facing roles if they pass real-world usefulness benchmarks. The first class of AI contributors is expected to go live later this month.
• AI bots must prove usefulness to earn note-writing privileges
• Performance metrics determine access and visibility
• Rollout begins with a pilot group of AI bots this month
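The promotion-and-demotion mechanic described above can be pictured as a simple state machine. This is an illustrative sketch only: the class name, thresholds, and rating mechanics are assumptions for the sake of the example, not X’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AINoteWriter:
    """Hypothetical model of an AI Note Writer's privilege lifecycle."""
    name: str
    mode: str = "test"  # every bot starts in monitored test mode
    ratings: list = field(default_factory=list)  # 1 = helpful, 0 = not helpful

    def record_rating(self, helpful: bool) -> None:
        self.ratings.append(1 if helpful else 0)
        self._update_mode()

    def _helpfulness(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def _update_mode(self) -> None:
        # Assumed benchmarks: enough ratings plus a high helpfulness rate
        # promote the bot to public-facing status; a sustained low rate
        # demotes it back to test mode.
        if len(self.ratings) >= 20 and self._helpfulness() >= 0.8:
            self.mode = "public"
        elif self.mode == "public" and self._helpfulness() < 0.5:
            self.mode = "test"
```

The key design point is that visibility is earned and revocable: privileges depend entirely on a running record of human ratings, not on the bot’s identity.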
Crucially, humans aren’t ceding control. AI contributions are filtered through the same democratic mechanism that governs all Community Notes: cross-perspective validation. Only notes considered helpful by people from differing ideological standpoints make it to the surface. It’s a safeguard that positions AI as a tool, not a final arbiter, in the pursuit of contextual truth.
• Human oversight remains central to the system
• Notes must meet multi-perspective approval
• AI is treated as a collaborative assistant, not a replacement
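The cross-perspective gate can be sketched as a function that surfaces a note only when raters from at least two differing viewpoint groups independently judge it helpful. The group labels and threshold here are assumptions for illustration; X’s production Community Notes ranking uses a more sophisticated bridging-based algorithm.

```python
def note_surfaces(ratings, min_rate=0.6):
    """Return True if a note clears a hypothetical cross-perspective check.

    ratings: list of (viewpoint_group, helpful) tuples, where helpful is bool.
    """
    by_group = {}
    for group, helpful in ratings:
        by_group.setdefault(group, []).append(helpful)
    # Agreement must span at least two distinct perspectives.
    if len(by_group) < 2:
        return False
    # Every represented group must independently find the note helpful.
    return all(sum(votes) / len(votes) >= min_rate for votes in by_group.values())
```

For example, a note rated helpful by raters in two opposing groups would surface, while one endorsed by only a single group would not, no matter how enthusiastic that group is.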
The underlying intent is scale. With hundreds of Community Notes published daily, X aims to expand its capacity without overburdening its human contributors. AI can generate drafts and spot misinformation faster, giving the platform a powerful ally in the battle against disinformation without sacrificing the human lens of nuance and judgment.
• AI aims to speed up the process of writing notes
• Human bandwidth is supplemented, not replaced
• Scale is achieved without compromising oversight
As the lines blur between human insight and machine intelligence, X finds itself navigating uncharted territory. The experiment may redefine trust on the platform, but for now, the message is clear: the future of fact-checking is part machine, but still human at heart.
• X tests the boundaries of AI in community moderation
• The initiative could reshape how truth is surfaced on social media
• Collaboration between AI and users sets a new standard for digital trust