At a Glance
- AI swarms are a new form of coordinated disinformation that could reshape political campaigns.
- Researchers warn that the technology is already in testing and could influence the 2028 presidential election.
- A proposed AI Influence Observatory would bring together academics and NGOs to monitor and counter the threat.
- Why it matters: The spread of AI-generated narratives could erode trust in social media and democratic processes.
Introduction
AI swarms (large groups of AI chatbots that act as a single coordinated unit) are poised to become a dominant force in online misinformation. A recent study by a team of scholars from BI Norwegian Business School and the American Sunlight Project warns that these systems are already being tested and could be deployed before the 2028 U.S. presidential election.
AI Swarms: A New Disinformation Threat
“We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated,” says Jonas Kunst, professor of communication at BI Norwegian Business School.
Kunst explains that the classic bot model, which relies on simple scripted behavior, is becoming obsolete as platforms tighten detection and users demand more authentic interactions. Instead, researchers envision swarms that can imitate humans so convincingly that they blend into genuine user traffic.
Nina Jankowicz, former Biden administration disinformation czar and current CEO of the American Sunlight Project, describes the scenario: “What if AI wasn’t just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none? That’s the future this paper imagines: Russian troll farms on steroids.”
How Swarms Operate
The paper outlines several technical capabilities:
- Massive scale: Thousands of bots can be deployed simultaneously, each responding to user interactions.
- Adaptive messaging: Bots learn from engagement data, refining their output in real time.
- Micro-A/B testing: “With sufficient signals, they may run millions of micro A/B tests, propagate the winning variants at machine speed, and iterate far faster than humans,” the researchers write.
- Targeted outreach: By mapping social networks at scale, swarms can identify key community nodes and tailor content to local beliefs and cultural cues.
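To make the micro-A/B-testing capability concrete, here is a minimal sketch of the underlying idea, an epsilon-greedy bandit that routes traffic toward the best-performing message variant. The variant names, engagement rates, and parameters are illustrative assumptions, not drawn from the study; real swarms would operate at far larger scale and with richer feedback signals.

```python
import random

def run_ab_bandit(variants, true_ctr, rounds=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: show message variants, observe simulated
    engagement, and progressively shift traffic toward the best performer."""
    rng = random.Random(seed)
    shown = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}
    for _ in range(rounds):
        if rng.random() < epsilon:
            v = rng.choice(variants)  # explore: try a random variant
        else:
            # exploit: pick the variant with the best observed rate so far
            v = max(variants,
                    key=lambda x: clicks[x] / shown[x] if shown[x] else 0.0)
        shown[v] += 1
        if rng.random() < true_ctr[v]:  # simulated user engagement
            clicks[v] += 1
    return shown

# Hypothetical message framings with hidden engagement rates
variants = ["framing_a", "framing_b", "framing_c"]
true_ctr = {"framing_a": 0.02, "framing_b": 0.05, "framing_c": 0.03}
traffic = run_ab_bandit(variants, true_ctr)
```

After a few thousand rounds the loop concentrates impressions on the variant with the highest engagement, which is the iterate-at-machine-speed dynamic the researchers describe, compressed into a toy simulation.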
Kunst notes that detection is difficult: “Because of their elusive features to mimic humans, it’s very hard to actually detect them and to assess to what extent they are present.” He adds, “We lack access to most social-media platforms because platforms have become increasingly restrictive, so it’s difficult to get an insight there. Technically, it’s definitely possible. We are pretty sure that it’s being tested.”
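One way to see why human-mimicking swarms are hard to catch: a common classic-bot signal is near-duplicate text posted across many accounts. The sketch below, an illustrative heuristic and not a method from the study, flags copy-paste bots via token overlap, while a swarm that paraphrases each message would slip past the same check.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two posts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def flag_near_duplicates(posts, threshold=0.8):
    """Flag pairs of posts whose token overlap meets the threshold.
    Catches copy-paste bots; paraphrased swarm output goes undetected."""
    flagged = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if jaccard(posts[i], posts[j]) >= threshold:
                flagged.append((i, j))
    return flagged

posts = [
    "Candidate X will ruin the economy, share this now",
    "Candidate X will ruin the economy, share this now!",      # copy-paste bot
    "Honestly worried about what Candidate X means for jobs",  # paraphrase
]
pairs = flag_near_duplicates(posts)  # only the copy-paste pair is flagged
```

The paraphrased third post carries the same narrative but shares almost no tokens with the first two, which is exactly the evasion property Kunst is pointing at.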
Potential Impact on Elections
While the researchers predict that swarms may not have a massive impact on the 2026 U.S. midterms in November, they foresee a significant effect on the 2028 presidential election. The ability to craft persuasive, locally resonant narratives could shift public opinion with minimal human oversight.
Kunst cautions that if swarms become ubiquitous, trust in social media could erode: “Let’s say AI swarms become so frequent that you can’t trust anybody and people leave the platform.”
He explains the economic incentive for platforms: “Of course, then it threatens the model. If they just increase engagement, for a platform it’s better to not reveal this, because it seems like there’s more engagement, more ads being seen, that would be positive for the valuation of a certain company.”
The study also highlights that the sheer volume of content produced by swarms could overwhelm current moderation tools, leading to a backlog of flagged posts. This delay may allow false narratives to spread unchecked for days or weeks, amplifying their influence before correction mechanisms can intervene.
Countermeasures: The AI Influence Observatory
To address the threat, the researchers propose an AI Influence Observatory. The body would consist of academic groups and non-governmental organizations, aiming to:
- Standardize evidence collection.
- Improve situational awareness.
- Enable faster collective response rather than relying on top-down penalties.
Executives from social-media platforms are excluded from the proposal because the researchers believe those companies prioritize engagement over detection. “They have little incentive to identify these swarms,” says Kunst.
The observatory would rely on open-source intelligence and academic research to build a shared repository of tactics. By pooling resources, stakeholders could identify emerging patterns faster than any single organization could alone.
Challenges and Limitations
- Access restrictions: Limited data from platforms hampers research and detection.
- Technical complexity: Building truly human-like bots requires advanced AI that is still in development.
- Economic incentives: Platforms may resist transparency if it harms ad revenue.
- Political will: Without international cooperation, observatories may struggle to enforce standards.
Without cooperation from platform owners, the observatory would face a data vacuum, limiting its ability to detect swarms early. The study stresses that transparency from social media firms is essential for any meaningful counter-measure.
Key Takeaways
| Timeline | Event |
|---|---|
| 2026 U.S. midterms (November) | Swarms unlikely to have massive impact, but early testing likely. |
| 2028 U.S. presidential election | Potentially significant influence from AI swarms. |
| Immediate | Establishing an AI Influence Observatory could provide early warning and coordinated responses. |
Time is a critical factor. The window between the first detection of AI swarms and their deployment at scale is likely to be short. Policymakers, civil society, and tech companies must act quickly to prevent a shift in how political persuasion is conducted online.
Conclusion
The study underscores that the next wave of online manipulation will be powered by AI, not by human operatives. Without a concerted effort to detect, monitor, and counter these swarms, the integrity of political conversations on social media could be at risk. The proposed observatory offers a framework, but its success will depend on cross-sector collaboration and political commitment.

