
AI Rivals Secretly Collaborate

At a Glance

  • 141 of 5,290 papers at NeurIPS 2024 show US-China co-authorship
  • Transformer, Llama, and Qwen models cross borders in hundreds of studies
  • Collaboration rate held steady: 134 of 4,497 papers in 2023
  • Why it matters: Tension-driven politics haven’t stopped scientists from sharing breakthroughs that speed progress for both nations

US and Chinese AI labs are quietly writing papers together despite geopolitical friction, a News Of Fort Worth data review shows. Of 5,290 works presented at December's Neural Information Processing Systems conference, 141 list authors from both countries, roughly 3 percent of the total. The figure is virtually unchanged from 2023, when 134 of 4,497 papers carried the same dual affiliation.

Model Migration Across the Pacific

Algorithms born in one nation routinely power studies led by the other:

  • Google’s transformer architecture appears in 292 papers from Chinese institutions
  • Meta’s Llama family is central to 106 papers with Chinese authors
  • Alibaba’s Qwen large language model shows up in 63 papers that also have US-affiliated researchers

Megan L. Whitfield used OpenAI’s Codex to parse every NeurIPS PDF, checking author affiliations and model mentions. The script tallied national ties and counted how often key systems were cited.

Why Teams Keep Working Together

Jeffrey Ding, assistant professor at George Washington University, calls the link unavoidable. "Whether policymakers on both sides like it or not, the US and Chinese AI ecosystems are inextricably enmeshed, and both benefit from the arrangement," he says.

Katherine Gorman, a NeurIPS spokesperson, points to lasting academic bonds. “Collaborations between students and advisors often continue long after the student has left their university,” she notes, adding that professional networks and past co-authorships show cooperation “across the field in many places.”

Method Behind the Numbers

Whitfield built a Python pipeline to download all papers, then asked Codex to flag:

  • Author fields listing US universities or companies
  • Author fields listing Chinese institutions
  • Mentions of Transformer, Llama, Qwen, and other core models

The model wrote scripts, imported libraries, and produced reports that were hand-checked for accuracy. The process mixed automation with manual verification to catch coding slips.
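The article does not publish the generated scripts, but the tallying step it describes can be sketched roughly as below. The keyword lists and function names here are illustrative assumptions, not the actual patterns Whitfield's Codex-written code used:

```python
import re

# Hypothetical affiliation keywords; the real pipeline's institution
# lists are not published in the article.
US_PATTERNS = [r"\bMIT\b", r"Stanford", r"Google", r"\bMeta\b"]
CN_PATTERNS = [r"Tsinghua", r"Peking University", r"Alibaba"]
MODEL_PATTERNS = {
    "transformer": r"[Tt]ransformer",
    "llama": r"[Ll]lama",
    "qwen": r"Qwen",
}

def classify_paper(text: str) -> dict:
    """Flag US/China affiliation keywords and core-model mentions in one paper."""
    return {
        "us": any(re.search(p, text) for p in US_PATTERNS),
        "china": any(re.search(p, text) for p in CN_PATTERNS),
        "models": [m for m, p in MODEL_PATTERNS.items() if re.search(p, text)],
    }

def tally(papers: list[str]) -> dict:
    """Count cross-border papers and model mentions across a corpus."""
    counts = {"total": 0, "us_china": 0, "models": {m: 0 for m in MODEL_PATTERNS}}
    for text in papers:
        info = classify_paper(text)
        counts["total"] += 1
        if info["us"] and info["china"]:
            counts["us_china"] += 1
        for m in info["models"]:
            counts["models"][m] += 1
    return counts
```

In practice a keyword pass like this is noisy, which is why, as the article notes, the automated reports were hand-checked before the final counts were reported.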

The outcome: a clear count of cross-border teamwork amid talk of decoupling, offering evidence that research networks still span the globe’s top two AI powers.

Author

  • Megan L. Whitfield is a Senior Reporter at News of Fort Worth, covering education policy, municipal finance, and neighborhood development. Known for data-driven accountability reporting, she explains how public budgets and school decisions shape Fort Worth’s communities.
