At a Glance
- UC Berkeley’s SETI@home project sifted 12 billion radio bursts to flag 100 candidates for deeper study.
- More than a million volunteers donated idle computer cycles over 21 years to process Arecibo Observatory data.
- The effort sets a new sensitivity benchmark for future alien hunts even if none of the 100 signals pan out.
Why it matters: The study shows that crowd-sourced computing can push the frontier of one of science's biggest questions: are we alone?
After two decades of harnessing home PCs around the planet, the SETI@home project has trimmed a deluge of cosmic radio noise to a hard-won shortlist of 100 signals that might, just might, be ET's way of saying hello.
From 12 Billion Bursts to 100 Leads
The numbers are staggering. Between 1999 and 2020, volunteers installed free software that quietly processed narrow-band radio data from the now-defunct Arecibo dish in Puerto Rico. The result: 12 billion potential detections stored for analysis.
Scientists spent the next ten years whittling that mountain:
- First cut: automated filters reduced the total to roughly one million events.
- A supercomputer provided by the Max Planck Institute for Gravitational Physics then scrubbed out terrestrial radio interference, cutting the list down further.
- Repeated sky-position and frequency matches eliminated duplicates.
- Researchers manually vetted the final thousand, ending with the current 100 candidates.
“There’s no way that you can do a full investigation of every possible signal that you detect, because doing that still requires a person and eyeballs,” said Eric Korpela, astronomer and project director. “We have to do a better job of measuring what we’re excluding. Are we throwing out the baby with the bath water? I don’t think we know for most SETI searches, and that is really a lesson for SETI searches everywhere.”

Why Ordinary PCs Were the Secret Weapon
Arecibo’s torrent of data would have overwhelmed any single supercomputer, so researchers broke it into bite-size chunks. Each volunteer’s CPU ran a discrete Fourier transform to separate individual frequencies, then hunted for Doppler drift, the subtle slide in frequency caused by the relative motion of a distant transmitter. A simplified sketch of those two steps appears below.
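To make that concrete, here is a minimal, illustrative Python sketch of the same idea: an FFT splits a block of samples into narrow frequency bins, then a brute-force search sums power along candidate drift tracks. This is not the SETI@home client's actual code; the FFT length, drift range, signal strength, and function names are all assumptions made for the example.

```python
# Minimal illustrative sketch -- not the SETI@home client's actual code.
# Step 1: an FFT splits each block of samples into narrow frequency bins.
# Step 2: a brute-force search sums power along straight frequency tracks
#         with different drift rates, mimicking a Doppler-drift search.
# All parameters (FFT length, drift range, signal strength) are assumptions.
import numpy as np

def spectrogram(samples, fft_len):
    """Power per (time step, frequency bin) from consecutive FFT blocks."""
    n_blocks = len(samples) // fft_len
    blocks = samples[: n_blocks * fft_len].reshape(n_blocks, fft_len)
    spectra = np.fft.fftshift(np.fft.fft(blocks, axis=1), axes=1)
    return np.abs(spectra) ** 2

def best_drifting_line(power, max_drift_bins):
    """Return (summed power, start bin, drift) of the strongest straight
    frequency track, where drift is measured in bins per time step."""
    n_t, n_f = power.shape
    best = (0.0, 0, 0)
    for drift in range(-max_drift_bins, max_drift_bins + 1):
        for start in range(n_f):
            bins = start + drift * np.arange(n_t)
            if bins.min() < 0 or bins.max() >= n_f:
                continue
            total = power[np.arange(n_t), bins].sum()
            if total > best[0]:
                best = (total, start, drift)
    return best

# Synthetic test: a narrow-band tone drifting upward in frequency, plus noise.
rng = np.random.default_rng(0)
fft_len, n_blocks = 256, 64
t = np.arange(fft_len * n_blocks)
drift_rate = 1.5e-5                      # assumed drift, cycles/sample per sample
tone = 0.5 * np.exp(2j * np.pi * (0.1 * t + 0.5 * drift_rate * t ** 2))
noise = (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)) / np.sqrt(2)
power = spectrogram(tone + noise, fft_len)
score, start_bin, drift = best_drifting_line(power, max_drift_bins=2)
print(f"strongest track: start bin {start_bin}, drift {drift} bins/step, power {score:.0f}")
```

The real client was far more thorough, searching many FFT resolutions and chirp rates and also looking for pulsed and Gaussian-shaped signals; the sketch only conveys the shape of the per-chunk computation each volunteer's machine performed.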
Early estimates predicted about 50,000 participants. Within months, the project passed one million. “Pretty quickly, we had a million volunteers,” recalled computer scientist and co-founder David Anderson. “It was kind of cool, and I would like to let that community and the world know that we actually did some science.”
That global army kept the project alive for 21 years, making SETI@home one of the longest-running distributed-science efforts in history.
What the 100 Signals Tell Us So Far
Nothing, yet. The candidates meet basic criteria: narrow bandwidth, persistence, and apparent motion consistent with a sky source rather than local interference. Confirming them will take follow-up observations, a painstaking process that lies ahead.
Even if every candidate fizzles, the project has already reset the bar for future hunts. “If we don’t find ET, what we can say is that we established a new sensitivity level,” Anderson explained. “If there were a signal above a certain power, we would have found it.”
Those findings appear in two peer-reviewed papers published in The Astronomical Journal.
Key Takeaways
- Big data, big crowd: More than a million PCs proved that crowd-sourced computing can tackle problems once reserved for national supercomputing centers.
- 12 billion to 100: The dramatic winnowing shows both the promise and the difficulty of separating genuine alien technosignatures from Earth-based noise.
- New baseline: Even a null result sharpens sensitivity limits for every future SETI survey, narrowing the parameter space where extraterrestrial transmitters could hide.
The final vetting of the remaining 100 candidates is still to come, but the method itself may turn out to be the project’s most enduring legacy.