Over the last 20 years, the army of SETI@home screensavers has parsed billions of signals collected at Arecibo and selected those that seemed the most likely to have been generated by an extraterrestrial intelligence. Once parsed, the data was shipped off to Berkeley, where it was further processed to filter out signals from satellites, TV stations, and other sources of interference, to match it against historical observations, and to determine whether a follow-up was warranted.
In the early days of the SETI@home program, the internet connection at Arecibo wasn’t fast enough to push out data onto the internet directly, so the SETI@home team had to record the data on 35-gigabyte tapes that were mailed to Berkeley and then uploaded to the internet. Today, the data is piped over the internet to SETI@home’s servers in California, which are equipped with terabytes of storage to handle the data for processing.
When the software stops pushing out new data to users at the end of March, the Berkeley SETI@home team will continue to work through the backlog of data generated by the program over the next few months. The team is small, with only four full-time employees, and it has struggled to stay on top of managing the public-facing part of the SETI@home program while also publishing research on the data that has been collected. So far, the team has only been able to deeply analyze portions of the dataset. Getting a solid understanding of what it contains will require looking at all the data in aggregate.
“SETI@home volunteers only have access to 100 seconds of data from the telescope, so they can’t see this global picture over 20 years,” says Werthimer. “If you see an interesting signal in the sky, it needs to be there when you go back and look again. That’s what we’re going to be looking for.”
Although the public-facing portion of the SETI@home experiment may be coming to a close, Korpela says the project isn’t dead; it’s hibernating. After the data analysis is wrapped up, he says, SETI@home could possibly be relaunched using data from other telescopes like the MeerKAT array in South Africa or the FAST telescope in China. Korpela says it would probably take a year or more to stand up a successor to the program’s first iteration, but he hasn’t ruled it out as a possibility.
In the meantime, Breakthrough Listen will be carrying the torch for massive public-facing SETI projects. Founded in 2015 with a $100 million donation from the Russian billionaire Yuri Milner, Breakthrough Listen is dedicated to collecting and analyzing massive amounts of radio data to search for signs of extraterrestrial intelligence. Like SETI@home, Breakthrough is shepherded by the Berkeley SETI Research Center, but its data firehose would overwhelm a distributed-computing program like SETI@home. Instead, it parses the data with massive banks of GPUs at the Green Bank Telescope in West Virginia running advanced search algorithms.
“Developing these new algorithms and bringing them on site is really the way to crack this problem today,” says Steve Croft, Breakthrough Listen’s project scientist at the Green Bank Telescope. “It’s just not feasible anymore to go over the internet to individual users.”
Each day, the telescopes around the world that contribute to Breakthrough Listen generate more than 100 terabytes of raw data. Even if there were enough people volunteering their computers to analyze it, the internet connections at the telescopes can’t push the data onto the net fast enough. As Croft says, it was time to “bring the computers to the data” and do as much processing of radio signals on site as possible.
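A rough back-of-the-envelope calculation shows why. Taking the article’s figure of 100 terabytes of raw data per day (and assuming decimal terabytes; the exact unit isn’t specified), shipping it all off-site would require a sustained connection of roughly 9 gigabits per second:

```python
# Back-of-the-envelope estimate (illustrative, not from the article):
# sustained bandwidth needed to move 100 TB of raw data off-site daily.
TB_PER_DAY = 100
BITS_PER_TB = 8 * 10**12      # assuming decimal terabytes (10^12 bytes)
SECONDS_PER_DAY = 86_400

gbps = TB_PER_DAY * BITS_PER_TB / SECONDS_PER_DAY / 10**9
print(f"{gbps:.1f} Gbit/s sustained")  # roughly 9.3 Gbit/s, around the clock
```

That is more than most remote observatory links can sustain continuously, which is why processing the signals next to the telescope makes more sense than streaming raw data to volunteers.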