I didn't really know what to expect when I showed up at the gym at the University of San Francisco last Saturday to participate in the first "Flash Mob Computing" event. But it turned out to be one of those incredible days where I learned a lot, met some great people, and had a blast. And all the while, history was being made as several hundred PCs were networked together to form one of the largest instant supercomputers ever assembled.
The idea was an instant, do-it-yourself supercomputer, assembled from individuals' PCs, that would operate for only a few hours. That was the concept behind a course offered at USF and taught by scientist Pat Miller, who works full-time at Lawrence Livermore National Laboratory across the bay. Students in his class got more than they bargained for when they signed up last year.
The scene when I arrived at the gym at 8:30 in the morning was what I would call controlled chaos – plenty of activity for that hour of the morning. It wasn't the usual crowd of people working out or swimming laps; instead, the gym was filled with geeks. Fewer people than expected were carting in their own computers – I guess the thought of having all your personal data exposed to the mob was unsettling to some. I was carrying two laptops, courtesy of two vendors who had loaned me the equipment for other reviews: Acer's Aspire and a new whitebook from D&H. They joined a diverse collection of IBM ThinkPad laptops, Dell laptops and desktops, Toshibas, and whiteboxes of every shape and size, including some 100 machines from e-Loan, a local company that was one of the prime supporters of the event.
Those of us who brought our own laptops didn't have to worry about our data being disturbed: every machine that joined the mob booted from a CD, and its hard disk was never touched. But that is a hard thing to explain to someone whose entire life is on their laptop.
Some of the student projects were naked computers: no case, no frumpery, just the boards and connectors cobbled together. The most interesting PCs were the water-cooled overclockers, including one that had its own external life-support case that I guess held the coolant reservoir. Others were clearly custom-built jobs with fancy cases.
There were no Macs, save for one machine the Web team was using to update the site: the organizers of the event had asked for x86-family machines only, to keep the number of variables down while they assembled their gigantic supercomputer.
By 10:30 we had roughly 650 PCs on the floor of the gym. They were placed on folding tables with pre-cut cables organized and laid down along their lengths. The cables all terminated at a number of Foundry BigIron super-switches located around the room. (Foundry had loaned close to $500,000 worth of gear, a significant proportion of the value of the computers on the floor.)
The experiment was supposed to begin around 11, but various problems kept the organizers from running the Linpack benchmark for several hours. Still, the level of organization was impressive: everyone seemed to know what they were doing, and the numerous reporters had plenty of time to interview the principals as well as talk to various industry luminaries who follow these supercomputer events like groupies of a major rock band. One was Gordon Bell, the father of the VAX during his years at DEC and now a Microsoft fellow. He was carrying his own laptop but forgot to bring his CD drive, so he wasn't able to connect to the mob.
What made the day for me wasn't just seeing all this gear hooked up but the ancillary people and meetings happening elsewhere on the USF campus. To augment the day's activities, we were treated to a series of talks by leading experts, including computer scientists from the national laboratories, NASA, HP, and Microsoft. Even though it was a Saturday, I found myself spending more time at the seminars than I anticipated, just because they were so interesting. It isn't often that you can sit and learn from the leading thinkers of computer science, and hear about how NASA is doing global climate models, or how Microsoft built its TerraServer, the database of maps of the United States. I really liked the talk by Jim Gray, a research fellow at Microsoft and one of the original designers of the TPC benchmark from his days at Tandem.
"There are two types of supercomputing problems now: finding a needle in a haystack, and finding all the haystacks," he said. "Computers are good at one or the other, but not both." As an example, he mentioned skyserver.sdss.org, a site that consolidates and analyzes data from the leading astronomical observatories around the world, all using Web Services, XML, and some common coding. "Astronomy isn't anymore about guys sitting up through the night looking through telescopes at the tops of mountains," he said. "Instead, it is all about reducing large amounts of data down to a form that humans can actually analyze." He mentioned that as part of its TerraServer project, Microsoft receives a box of FireWire hard disks from the government containing the terabytes of data needed to update the site.
What was most interesting to me, and ultimately the mob's undoing, were the networking issues around assembling and running such a huge collection of gear. The mob used ordinary 100BaseT Ethernet, which was a double-edged sword: while easy to set up, it was difficult to debug when network problems arose. The Linpack benchmark that was used requires all of the component machines to stay running for the several hours of the test, and the organizers had trouble getting all 600-plus PCs to operate online flawlessly. The best run achieved a peak rate of 180 gigaflops using 256 computers, but that wasn't an official score because a node failed during the test. The group had completed a run of 77 gigaflops the night before using 150 computers that the university had donated for the experiment. Both of those results beat the Cray supercomputers of the early 1990s, which delivered around 16 gigaflops – at considerably higher cost, too.
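The arithmetic behind those numbers is easy to sketch. Here is a short back-of-envelope calculation in Python – my own illustration using the figures quoted above, not anything from the event's tooling, and with the caveat that Linpack performance doesn't actually scale linearly with node count:

```python
# Figures reported from the event; per-node rates are derived averages,
# not measured values.

def per_node_gflops(total_gflops, nodes):
    """Average sustained Linpack gigaflops contributed by each node."""
    return total_gflops / nodes

runs = {
    "peak run (unofficial)": (180.0, 256),
    "night-before run": (77.0, 150),
}

for name, (gflops, nodes) in runs.items():
    rate = per_node_gflops(gflops, nodes)
    print(f"{name}: {gflops:g} gigaflops over {nodes} nodes "
          f"= {rate:.2f} gigaflops per node")

# Naive straight-line estimate if all ~650 machines had matched the peak
# run's per-node average (real Linpack efficiency varies with problem
# size and, as the mob discovered, with network behavior):
estimate = per_node_gflops(180.0, 256) * 650
print(f"straight-line estimate for 650 nodes: ~{estimate:.0f} gigaflops")
```

The spread between the two runs' per-node averages hints at why the networking mattered so much: the bigger the run, the more each node's sustained rate depended on everything else staying up.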
The supercomputer set keeps track of these benchmarks through a Web site called top500.org. Twice a year the site posts the results of the benchmark and the list of the 500 most powerful machines – or at least the most powerful machines that the public is aware of. As one of the supercomputer designers who has worked for the government labs told me, "Those are the top 500 that YOU know about. You can be sure there are plenty of others." You certainly got the feeling that "other agencies" were keeping tabs on this event. To make the list, the mob needed to turn in a benchmark somewhere above 600 gigaflops – clearly within range, had they been able to get all their gear to contribute and run without problems.
Of course, to be fair, most of the machines on the Top 500 list are custom-built jobs that take weeks or months to assemble, test, and program with specialized operating system software – not to mention the dollars spent to purchase them. (One of the more interesting entries, third on the list, is a collection of several thousand Macintoshes at Virginia Polytechnic Institute and State University.)
But what we were witnessing was what one computer designer called the democratization of supercomputing – street computing at its best. Anyone could assemble a couple dozen nodes and do this in an afternoon, and the ability to harness occasional collections of PCs to tackle computing problems has already been proven by peer-to-peer computational experiments such as SETI@Home, which take over your PC as a screensaver when it is otherwise idle. While the mob wasn't completely successful, it proved its point, and it was a fascinating day to watch and be a part of.
Entire contents copyright 2004 by David Strom, Inc.
David Strom, firstname.lastname@example.org, +1 (516) 562-7151
Port Washington NY 11050
Web Informant is a registered trademark with the
U.S. Patent and Trademark Office.
ISSN #1524-6353 registered with U.S. Library of Congress
If you'd like to subscribe (issues are sent via email),
please send an email to: