Weather-Predicting Supercomputer Spawns High-Tech Race

from the bigger,-faster,-better dept

Wasn’t distributed computing supposed to get rid of the need for big, expensive supercomputers? It seems there’s still a market for “big iron”, as many in the US computing industry are upset that Japan has trumped them with its Earth Simulator supercomputer, which runs faster than the top five US supercomputers combined. Apparently, the Earth Simulator has given Japan an edge both in supercomputers and in the climate and weather studies the computer is used for. Of course, things change quickly, and IBM and Cray already say they’ll have more powerful supercomputers by 2004. I do wonder, though, whether all this focus on big iron puts too much emphasis on the size of the computer, and too little on other ways (such as distributed computing) to accomplish the same results.


Comments on “Weather-Predicting Supercomputer Spawns High-Tech Race”

dorpus says:

Distributed computing is not serious computing

For big, serious calculations that require complex coordination, no errors, speed, and where big money is at stake, big iron is still (and probably always will be) the way to go.

Distributed computing, prone to unreliable communications and erratic performance, is a tool for lower-priority projects that:

1. do not require security
2. are on a long timetable
3. are just massively parallel, “dumb” tasks not requiring much coordination.

We would not think of distributing bank transactions or air traffic control systems to the screensavers of millions of home PC’s.

Steve Janss says:

Uh... No.

Distributed computing (SETI), superclusters (Google), and traditional supercomputers (weather and DoD nuclear simulators) are all “supercomputers” in the sense that all operate significantly faster than the fastest single-processor machine.

Whether or not something is a supercomputer depends upon three things:

1. the hardware
2. the software
3. the task it’s performing

Believe it or not, our brains are the fastest computers on the planet – for the tasks our brains were designed to handle (audio-visual analysis and storage, critical thinking, “fuzzy” logic).

But as for the question…

Interesting thought about superclusters making supercomputing obsolete…

Google, of course, runs on one of the largest and longest-running (several years) super-clusters in the world.

But is it a supercomputer?

Well, that depends upon the task at hand.

Some tasks can be broken down into many smaller tasks that require little, if any, intercommunication. The SETI screensaver project was one of those. This type of task lends itself to distributed computing, and SETI was (still is, I believe) the largest, most powerful, most widely distributed supercluster on the planet.

Was it a supercomputer? Yes, since the task was to analyze radio frequency data for patterns. A single task, distributed throughout millions of computers all over the planet. In this case, each chunk of data can be analyzed independently of the others.
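As a toy sketch (not the actual SETI code — the chunk data and analysis function here are made up for illustration), an embarrassingly parallel task like this is just a pool of workers that never talk to each other:

```python
# Hypothetical sketch of an embarrassingly parallel workload in the
# SETI@home style: each data chunk is analyzed independently, so the
# work can be farmed out with no communication between workers.
from multiprocessing import Pool

def analyze_chunk(chunk):
    """Toy stand-in for pattern analysis: report the peak sample."""
    return max(chunk)

if __name__ == "__main__":
    # Pretend each sublist is a chunk of radio-telescope samples.
    chunks = [[1, 5, 3], [9, 2, 4], [7, 8, 6]]
    with Pool() as pool:
        # Chunks are processed in parallel; results come back in order.
        peaks = pool.map(analyze_chunk, chunks)
    print(peaks)  # [5, 9, 8]
```

Because no chunk depends on any other, the workers could just as well be millions of screensavers on a slow network.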

Even the new weather supercomputer, as powerful as it is, would be many, many times slower if it were tasked to do what the SETI project did. Yet the distributed computer kept chunking away, day and night, using excess processor cycles!

To me, the technology is fairly simple, yet the concept is still amazing.

Other tasks consist of many independent sub-tasks that all need fast access to shared data (Google), so the screensaver approach would NOT work. However, a super-cluster like the one they’re running works just fine. In this case, each sub-task has little (if anything) to do with the other sub-tasks: John’s search really has nothing to do with Sarah’s search, other than the fact that both are accessing the same database.
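A minimal sketch of that case (the tiny index and queries below are hypothetical stand-ins for Google’s database, not its actual design): many independent lookups against one shared, read-only store parallelize cleanly across workers.

```python
# Hypothetical sketch of the Google-style case: independent queries
# hitting one shared, read-only index. Queries never talk to each
# other, so they spread cleanly across threads (or cluster nodes).
from concurrent.futures import ThreadPoolExecutor

# Toy inverted index standing in for the shared search database.
INDEX = {"weather": ["page1", "page3"], "seti": ["page2"]}

def search(query):
    """Each search is independent of every other search."""
    return INDEX.get(query, [])

queries = ["weather", "seti", "weather"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(search, queries))
print(results)  # [['page1', 'page3'], ['page2'], ['page1', 'page3']]
```

The only coupling is the shared index itself, which is why a cluster with a common database works where scattered screensavers would not.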

Yet other tasks require a high level of communication between the sub-tasks (weather supercomputer and the DoD’s nuclear simulation supercomputer). Thus, even a super-cluster connected with 100Base-T isn’t fast enough, as the lag time between nodes becomes the bottleneck.

This is when a real supercomputer is called for: one where the processors can communicate among themselves and with memory at the same speed your Pentium 4 communicates with its L2 cache.

Now THAT’s fast! And necessary in finite element analysis, where every element affects not only the elements immediately surrounding it, but nearby elements as well.
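To see why this coupling matters, here is a toy 1-D diffusion stencil (an illustration, not the actual weather model): each cell’s next value depends on its immediate neighbors, so a run split across nodes would have to exchange boundary cells on every single timestep — exactly the communication that makes network lag the bottleneck.

```python
# Illustrative 1-D diffusion stencil: every cell's update reads its
# left and right neighbors, so distributed nodes would need to swap
# boundary values at each step.

def step(u, alpha=0.25):
    """One explicit diffusion timestep with fixed boundary values."""
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# A single hot spot diffuses outward over three timesteps.
u = [0.0, 0.0, 4.0, 0.0, 0.0]
for _ in range(3):
    u = step(u)
print(u)  # [0.0, 0.875, 1.25, 0.875, 0.0]
```

Nothing here is independent: cut the array in half across two machines and the halves must trade edge cells every iteration, which is why this class of problem wants cache-speed interconnects rather than 100Base-T.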

So, the correct answer is: A supercomputer is not defined merely by the hardware and software, but by the task it’s performing, as well.

Consider, for example, trying to task the SETI project with the weather data… The internodal communication jam would bottleneck the entire project, and the overall speed of the supercomputer would probably bog down to something like that of a Cray-1 (or less).

Also consider trying to task the weather supercomputer with the SETI project – it would be many times slower than the millions of computers slogging away in the actual project.

Well, there you have it.

– Steve Janss
