IBM's Massive Grid Storage System Not Working So Well
from the whoops dept
IBM has been touting its grid/utility computing effort and its ability to distribute and scale load as high as necessary. They certainly picked a big project to prove some of their grid storage claims, and now it's looking like they may have bitten off way more than they can realistically chew. They're supposed to be setting up a grid-based storage system that can store 34 terabytes of data a day continuously, with that number growing to 100 terabytes a day a few years later. The project is a particle accelerator that will continuously generate a ton of data while analyzing 40 million particle collisions per second. The only problem is that, so far, IBM hasn't been able to get the system to handle more than 28 terabytes of data total (less than a single day's worth), and there has yet to be a single successful test, as the system apparently hangs and crashes every time they try. Of course, it's not unusual for projects to struggle early on, but there are growing concerns that these aren't just early problems, but a more fundamental issue with how this storage system is supposed to work.
Comments on “IBM's Massive Grid Storage System Not Working So Well”
Brings to mind IBM's spectacular failure 10 years ago to overhaul the FAA's air traffic control systems. They ended up making only incremental improvements. Last I heard, about 50% of software projects in the real world fail.