A few years back, when you couldn't open a new website without hearing about yet another distributed computing project (all compared to SETI@Home, despite it not really being the first such project), Google did a little experimenting in the space via its toolbar, letting users contribute some spare cycles to Folding@Home. About a year later, perennial also-ran search engine Looksmart tried to get into the distributed search engine game by begging volunteers to install an app that would crawl the web for Looksmart. This seemed a bit odd at the time: people got involved with distributed computing projects because they thought they were helping some big public "good," such as finding aliens or curing cancer. Helping a commercial search engine search better probably wasn't all that compelling. At the time, Google responded by saying the difficult part of search wasn't the computing power needed or the ability to find more sites, but crunching the data to figure out which pages were really relevant -- and that wasn't something a distributed app could help much with.

Is it possible they've changed their minds? While there are plenty of theories about why Google launched a web accelerator project (since many don't see how it relates to search), Tristan Louis has submitted his own theory, saying it has a lot more to do with search than most people think. His argument is that the Web Accelerator is just a backdoor way for Google to use everyone's computer as a distributed part of the Google grid. Since Google gets to record every webpage you visit, it's not hard to see how that data could be processed back into the overall search engine crawler data for indexing -- or how Google could just build an indexing system directly into the Accelerator. Considering their earlier comments, it might be a stretch at this point to think that's where they're going with this, but it's at least a theory worth mentioning.
Update: Meanwhile, people are freaking out after realizing that Google's Web Accelerator is doing the one thing you don't want a caching system to do: caching private, logged-in pages. People are finding themselves logged in as someone else on pages that require a login (and probably realizing that others are being logged in as them). This looks to be just an extension of an earlier problem people noticed with Google's search engine caching -- meaning they should have known about this already.

Update: But wait, it gets even worse. Various sites are reporting that the "pre-fetching" option basically clicks every link on a page, including ones that say things like "delete this account." Oops.
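The logged-in-as-someone-else problem is what happens when a shared caching proxy keys responses on URL alone, ignoring the cookies that make a page per-user. A minimal sketch (hypothetical code, not how the Accelerator is actually implemented) of that failure mode:

```python
# A naive shared cache that keys responses only on URL -- the bug that lets
# one user's logged-in page get served to the next user who asks for it.
cache = {}

def origin(url, user):
    """Stand-in for the real site: renders a per-user, logged-in page."""
    return f"Logged in as {user}"

def fetch(url, user):
    """Shared-proxy fetch: first response for a URL is stored and reused
    for everyone, with no regard for who is actually asking."""
    if url not in cache:
        cache[url] = origin(url, user)
    return cache[url]

print(fetch("/account", "alice"))  # "Logged in as alice"
print(fetch("/account", "bob"))    # also "Logged in as alice" -- bob sees alice's page
```

The standard defense on the site's side is to mark authenticated responses uncacheable (e.g. `Cache-Control: private, no-store`), so a shared cache never stores them in the first place.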
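The "clicks every link" problem is just as mechanical: a pre-fetcher issues a GET request for every href it finds, so any destructive action a site exposes as a plain link gets triggered. A minimal sketch of what such a pre-fetcher sees (hypothetical code and URLs, for illustration):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href on a page, the way a naive pre-fetcher would
    before issuing a GET for each one."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A page that (unsafely) exposes a destructive action as a plain GET link.
page = """
<a href="/profile">View profile</a>
<a href="/account/delete?confirm=1">Delete this account</a>
"""

collector = LinkCollector()
collector.feed(page)
for href in collector.links:
    # A pre-fetcher would issue GET <href> -- side effects included.
    print("would pre-fetch:", href)
```

This is why HTTP semantics say GET should be "safe" (no side effects): destructive actions belong behind a POST with confirmation, where no pre-fetcher will follow.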