The Story Behind Facebook Threatening To Sue Developer Into Oblivion For Highlighting Useful Facebook Data
from the how-nice-of-them dept
You should read the full story, but basically, developer Pete Warden built a simple crawler for public Facebook info, initially for his own purposes. He made sure that Facebook's robots.txt didn't block such crawlers -- and he also emailed someone at Facebook (who he had dealt with before), but didn't hear back from anyone. As his crawler worked, it started collecting a bunch of interesting data, and so he set up a website to let people explore some of this (again, public) data.
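As an aside, the kind of robots.txt check he describes is straightforward to do programmatically. Here's a minimal sketch using Python's standard urllib.robotparser; the rules and URLs shown are hypothetical, for illustration only, not Facebook's actual robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt -- not Facebook's actual file.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# can_fetch() reports whether a given user agent may crawl a given URL
# under the rules above.
print(parser.can_fetch("MyCrawler", "http://example.com/profile.php?id=123"))
print(parser.can_fetch("MyCrawler", "http://example.com/private/data"))
```

The first check passes (no rule blocks it) and the second fails (the `/private/` prefix is disallowed) -- which is exactly the sort of verification a crawler author would run before fetching a page. Of course, as the rest of this story shows, Facebook's position is that obeying robots.txt isn't enough.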
After playing with some of the data himself, he started making some interesting maps and charts with the data, and did a simple analysis of geographic locations of Facebook friend connections to show people what you could do with the data. He noted that if others (such as professional researchers) wanted to dig into the data, he would let them access a version of the data set (with identifying info stripped). The chart he released got picked up by a variety of sites and quickly got passed around.
And that's when the lawyers called:
On Sunday around 25,000 people read the article, via YCombinator and Reddit. After that a whole bunch of mainstream news sites picked it up, and over 150,000 people visited it on Monday. On Tuesday I was hanging out with my friends at Gnip trying to make sense of it all when my cell phone rang. It was Facebook's attorney.

He was with the head of their security team, who I knew slightly because I'd reported several security holes to Facebook over the years. The attorney said that they were just about to sue me into oblivion, but in light of my previous good relationship with their security team, they'd give me one chance to stop the process. They asked for and received a verbal assurance from me that I wouldn't publish the data, and sent me a letter to sign confirming that. Their contention was that robots.txt had no legal force and that they could sue anyone for accessing their site even if they scrupulously obeyed the instructions it contained. The only legal way to access any web site with a crawler was to obtain prior written permission.

Mathew Ingram reported on the data getting forced down, and got a statement from Facebook that seems to miss the point:

Andrew Noyes, manager of public policy communications at Facebook, said in an email that Warden "aggregated a large amount of data from over 200 million users without our permission, in violation of our terms. He also publicly stated he intended to make that raw data freely available to others." Noyes also noted that Facebook's statement of rights and responsibilities says that users agree not to collect users' content or information "using automated means (such as harvesting bots, robots, spiders, or scrapers) without our permission."

But I still don't see what the legal argument is. At best, I could see them terminating his account for disobeying the terms of service -- but even then the whole thing doesn't make much sense. The data is publicly available and, as Peter notes, it's pretty much standard practice for people to aggregate and analyze such data. However, he also pointed out that he couldn't afford to be a legal test case, and so he gave in and negotiated with Facebook to remove the data.
In the end, though, this shows Facebook's rather schizophrenic view towards data and privacy. On the one hand, it tries to push everyone to open up their info, but then if anyone does anything useful with that info, it threatens to sue?