by Timothy Lee
Wed, Dec 12th 2007 2:12pm
Ed Felten has an interesting post analyzing the fallout from Facebook's Beacon gaffe. It's now widely agreed that the company screwed up, which raises the question of why it wasn't obvious ahead of time that Beacon would prove unpopular with users. Felten offers a few ideas: delegating privacy to a single isolated department, treating privacy as a legal or PR problem, or underestimating the importance of people's emotional reactions.

I think there's also something more fundamental going on: often no one really knows what the privacy rules for new technologies will be. We're now doing things with information that were literally impossible a couple of decades ago. Social conventions haven't been keeping up. As a result, no one—even users themselves—knows what will be considered a privacy violation until after it's been tried.

Sometimes, as with last year's Facebook news feed announcement or Google's introduction of Gmail, an initial negative reaction blows over once users learn more. In other cases, opposition snowballs to the point where a company has no choice but to change course. It's often difficult to predict ahead of time which category a given product will fall into. People's initial concerns are often based on very superficial impressions (such as the idea that Gmail is "reading your email") that can turn out to be unfounded once users become more familiar with the product. Other features, such as Facebook's news feed, can turn out to be useful enough that users consider them worth a bit of foregone privacy. And in some cases, a new feature just turns out to be a plain old bad idea. We won't know until users have a chance to try a feature and provide feedback.

That's why I think it's a mistake to judge a company too harshly for introducing a new product that turns out to be a bad idea from a privacy perspective. We'll only learn good principles for privacy through experimentation, and experimentation inevitably leads to some missteps.
As long as a company clearly discloses how user information will be used and is responsive to user concerns, I don't think people should hold the occasional misstep against it.