We've all heard the "if you've got nothing to hide, what are you complaining about" argument concerning violations of privacy. In fact, it seems to come up in nearly every blog post we do on the subject -- especially on stories about TSA scans and gropes. There have been plenty of attempts over the years to debunk this faulty line of argument, and Michael Scott
points us to a lengthy, but excellent, article by Daniel Solove that explains why privacy matters, even if you've got nothing to hide
. The article actually goes through a bunch of the counter arguments that people bring up (such as asking people to hand over their credit card bills or asking them to share naked photos), but Solove points out that these sorts of extreme responses don't even get to the crux of the matter. It's not just about the stuff we want to keep secret:
Commentators often attempt to refute the nothing-to-hide argument by pointing to things people want to hide. But the problem with the nothing-to-hide argument is the underlying assumption that privacy is about hiding bad things. By accepting this assumption, we concede far too much ground and invite an unproductive discussion about information that people would very likely want to hide. As the computer-security specialist Schneier aptly notes, the nothing-to-hide argument stems from a faulty "premise that privacy is about hiding a wrong." Surveillance, for example, can inhibit such lawful activities as free speech, free association, and other First Amendment rights essential for democracy.
The deeper problem with the nothing-to-hide argument is that it myopically views privacy as a form of secrecy. In contrast, understanding privacy as a plurality of related issues demonstrates that the disclosure of bad things is just one among many difficulties caused by government security measures.
It goes on to note that even if you have nothing to hide, there are plenty of reasons why a loss of privacy should concern you:
One such harm, for example, which I call aggregation, emerges from the fusion of small bits of seemingly innocuous data. When combined, the information becomes much more telling. By joining pieces of information we might not take pains to guard, the government can glean information about us that we might indeed wish to conceal. For example, suppose you bought a book about cancer. This purchase isn't very revealing on its own, for it indicates just an interest in the disease. Suppose you bought a wig. The purchase of a wig, by itself, could be for a number of reasons. But combine those two pieces of information, and now the inference can be made that you have cancer and are undergoing chemotherapy. That might be a fact you wouldn't mind sharing, but you'd certainly want to have the choice.
Another potential problem with the government's harvest of personal data is one I call exclusion. Exclusion occurs when people are prevented from having knowledge about how information about them is being used, and when they are barred from accessing and correcting errors in that data. Many government national-security measures involve maintaining a huge database of information that individuals cannot access. Indeed, because they involve national security, the very existence of these programs is often kept secret. This kind of information processing, which blocks subjects' knowledge and involvement, is a kind of due-process problem. It is a structural problem, involving the way people are treated by government institutions and creating a power imbalance between people and the government. To what extent should government officials have such a significant power over citizens? This issue isn't about what information people want to hide but about the power and the structure of government.
There's a lot more in the article as well, and it looks like a good one to point people to the next time someone trots out this bogus argument.