As the debate continues over privacy issues and whether or not the US needs a specific privacy law, it seems that some may be over-focusing on what they believe needs to be private. While I'm a big supporter of basic privacy rights, it becomes quite troubling when people seek to turn information that carries no basic expectation of privacy into private information. We've discussed "obvious" cases, such as the right to be forgotten, which is under discussion in Europe. But what about other cases? Professor Paul Ohm, in an otherwise interesting interview about how difficult it is to have truly anonymous datasets, also suggests that we should outlaw making inferences from data:
> We have 100 years of regulating privacy by focusing on the information a particular person has. But real privacy harm will come not from the information they have but the inferences they can draw from the data they have. No law I have ever seen regulates inferences. So maybe in the future we may regulate inferences in a really different way; it seems strange to say you can have all this data but you can’t take this next step. But I think that’s what the law has to do.
Why does the law "have" to do that? At some point, aren't we taking it too far? Certain things should reasonably be kept private, but if a company is taking data that it legitimately has access to, and is able to make inferences from it, is that so wrong? Google improves its search rankings based on the inferences it makes from how people search. Your spam filter improves based on the inferences it makes over your data. As Adam Thierer points out in response to Ohm, there are all sorts of reasons why companies should be allowed to make inferences from data:
- Example 1: Your local butcher may deduce from past purchases which types of meat you like and suggest new choices or cuts that are to your liking. This happened just this past weekend for me when a butcher at my local Balducci’s grocer recommended I try a terrific cut of steak after years of watching what else I bought there. And because I am such a regular shopper at Balducci’s, I also get special coupons and discounts offered to me all the time based on inferences drawn from past purchases. (I have a very similar experience at a local beer and wine store).
- Example 2: Your mobile phone provider may draw inferences from past usage patterns to offer you a more sensible text or data plan. This happened to me last year when Verizon Wireless cold-called me and set up a much better plan for me.
- Example 3: Your car or home insurance agent may use data about your past behavior to adjust premiums or offer better plans. When I was a teenage punk, my family’s insurance company properly inferred that I was a bad risk to them (and others on the road!) because of multiple speeding tickets. I paid higher premiums as a result all the way through my 20s. But, as I aged and got fewer tickets, they inferred I was a better bet and gave me a lower premium.
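To make concrete what "making inferences from data" actually looks like in software, here is a minimal sketch of the spam-filter case mentioned above: a toy naive Bayes classifier that learns which words are spammy from labeled messages, then scores new ones. The training messages, word choices, and function names are all invented for illustration; real spam filters are far more sophisticated, but the basic idea is the same.

```python
from collections import Counter
import math

def train(labeled_messages):
    """Count word frequencies per class from (text, is_spam) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in labeled_messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Compare per-class scores in log space, with add-one
    smoothing so unseen words don't zero out a class."""
    vocab = len(set(counts[True]) | set(counts[False]))
    scores = {}
    for label in (True, False):
        # log prior from how common each class is in training
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + vocab))
        scores[label] = score
    return scores[True] > scores[False]

training = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting notes attached", False),
    ("lunch tomorrow with the team", False),
]
counts, totals = train(training)
print(is_spam("free money prize", counts, totals))      # True
print(is_spam("team meeting tomorrow", counts, totals))  # False
```

The filter never needs any new information about you: it draws an inference from data it already legitimately holds, which is exactly the step Ohm suggests regulating.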
I think the real issue is when people try to apply an artificial "privacy" standard to concepts that don't necessarily need or deserve privacy rights. Privacy is important for certain things, but too many people seem to think that suddenly all data deserves extra privacy rights, even when that makes little real-world sense.