Is It A Privacy Violation For Companies To Make Inferences About What You Might Like?

from the better-service-is-a-privacy-violation? dept

As the debate over privacy issues and whether or not the US needs a specific privacy law has continued, it seems that some may be over-focusing on what they believe needs to be private. While I'm a big supporter of basic privacy rights, it becomes quite troubling when people seek to turn information that carries no basic expectation of privacy into private information. We've discussed "obvious" cases, such as the right to be forgotten, which is under discussion in Europe. But what about other cases? Professor Paul Ohm, in an otherwise interesting interview about how difficult it is to have truly anonymous datasets, also suggests that we should outlaw making inferences from data:
We have 100 years of regulating privacy by focusing on the information a particular person has. But real privacy harm will come not from the information they have but the inferences they can draw from the data they have. No law I have ever seen regulates inferences. So maybe in the future we may regulate inferences in a really different way; it seems strange to say you can have all this data but you can’t take this next step. But I think that’s what the law has to do.
Why does the law "have" to do that? At some point, aren't we taking it too far? Certain things should reasonably be kept private, but if a company is taking data that it legitimately has access to, and is able to make inferences from it, is that so wrong? Google tries to improve search rankings based on the inferences it makes from how people search. Your spam filter improves based on the inferences it makes over your data. As Adam Thierer points out in response to Ohm, there are all sorts of reasons why companies should be allowed to make inferences from data:
  • Example 1: Your local butcher may deduce from past purchases which types of meat you like and suggest new choices or cuts that are to your liking. This happened just this past weekend for me when a butcher at my local Balducci’s grocer recommended I try a terrific cut of steak after years of watching what else I bought there. And because I am such a regular shopper at Balducci’s, I also get special coupons and discounts offered to me all the time based on inferences drawn from past purchases. (I have a very similar experience at a local beer and wine store).
  • Example 2: Your mobile phone provider may draw inferences from past usage patterns to offer you a more sensible text or data plan. This happened to me last year when Verizon Wireless cold-called me and set up a much better plan for me.
  • Example 3: Your car or home insurance agent may use data about your past behavior to adjust premiums or offer better plans. When I was a teenage punk, my family’s insurance company properly inferred that I was a bad risk to them (and others on the road!) because of multiple speeding tickets. I paid higher premiums as a result all the way through my 20s. But, as I aged and got fewer tickets, they inferred I was a better bet and gave me a lower premium.
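Thierer's examples are the everyday version of what software does constantly: the spam filter mentioned above is nothing but inference drawn from data a service legitimately holds. A minimal sketch of that kind of inference (illustrative only: the word counts, add-one smoothing, and threshold here are assumptions for demonstration, not any real filter's implementation):

```python
from collections import Counter

def train(messages):
    """Count word frequencies per label ("spam"/"ham") from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Naive-Bayes-style score: product of per-word spam/ham likelihood
    ratios with add-one smoothing. A score above 1 suggests spam."""
    score = 1.0
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + 1) / (totals["spam"] + 2)
        p_ham = (counts["ham"][word] + 1) / (totals["ham"] + 2)
        score *= p_spam / p_ham
    return score

# Toy training data standing in for a user's past mail.
messages = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("lunch meeting at noon", "ham"),
    ("project update attached", "ham"),
]
counts, totals = train(messages)
print(spam_score("free money prize", counts, totals) > 1.0)  # True
print(spam_score("meeting update", counts, totals) > 1.0)    # False
```

Each incoming message is scored against what past messages have taught the filter; under Ohm's proposal, it is this inferential step, not the data collection itself, that would fall under regulation.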
I think the real issue is when people try to apply an artificial "privacy" standard on concepts that don't necessarily need or deserve privacy rights. Privacy is important for certain things, but too many people seem to think that suddenly all data deserves extra privacy rights, even if that makes little real-world sense.


Reader Comments

  1.  
    @barneyc, Mar 29th, 2011 @ 1:27am

    Inference ~ Decisional Interference

    Several years ago the notion of inference as an invasion of privacy was nicely set down by Daniel Solove; he termed it (at least I think he coined it) Decisional Interference.

    The problem isn't that inference isn't useful or indeed desirable; rather, it's that i) not everyone wants it in every situation, or in the same situations as others, and ii) if the inference is made to benefit the inferrer rather than the inferee, then we enter all sorts of sticky icky territory.

     

  2.  
    SiEdDi, Mar 29th, 2011 @ 1:28am

    Inferences as such are not bad, but..

    Clearly the examples quoted in the article do not constitute privacy harm. But if the butcher decided to raise the price of the meat he knows you love, just for you, that's a wholly different story. (This does happen online, for example in credit card rate offers; you might want to check Ryan Calo's blog on that.)

     

  3.  
    @barneyc (profile), Mar 29th, 2011 @ 1:40am

    Re: Inferences as such are not bad, but..

    In the UK there is a budget airline that a couple of weeks back was caught out "interfering."

    When someone looked up a fare, they were quoted something like £120, but they waited a day before going back to book it. By then the price had risen to £230-odd.

    When they cleared their browser cookies, the price mysteriously dropped back to £120.

    The airline was fiddling with pricing based on a consumer's expressed intent that wasn't followed through.

    i.e. we can infer you are interested in this flight because you looked at it yesterday, so we've upped the price.
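    The cookie trick described above amounts to a few lines of server-side logic. A hypothetical sketch (the `viewed_fare` cookie name and the prices are made up for illustration; this is not the airline's actual code):

    ```python
    BASE_FARE = 120  # fare in GBP quoted on a first visit

    def quote_fare(cookies: dict) -> int:
        """Sketch of the pricing trick: if a cookie shows the visitor
        already looked at this fare, infer strong intent and quote more."""
        if cookies.get("viewed_fare") == "yes":
            return 230  # returning visitor: intent inferred, price raised
        return BASE_FARE

    print(quote_fare({}))                      # first visit: 120
    print(quote_fare({"viewed_fare": "yes"}))  # next day, cookie set: 230
    print(quote_fare({}))                      # cookies cleared: 120 again
    ```

    Clearing cookies removes the signal of intent, which is why the quoted fare fell back to the original price.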

     

  4.  
    PW (profile), Mar 29th, 2011 @ 1:47am

    Inferences or Context

    Where Prof. Ohm talks about this in terms of inferences, I think of it in terms of context, but on the whole I agree with him. Let's use your examples to explain the problem.

    Example 1: what if your past purchases at your local butcher were then made available to insurance companies for the purpose of pricing your health insurance? Perhaps they don't get just yours, but everyone's in your neighborhood, and decide that, because on average people in your 'hood are obese, you will be charged a significantly higher health insurance premium. While you had no problem with this info being used towards giving you discounts and coupons on future purchases, you might be much less comfortable with this same information being used towards figuring out your health insurance premiums.

    Example 2: Unbeknownst to you, one of your very close friends since college has been traveling to Syria quite frequently, but since he lives across the country, you're only aware that he has been traveling on business. It turns out his business has to do with buying Syrian antiquities, but the people he has been interacting with are somewhat suspect. Because the two of you communicate every month or so, and you also communicate somewhat regularly with some mutual friends, inferences start getting drawn between all of you regarding your potential involvement with terrorism. Getting these inferences wrong when it comes to recommending you a different calling plan may be no big deal, but getting them wrong when it comes to whether you might be connected to a terrorist is a very different story.

    Example 3: The insurance companies have decided that correlating your activities online with your propensity for risk is a better indicator than whether you get speeding tickets. In some cases it's not just risk, but the fact that your reading preferences and the frequency of your shopping indicates a sedentary lifestyle, and hence you present risks behind the wheel as well as raise issues around longevity.

    In other words, information that may be harmless in one context, when viewed in another, can become very uncomfortable for you. It's very possible that you might not participate in various programs if you were aware of this. Since part of the value proposition these data collectors gain is the externality value of your information, you should at least have some control over it, or be aware of it, so you can make an educated decision about sharing this information about yourself. Opt-out is an unethical concept on its face because it implies that you should have to take action to keep your information from being used in ways you are not aware of.

    Anyway, sorry for the long response, but the privacy issue is that in fact no information is good or bad, private or non-private (i.e. lots of people I don't know know where I live); it's more that in different contexts it can take on different values. Something that could be fine in one context may be very detrimental in another, so you should have the right to decide the context (or use right) under which you're willing to share information about yourself.

     

  5.  
    Anonymous Coward, Mar 29th, 2011 @ 3:09am

    I don't think inferring something from data should be regulated, even though I can see the potential for abuse, like targeting people who won't bother to fight anything and will just pay, if they can, to make things go away. Of course, those things can also be fought on other fronts, and that is why I don't believe regulations are needed.

     

  6.  
    ReallyEvilCanine (profile), Mar 29th, 2011 @ 3:13am

    Privacy

    Those examples are truly sophomoric taken at face value. None of them display a violation of privacy. The problem begins when each of those examples collects and distributes such information without your permission.

    Cold-calling is illegal here in Germany, as is sharing of any personal information without express consent (and it's also illegal to tie such consent to benefits). Even in the case of customer loyalty cards, there are harsh limits on what information can be shared between participating companies who all support one particular card (think Walgreens, Safeway, Foot Locker, Quiznos, Chipotle and your local private electric supplier all sharing one SuperCustomerCard).

    The Founding Fathers in the US never dreamed that personal privacy would need to be protected or they would have added that to the Bill of Rights. The laws here in Germany were written long after such a need was clear.

     

  7.  
    Andrew (profile), Mar 29th, 2011 @ 3:43am

    From the rest of the article, I don't think Paul Ohm is talking about inferences such as those given in the examples above. I think he is discussing cases where the identity of a person can be inferred from apparently anonymised data (e.g. Netflix rentals or Amazon purchases). This is not the same as a butcher recommending new meats as he already knows you personally (though he may of course know you better post analysis).

    And this is something that I believe is already covered by some legislation. In the UK, the Data Protection Act regulates the use of personally identifiable information. If I remember correctly, a record qualifies as personally identifiable information if it uniquely identifies someone, so if they had my address and I lived in a house with 5 others it would not count, but if I lived on my own it would. Presumably Netflix rental data would fall under this too if it were possible to identify me uniquely. The DPA does not stop companies from offering a better service to their data subjects (or even ripping them off :) based on the information they have; it principally regulates distribution to others and mandates advertising / database opt outs.

     

  8.  
    Not an Electronic Rodent (profile), Mar 29th, 2011 @ 4:21am

    Yes and no

    My feeling on this kind of thing is that the use of data shouldn't necessarily be regulated, but what should be mandated by law (otherwise it simply won't happen) is a clear opt-in/opt-out choice for the consumer as to how the company is allowed to use that data, especially when it comes to marketing. Just because I might buy a coffee machine from a company doesn't mean I want them to ring me up to ask how it is and try to sell me something else, nor give my details to coffee manufacturers to do the same thing. On the other hand, I might want them to; but I think that should be my choice, not theirs. There shouldn't be an implicit contract that, just because you have interacted with them and thereby given them data about you, they can do anything they like with it without further permission from you.

    There are regulations like that in the UK and they sort of work, as do "telephone preference" and "mailing preference" lists to prevent junk marketing, but they are muddy at best, poorly advertised, and usually bent by lobbying for exceptions.

     

  9.  
    abc gum, Mar 29th, 2011 @ 4:34am

    Re: Re: Inferences as such are not bad, but..

    I've seen the same practice with US carriers.

    Imagine being at the grocery store choosing tomatoes: you put one down, only to pick it up again later and find the price is substantially higher.

     

  10.  
    abc gum, Mar 29th, 2011 @ 4:38am

    I'm glad that Mr Ohm is creating resistance

     

  11.  
    Bengie, Mar 29th, 2011 @ 4:59am

    Re: Re: Re: Inferences as such are not bad, but..

    That's not using anonymous data. Even though they don't know who you are, the data is tied directly to your browser, which means it's tied to a "person" and is no longer anonymous.

    I know a browser can't be a 100% tie to a specific person, but typically it is.

     

  12.  
    abc gum, Mar 29th, 2011 @ 5:09am

    Re: Re: Re: Re: Inferences as such are not bad, but..

    Agreed. It's still bad business practice.

     

  13.  
    Benjamin (profile), Mar 29th, 2011 @ 5:17am

    Re: Inferences or Context

    PW, I think your response was very insightful, and I think it raises a number of good points.

    I have the feeling that it's going to take a good amount more thinking power than I can bring to bear at the moment, but I might have to disagree on the idea of opt-out being unethical on the basis of its requiring action from the individual. I can't help but feel uncomfortable when we draw the line in a place where making an observation requires prior restraint.

    It may be because of my experience in social science research, but I believe that whenever observations about my behavior are made, and may serve to make a direct impact upon my life, I would like to be aware of this, and informed as to the intent. Then, much as I can do when my phone asks me if I want to share my location with a software developer, I can choose to opt out. Much as with institutionalized research, it's informed consent that I'm seeking; observations that do not have a direct impact on my immediate existence are not my biggest concern.

     

  14.  
    Michael, Mar 29th, 2011 @ 6:43am

    Re: Re: Inferences as such are not bad, but..

    That seems like a bad business practice for an airline, but you want to criminalize it?

    How is that any different than a used car salesman sizing up his customer before negotiating a price? Is he not allowed to take into account you leaving and returning the next day?

    You may call the used car salesman unscrupulous, but this should not be criminal behavior. Competition should take care of these kinds of problems. If you don't like the salesman, go to the dealership down the road. There are ways to report bad business practices (the BBB in the US), and with the online world allowing people to complain pretty loudly, this seems like something we should keep regulators out of before they make it much, much worse.

     

  15.  
    mischab1, Mar 29th, 2011 @ 9:25am

    Re: Privacy

    In other words, the problem isn't that a company is making inferences about you, it is that they are doing it with data they shouldn't have.

     

  16.  
    Anonymous Coward, Mar 29th, 2011 @ 10:39am

    Re: Re: Inferences as such are not bad, but..

    While I agree this is totally shady and I would never give business to that airline again, you're basically saying that the supply-and-demand model of economics should be a criminal activity. If all of the petrol in the world were gone in 5 years and you found a barrel in your garage, would it be wrong to charge more than the normal price? Would it be illegal?

     

  17.  
    Nick Bramble, Mar 29th, 2011 @ 3:31pm

    Re: Inferences or Context

    (1) PW's objections to this line of reasoning are all based on third-party use of data – uses that are seriously out of line with the consumer's original expectations.

    (2) Meanwhile, Thierer's examples all involve first-party use of data – the collection and use of which would tend to align with an ordinary consumer's expectations.

    Seems like (1) is more problematic than (2) and is a valid distinction for any privacy law or regulation to make.

     

  18.  
    Brian Schroth (profile), Mar 30th, 2011 @ 6:28am

    Re: Re: Inferences or Context

    Your phone's dialog is a poor example; that is an opt-in scenario. You have to explicitly select "Allow", otherwise you will not be opted in.

    An opt-out scenario would be if the phone automatically assumed it was allowed, but if you chose to go into some settings menu somewhere you could choose to turn that off.

     

  19.  
    Persephone (profile), Mar 30th, 2011 @ 8:30am

    Hey Mike, when are you going to write about this massive privacy violation?

    http://open.salon.com/blog/virginia888/2010/12/02/is_topix_giving_out_users_personal_data_to_the_nsa

    A short list of privacy violations conducted by Topix:

    Violation of encryption, open accessing of users' IP addresses, supporting hackers, sharing users' personal data with the NSA without their permission or knowledge.

     

  20.  
    Androgynous Cowherd, Apr 2nd, 2011 @ 8:06pm

    As Adam Thierer points out in response to Ohm, there are all sorts of reasons why companies should be allowed to make inferences from data.


    And he leaves out the most important one. The thing being talked about is this proposal:

    No law I have ever seen regulates inferences. So maybe in the future we may regulate inferences ... think that’s what the law has to do.


    Regulating inferences. What are "inferences", exactly? Basically, inferences are thinking. So they're talking about regulating thinking.

    That's right. Thoughtcrime. Because figuring out how to better filter spam from your customers' webmail is doubleplusungood.

     
