Was A French Court Correct In Blaming Google For Its Google Suggest Suggestions?

from the still-not-convinced dept

We recently wrote about yet another (the third one we know of) ruling in France that found Google liable for what "Google Suggest" suggested. Google Suggest, of course, is the autocomplete function that tries to guess what you're searching for, based on what other people searched for after typing the same letters. More recently, that's been expanded into Google Instant, which actually shows full results as you type. We suggested that the problem here was that French courts did not understand the technology.
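Google hasn't published its algorithm, but the basic mechanism described above — ranking completions by how often other users searched for them — can be sketched in a few lines. Everything here (class and method names, the toy data) is illustrative, not Google's actual code:

```python
from collections import defaultdict

class SuggestIndex:
    """Toy model of prefix-based suggestion: completions are simply the
    most frequent past queries that share the typed prefix."""

    def __init__(self):
        self.counts = defaultdict(int)  # query -> number of times searched

    def record(self, query):
        """Log a completed search, standing in for aggregated user behavior."""
        self.counts[query.lower()] += 1

    def suggest(self, prefix, k=3):
        """Return up to k of the most-searched queries starting with prefix."""
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda qn: (-qn[1], qn[0]))  # by frequency, ties alphabetical
        return [q for q, _ in matches[:k]]

idx = SuggestIndex()
for q in ["french bakery", "french bakery", "french military victories",
          "french bakery near me"]:
    idx.record(q)
print(idx.suggest("french"))  # the most-searched queries win, whatever they say
```

The point of the sketch is the one at issue in the case: nothing in the loop inspects what a query *means* — whatever users type most often is what gets suggested.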

Journalist Mitch Wagner, who I tend to agree with more often than not, claims that we got it wrong, and that the French courts understand the technology perfectly well: they simply decided to side against Google anyway (and, separately, we should mention, against Google's CEO, as if he had anything to do with the suggestions in question):
But actually the French court understands what's going on. Google raised just those issues in its defense, and the court disagreed. "The court ruling took issue with that line of argument, stating that 'algorithms or software begin in the human mind before they are implemented,' and noting that Google presented no actual evidence that its search suggestions were generated solely from previous related searches, without human intervention," according to Computerworld.
He goes on to suggest that it can (sort of) be compared to a product liability case: if you make a product that does something "bad" (such as suggesting libelous search results), it should be your responsibility:
Is it appropriate for Google to build a search engine that automatically generates results with no intervention to be sure those results aren't libelous, defamatory, or otherwise harmful?

This is a problem that goes beyond people accused of crimes. Many companies are unhappy with the results that come up when you search on industry terms. If you make hats, and you're not on the first page of results that come up when searching the word "hats," then you're dissatisfied with Google. Does that make Google wrong? Does it matter if your hats are, in fact, better and more popular than those of companies whose search terms rank higher?
I'm sorry, but I don't buy it. I understand Wagner's point, but I think the French courts still don't really understand the issues. It's not a question of whether or not it's appropriate; it's a question of whether or not it's even possible. How does Google build a search engine that simply knows whether a suggestion might be considered by a court of law to be libelous? As for the different rankings, those are opinions, which should be protected speech (last we checked). If Google's results aren't good, that's an opening for another search engine. Blaming Google because you don't like how the algorithm works is still a mistake, and I don't think the French courts really recognize this at all, no matter what they say.


Reader Comments

  1.  
    Anonymous Coward, Sep 30th, 2010 @ 2:38am

    protected speech

    "As for the different rankings, those are opinions, which should be protected speech (last we checked)"

    Maybe in the US, but not in France, where some opinions
    can't legally be expressed. For instance, racist speech.

     


  2.  
    LZ7, Sep 30th, 2010 @ 2:59am

    France

    I would say that I have a fairly deep understanding of Google's AI, and the underlying philosophy that drives its evolution. To put it bluntly, these accusations are based on moronic assumptions. I happen to know for a fact that the algo has a mind of its own, and it gets smarter with every iteration. It's the world's largest neural network, after all, and if Google can be held liable for what it suggests, then so can every other smart application.

    I love how software is purely an idea... until it's patent time; then it's an "Invention".

     


  3.  
    Richard (profile), Sep 30th, 2010 @ 3:02am

    Maybe the French are just still annoyed over the whole "search for 'french military victories' and Google suggests 'did you mean french military defeats?'" thing that 4chan did ages ago.

     


  4.  
    Richard (profile), Sep 30th, 2010 @ 4:01am

    Product liability

    He goes on to suggest that it can (sort of) be compared to a product liability case: if you make a product that does something "bad" (such as suggesting libelous search results), it should be your responsibility:

    Such liability, if taken seriously, would shut down the whole of computing. All software has bugs and therefore can produce undesired results. Most software vendors have a pretty all-embracing liability disclaimer in their license agreements, and for good reason. Only a small subset of safety-critical software is tested to a high enough standard to allow liability to be accepted, and even then there are occasional problems (the RAF Chinook crash that killed Northern Ireland security personnel, for example).

    This strand of computing could not survive on its own. Do we really want to go back to 1939?

     


  5.  
    Anonymous Coward, Sep 30th, 2010 @ 4:22am

    Even if the algorithm "begin(s) in the human mind before they are implemented," the result of said algorithm most definitely doesn't come from a human mind. I don't think they're discussing the algorithm itself, just the results... so I don't see how that sentence makes sense. As for Google not proving there's no human intervention, the court hasn't proved that Google doesn't have aliens making the algorithms, which would negate the whole "human mind" thing. I think they should.

     


  6.  
    Anonymous Coward, Sep 30th, 2010 @ 5:24am

    The only way Google knows how to remedy that is to actually censor.

    http://edition.cnn.com/2010/TECH/web/09/29/google.instant.blacklist.mashable/index.html?hpt=Sbin

    There is no humanly possible way to know what is libelous at the moment without human intervention, and even then it is not possible to do it with 100% certainty. The French just don't want the feature, apparently. There is no viable solution to this problem that doesn't involve labor-intensive, cost-intensive and inevitably unreliable measures, so the only sensible thing to do is to remove the feature from French views.
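The blacklist approach described in the CNN piece amounts to bolting a term filter onto the suggestion step. A minimal sketch of that idea, with an entirely hypothetical term list and function name (Google's actual terms and matching rules are not public):

```python
# Hypothetical post-filter: drop any suggestion containing a blacklisted word.
BLACKLIST = {"escroc", "arnaque"}  # illustrative French terms ("crook", "scam")

def filter_suggestions(suggestions):
    """Keep only suggestions whose words include no blacklisted term."""
    def allowed(s):
        return not any(term in s.lower().split() for term in BLACKLIST)
    return [s for s in suggestions if allowed(s)]

print(filter_suggestions(["acme hats", "acme escroc", "acme reviews"]))
```

Which illustrates the commenter's point: a word filter is blunt. It cannot tell libel from legitimate reporting that happens to use the same words, and it misses anything not already on the list.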

     


  7.  
    Anonymous Coward, Sep 30th, 2010 @ 5:36am

    There is a vast difference in law between the French and the Americans.

    The ancestor of current French law is the Napoleonic Code.
    In the Napoleonic Code there was a de facto presumption of guilt.
    Psychologically, if the law does not explicitly permit something, it is forbidden.

    US law is based on English Law.
    In the English Law there was a de facto presumption of innocence.
    Philosophically, if something is not forbidden in law it is permitted.

     


  8.  
    abc gum, Sep 30th, 2010 @ 5:37am

    Customers who bought a dictionary also bought a thesaurus.

    What ?!
    The nerve of those people - I'll sue them for that !

     


  9.  
    NNN, Sep 30th, 2010 @ 5:57am

    "The ancestor of current French law is the Napoleonic Code.
    In the Napoleonic Code there was a de facto presumption of guilt."

    This is totally false !
    In France there is also the "présomption d'innocence"
    http://en.wikipedia.org/wiki/Presumption_of_innocence

    "Psychologically if the law does not explicitly permit something it is forbidden."

    hahaha can you really imagine that ?

     


  10.  
    Anonymous Coward, Sep 30th, 2010 @ 5:58am

    Re: France

    No one is to blame for what Skynet does?

     


  11.  
    AJB, Sep 30th, 2010 @ 6:10am

    Re:

    On eBay... For Sale - Authentic WWII French Rifle -- never shot, dropped twice.

     


  12.  
    si, Sep 30th, 2010 @ 6:13am

    This is totally false !
    In France there is also the "présomption d'innocence"
    http://en.wikipedia.org/wiki/Presumption_of_innocence


    False or not, are you seriously citing Wikipedia as a credible source of information?!

     


  13.  
    Anonymous Coward, Sep 30th, 2010 @ 6:20am

    No search, no find. France disappears into the cloud! Voila! No burqa, no freedom. No more revolution, just EU. No travel to Europe. Europe sucks.

     


  14.  
    A.H., Sep 30th, 2010 @ 6:40am

    French?

    It's painfully obvious to anyone with even the smallest amount of understanding of computers and programming that the French courts are:

    A) Completely Ignorant of Computer Matters
    B) Completely Clueless in General
    C) Holding stock in the companies bringing the lawsuits
    D) Drunk
    E) Have a croissant up their butts
    F) All of the Above

     


  15.  
    taoareyou (profile), Sep 30th, 2010 @ 6:41am

    Standards

    It's virtually impossible for Google to adhere to the various legal standards of every single country in the world. What is not legal in France is legal in the U.S. If your country deems Google's search to be illegal, take actions to prevent your citizens from accessing it. You cannot enforce your rules on every other country.

     


  16.  
    Josef, Sep 30th, 2010 @ 6:43am

    Re: Off topic

    "false or not are you seriously citing wikipedia as a credible source of information?!"

    Ummmm. I hear a lot of people put down Wikipedia as not credible. Usually those people have done no research and have nothing with which to dispute the information they are presented from Wikipedia.

    I'm just bringing this up because I generally use Wikipedia and then cross-reference with other sources, depending on the topic. I've found it to be very accurate. So I'm wondering why so many people with no alternatives feel otherwise.

     


  17.  
    Sean T Henry (profile), Sep 30th, 2010 @ 7:01am

    ?

    So Google should just disable the suggestion feature for google.fr and place at the top of searches and the homepage "Missing features? Find out why." Or just set the default to no suggestions and have the user decide to turn it on, with a notification on the page that it is the user's decision to turn it on and we are not liable for suggestions...

     


  18.  
    Berenerd (profile), Sep 30th, 2010 @ 7:04am

    Re: Re: France

    I blame John Connor. If not for him, the bots wouldn't have come back in time at all, giving those people the ability to reverse-engineer the arm and chip.

     


  19.  
    Marcus Carab (profile), Sep 30th, 2010 @ 7:18am

    Re:

    are you seriously citing wikipedia as a credible source of information?!

    Are you seriously that out of touch?

     


  20.  
    Anonymous Coward, Sep 30th, 2010 @ 7:27am

    Re: French?

    You forgot about:

    G) Enraged at being called, "surrender monkeys"
    H) Secretly ashamed at really being surrender monkeys
    I) Jealous at the influence of the USA and the comparative insignificance of France
    J) Mad as hell that English is more important than French
    K) Frustrated that they are too stupid to invent their own search engine that is anywhere near as good as Google
    L) Generally vindictive and spiteful
    M) Full of themselves
    N) Suffering from numerous other personality defects

     


  21.  
    crade (profile), Sep 30th, 2010 @ 7:30am

    If your search suggests "Bill Clinton is a moron,"
    Google is making the claim that this could be what you want to search for; they are not making the statement that Bill Clinton is a moron at all.

    How could their claim that "this could be what you want to search for" be untrue?

     


  22.  
    NNN, Sep 30th, 2010 @ 7:55am

    Re: Re: French?

    childish.

     


  23.  
    Bruno, Sep 30th, 2010 @ 8:14am

    Re: Re:

    Twice ?

     


  24.  
    Anonymous Coward, Sep 30th, 2010 @ 10:35am

    The "Automated" Defense

    So, if I create an automated device or process that does bad things, should I not then be responsible for what it does?

    Say, for example, that I rig up a shotgun with a tripwire on my property to keep "bad guys" out. If it then winds up killing neighborhood children who get on my lawn, should I then be able to just say "Hey, it's not my fault. It's an automated device! There's no way I can make it actually know who's a bad guy and who isn't!" Or should I be held responsible anyway on the grounds that I shouldn't have implemented such a device in that case? But that might discourage my "innovation".

    So, should "automation" be a defense, as Mike contends, or not?

     


  25.  
    crade (profile), Sep 30th, 2010 @ 11:08am

    Re: The "Automated" Defense

    Your example isn't really automated.. It's triggered by a tripwire. :)
    The google example isn't really automated either, it's triggered by people searching for text.

    The question is really if the tool maker should be held responsible for the actions of the tool's users. The search tool doesn't do anything automatically.

     


  26.  
    btr1701 (profile), Sep 30th, 2010 @ 11:15am

    Searches

    Since Google's auto-complete function only shows what people are factually searching for, it seems like the French courts (and those who agree with them) are saying that it's illegal to create a service that reports factual information about the search habits of internet users.

     


  27.  
    Rikuo (profile), Sep 30th, 2010 @ 11:16am

    Re: The "Automated" Defense

    I'm gonna start off with the obvious: a search on Google is nothing like rigging up a shotgun on your front lawn. Potential libel and wholesale slaughter are two different things.
    Even if you want to equate the two, you are actively setting up a system that can do physical harm. What's happening with Google is that users (not Google itself) are searching for "XX is YY" (where X is a name and Y is a negative adjective). The algorithm then notes that so many thousands/millions of people have searched for "XX is YY", and if I start typing "XX is" it will add in "YY", because more than likely, that's what I'm searching for.
    Here's an example that I thought up. Say a library hosts books and has a fancy robotic mechanism that picks up and deposits books in front of me based on what I search for. Say I type into the computer "The Holocaust Didn't Happen" or "Politician X is a Hypocrite", and it dumps books that are about what I searched for, and it gets more accurate based on users telling the computer "Yes, this book is pertinent to the topic". Is the library at fault? They didn't write the books; they merely have them on a shelf. The computer doesn't know it's libelous. Should the books be consigned to obscurity because it's against the law to search for something libelous?

     


  28.  
    Mike Masnick (profile), Sep 30th, 2010 @ 11:24am

    Re: The "Automated" Defense

    Say, for example, that I rig up a shotgun with a tripwire on my property to keep "bad guys" out. If it then winds up killing neighborhood children who get on my lawn, should I then be able to just say "Hey, it's not my fault. It's an automated device! There's no way I can make it actually know who's a bad guy and who isn't!" Or should I be held responsible anyway on the grounds that I shouldn't have implemented such a device in that case? But that might discourage my "innovation".

    You didn't really mean to make that argument, did you?

    We're not saying it's okay because it's "automated," but because it's a function of what the overall users do. Users searched on those terms, it's accurate.

    Besides, setting up a gun to shoot people is to set up a system specifically designed to perform an illegal act. Reporting what people are searching for is not.

     


  29.  
    btr1701 (profile), Sep 30th, 2010 @ 11:25am

    Re: ?

    Or just shut down their physical offices in France and leave the country altogether. Their site would still be accessible to the people of France (assuming the government didn't block it), but they wouldn't have to deal with crap like this. They could just ignore these lawsuits, let the French courts issue default judgments against them and then laugh when the plaintiffs come trying to collect.

     


  30.  
    btr1701 (profile), Sep 30th, 2010 @ 11:27am

    Re: The "Automated" Defense

    > So, if I create an automated device or process
    > that does bad things, should I not then be
    > responsible for what it does?

    Your argument assumes that factually showing what people around the world are searching for is a "bad thing".

     


  31.  
    Rikuo (profile), Sep 30th, 2010 @ 12:26pm

    Here's a scenario the French court didn't think about

    What if I'm a historian of a certain subject, and I search Google deliberately for a libelous statement?
    For example, take the David Beckham case du jour, where a prostitute has claimed he paid her to sleep with him. What if, in twenty years, I write a biography of David Beckham and want to cover this episode in his life? So I type into Google "David Beckham Prostitute", and it spits out links to her blog or something, which will be kept in some archive.

     


  32.  
    Andrew (profile), Sep 30th, 2010 @ 12:30pm

    "The court ruling took issue with that line of argument, stating that 'algorithms or software begin in the human mind before they are implemented,' and noting that Google presented no actual evidence that its search suggestions were generated solely from previous related searches, without human intervention."

    Doesn't this come a little too close to questioning safe harbour provisions? I can't address the second part of this statement (though it would surprise me greatly if more than a handful of possible suggestions had been subject to human intervention), but this blog's comment system, for example, was conceived in the human mind too. Yet if I were to write something libellous here, Techdirt would rightly not be held liable despite republishing my comments to the world.

     


  33.  
    Mitch Wagner, Sep 30th, 2010 @ 5:53pm

    Thanks for the follow-up!

    Google already screens Google Instant results for offensive content, so screening results is not ridiculous.

    I'm not saying I agree with the French courts here. I'm concerned that we're creating a future where public perception trumps reality, and if most of the Google-using population believes a thing to be true, Google will spit it back, even if that thing is actually false.

     


  34.  
    Anonymous Coward, Oct 2nd, 2010 @ 2:48pm

    Re: Re: The "Automated" Defense

    You didn't really mean to make that argument, did you?

    It's a question, not an argument. Please learn the difference. In fact, it's actually questioning *your* argument. Sorry.

    We're not saying it's okay because it's "automated,"...

    "How does Google build a search engine that simply knows whether a suggestion might be considered by a court of law to be libelous?" seems to be asking that question. Likewise, how does one rig up a shotgun booby-trap that "simply knows" when it is firing in a way that might be considered by a court of law to be a reasonable level of force in a particular situation? Could it be that if one can't then maybe, just maybe, they shouldn't be rigging up such a thing?

    Besides, setting up a gun to shoot people is to set up a system specifically designed to perform an illegal act.

    So you're telling me that it is illegal to shoot people, in any situation, where you are out there in California? I'm not familiar with California law so I'll just have to take your word on that, but I would ask you: if it is illegal to shoot people in any situation in California, then why is it that every California cop I've seen has a gun? Decoration? Interesting.

    Now, where I live it is not illegal to shoot people in certain situations, so as far as I'm concerned your argument to the contrary fails on factual grounds. But the courts here have ruled that setting up shotgun booby-traps is illegal because they may fire even when they shouldn't. In other words, automation is no excuse around here for doing something that would otherwise be illegal.

    Reporting what people are searching for is not.

    Apparently a French court disagrees with you. Somehow, I have a feeling that they're not letting you dictate otherwise to them, either.

     


  35.  
    Anonymous Coward, Oct 2nd, 2010 @ 2:52pm

    Re: Re: The "Automated" Defense

    Your argument assumes that factually showing what people around the world are searching for is a "bad thing".

    No, it doesn't. However, that does seem to be the determination that was made by the court. If you have a problem with that, then perhaps you should address your concerns to the court.

     


  36.  
    Anonymous Coward, Oct 2nd, 2010 @ 2:55pm

    Re: Re: The "Automated" Defense

    I'm gonna start off with the obvious: a search on Google is nothing like rigging up a shotgun on your front lawn. Potential libel and wholesale slaughter are two different things.

    Straw man alert: nobody was saying it was.

    Even if you want to equate the two...

    Umm, no, that was *your* strawman.

     


  37.  
    btr1701 (profile), Oct 7th, 2010 @ 8:26pm

    Re: Re: Re: The "Automated" Defense

    > > Your argument assumes that factually showing what people around
    > > the world are searching for is a "bad thing".

    > No, it doesn't. However, that does seem to be the determination that
    > was made by the court.

    And such a ruling is logically and philosophically incompatible with a free society, so I guess the French judiciary has tacitly admitted that they're no longer living in one.

    > If you have a problem with that, then perhaps you should address your
    > concerns to the court.

    Or I could just address them here, like I did. How's that?

     


  38.  
    Anonymous Coward, Oct 8th, 2010 @ 6:00pm

    Re: Re: Re: Re: The "Automated" Defense

    Or I could just address them here, like I did. How's that?

    I think the word is "ineffectual".

     


