Was A French Court Correct In Blaming Google For Its Google Suggest Suggestions?

from the still-not-convinced dept

We recently wrote about yet another (the third one we know of) ruling in France that found Google liable for what “Google Suggest” suggested. Google Suggest, of course, is the autocomplete function that tries to guess what you’re searching on, based on what other people searched on after typing the same letters. More recently, that’s been expanded into Google Instant, where it actually shows full results as you type. We suggested that the problem here was that French courts did not understand the technology.
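
For those who want the mechanics, here’s a minimal sketch (our own illustration with made-up data, not Google’s actual code) of frequency-based autocomplete: the completions offered for a prefix are simply the most common past queries that begin with that prefix.

```python
from collections import Counter

# Toy frequency-based autocomplete (illustrative only; Google's real system
# is far more elaborate). Suggestions for a prefix are simply the most
# common previously logged queries that start with that prefix.

class SuggestIndex:
    def __init__(self):
        self.counts = Counter()

    def record_query(self, query: str) -> None:
        """Log one user search."""
        self.counts[query.lower()] += 1

    def suggest(self, prefix: str, k: int = 5) -> list:
        """Return the k most-searched queries beginning with prefix."""
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda qn: qn[1], reverse=True)
        return [q for q, _ in matches[:k]]

index = SuggestIndex()
for q in ["hats", "hats for sale", "hats for sale", "hatch chile"]:
    index.record_query(q)
print(index.suggest("hat"))  # ['hats for sale', 'hats', 'hatch chile']
```

Note that nothing in that loop consults an editor: the “suggestion” is just an aggregate of what users typed, which is exactly the point in dispute.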

Journalist Mitch Wagner, whom I tend to agree with more often than not, claims that we got it wrong, and that the French courts understand the technology perfectly well; they simply decided to side against Google anyway (and, separately, we should mention, against Google’s CEO, as if he had anything to do with the suggestions in question):

But actually the French court understands what’s going on. Google raised just those issues in its defense, and the court disagreed. “The court ruling took issue with that line of argument, stating that ‘algorithms or software begin in the human mind before they are implemented,’ and noting that Google presented no actual evidence that its search suggestions were generated solely from previous related searches, without human intervention,” according to Computerworld.

He goes on to suggest that it can (sort of) be compared to a product liability case, where, if you make a product that does something “bad” (such as suggesting libelous search results), it should be your responsibility:

Is it appropriate for Google to build a search engine that automatically generates results with no intervention to be sure those results aren’t libelous, defamatory, or otherwise harmful?

This is a problem that goes beyond people accused of crimes. Many companies are unhappy with the results that come up when you search on industry terms. If you make hats, and you’re not on the first page of results that comes up when someone searches the word “hats,” then you’re dissatisfied with Google. Does that make Google wrong? Does it matter if your hats are, in fact, better and more popular than those of the companies ranked higher?

I’m sorry, but I don’t buy it. I understand Wagner’s point, but I think the French courts still don’t really understand the issues. It’s not a question of whether or not it’s appropriate; it’s a question of whether or not it’s even possible. How does Google build a search engine that simply knows whether a suggestion might be considered by a court of law to be libelous? As for the different rankings, those are opinions, which should be protected speech (last we checked). If Google’s results aren’t good, that’s an opening for another search engine. Blaming Google because you don’t like how the algorithm works is still a mistake, and I don’t think the French courts really recognize this at all, no matter what they say.

Companies: google


Comments on “Was A French Court Correct In Blaming Google For Its Google Suggest Suggestions?”

LZ7 says:

France

I would say that I have a fairly deep understanding of Google’s AI and the underlying philosophy that drives its evolution. To put it bluntly, these accusations are based on moronic assumptions. I happen to know for a fact that the algo has a mind of its own, and it gets smarter with every iteration. It’s the world’s largest neural network, after all, and if Google can be held liable for what it suggests, then so can every other smart application.

I love how software is purely an idea… until it’s patent time, then it’s an “Invention”.

Richard (profile) says:

Product liability

He goes on to suggest that it can (sort of) be compared to a product liability case, where, if you make a product that does something “bad” (such as suggesting libelous search results), it should be your responsibility:

Such a liability, if taken seriously, would shut down the whole of computing. All software has bugs and therefore can produce undesired results. Most software vendors have a pretty all-embracing liability disclaimer in their license agreements – and for good reason. Only a small subset of safety-critical software is tested to a high enough standard to allow liability to be accepted – and even then there are occasional problems (the 1994 RAF Chinook crash that killed Northern Ireland security personnel, for example).

That small strand of computing could not survive on its own. Do we really want to go back to 1939, before the computer age?

Anonymous Coward says:

Even if the algorithm “begin(s) in the human mind before they are implemented”, the result of said algorithm most definitely doesn’t come from a human mind. I don’t think they’re discussing the algorithm itself, just the results… so I don’t see how that sentence makes sense. As for Google not proving there’s no human intervention: the court hasn’t proved that Google doesn’t have aliens making the algorithms, which would negate the whole “human mind” thing. I think they should.

Anonymous Coward says:

The only way Google knows how to remedy that is to actually censor.

http://edition.cnn.com/2010/TECH/web/09/29/google.instant.blacklist.mashable/index.html?hpt=Sbin

There is no humanly possible way, at the moment, to know what is libelous or not without human intervention, and even then it is not possible to do with 100% certainty. The French just don’t want the feature, apparently. There is no viable solution to this problem that doesn’t involve labor-intensive, cost-intensive and inevitably unreliable measures, so the only sensible thing to do is to remove the feature for French users.
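
That blacklist amounts to a filter applied after the suggestions are generated. A minimal sketch of the idea (illustrative only; the blocked terms below are hypothetical examples, not Google’s actual list):

```python
# Illustrative only: suppress any suggestion containing a blocked term.
# The blacklist contents are hypothetical examples, not Google's real list.

BLACKLIST = {"arnaque", "escroc"}  # hypothetical French terms ("scam", "crook")

def filter_suggestions(suggestions):
    """Drop any suggestion containing a blacklisted word."""
    return [s for s in suggestions
            if not any(term in s.lower() for term in BLACKLIST)]

print(filter_suggestions(["acme arnaque avis", "acme hats", "acme escroc"]))
# ['acme hats']
```

The obvious weakness is that a static list can never anticipate every phrase a court might later deem libelous, which is the unreliability described above.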

Anonymous Coward says:

There is a vast difference in law between the French and the Americans.

The ancestor of current French law is the Napoleonic Code.
In the Napoleonic Code there was a de facto presumption of guilt.
Psychologically, if the law does not explicitly permit something, it is forbidden.

US law is based on English law.
In English law there was a de facto presumption of innocence.
Philosophically, if something is not forbidden in law, it is permitted.

NNN says:

“The ancestor of current French law is the Napoleonic Code.
In the Napoleonic Code there was a de facto presumption of guilt.”

This is totally false!
In France there is also the “présomption d’innocence”:
http://en.wikipedia.org/wiki/Presumption_of_innocence

“Psychologically, if the law does not explicitly permit something, it is forbidden.”

hahaha can you really imagine that?

Josef says:

Re: Off topic

“false or not are you seriously citing wikipedia as a credible source of information?!”

Ummmm. I hear a lot of people put down Wikipedia as not credible. Usually those people have done no research and have nothing to dispute the information they are presented from Wikipedia.

I’m just bringing this up because I generally use Wikipedia and then cross-reference with other sources, depending on the topic. I’ve found it to be very accurate. So I’m wondering why so many people with no alternatives feel otherwise.

A.H. says:

French?

It’s painfully obvious to anyone with even the smallest amount of understanding of computers and programming that the French courts are:

A) Completely Ignorant of Computer Matters
B) Completely Clueless in General
C) Have stock in the companies bringing the lawsuits
D) Drunk
E) Have a croissant up their butts
F) All of the Above

Anonymous Coward says:

Re: French?

You forgot about:

G) Enraged at being called “surrender monkeys”
H) Secretly ashamed at really being surrender monkeys
I) Jealous at the influence of the USA and the comparative insignificance of France
J) Mad as hell that English is more important than French
K) Frustrated that they are too stupid to invent their own search engine that is anywhere near as good as Google
L) Generally vindictive and spiteful
M) Full of themselves
N) Suffering from numerous other personality defects

Sean T Henry (profile) says:

?

So Google should just disable the suggestion feature for google.fr and place, at the top of searches and the homepage, “Missing features? Find out why.” Or just set the default to no suggestions and let the user decide to turn it on, with a notification on the page that enabling it is the user’s decision and that Google is not liable for the suggestions…
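
A minimal sketch of that opt-in default (a hypothetical config, not Google’s actual system): suggestions ship disabled for French users, and only an explicit user choice turns them on.

```python
# Hypothetical locale-gated default for the suggestion feature.

DEFAULT_SUGGEST_BY_LOCALE = {"fr": False}  # off by default for google.fr

def suggestions_enabled(locale, user_opt_in=None):
    """An explicit user choice wins; otherwise use the locale default (on)."""
    if user_opt_in is not None:
        return user_opt_in
    return DEFAULT_SUGGEST_BY_LOCALE.get(locale, True)

print(suggestions_enabled("fr"))        # False: hidden until the user opts in
print(suggestions_enabled("fr", True))  # True: the user chose to enable it
```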

btr1701 (profile) says:

Re: ?

Or just shut down their physical offices in France and leave the country altogether. Their site would still be accessible to the people of France (assuming the government didn’t block it), but they wouldn’t have to deal with crap like this. They could just ignore these lawsuits, let the French courts issue default judgments against them and then laugh when the plaintiffs come trying to collect.

Anonymous Coward says:

The "Automated" Defense

So, if I create an automated device or process that does bad things, should I not then be responsible for what it does?

Say, for example, that I rig up a shotgun with a tripwire on my property to keep “bad guys” out. If it then winds up killing neighborhood children who get on my lawn, should I then be able to just say “Hey, it’s not my fault. It’s an automated device! There’s no way I can make it actually know who’s a bad guy and who isn’t!” Or should I be held responsible anyway, on the grounds that I shouldn’t have implemented such a device in the first place? But that might discourage my “innovation”.

So, should “automation” be a defense, as Mike contends, or not?

crade (profile) says:

Re: The "Automated" Defense

Your example isn’t really automated… it’s triggered by a tripwire. 🙂
The Google example isn’t really automated either; it’s triggered by people searching for text.

The question is really if the tool maker should be held responsible for the actions of the tool’s users. The search tool doesn’t do anything automatically.

Rikuo (profile) says:

Re: The "Automated" Defense

I’m gonna start off with the obvious: a search on Google is nothing like rigging up a shotgun on your front lawn. Potential libel and wholesale slaughter are two different things.
Even if you want to equate the two, with the shotgun you are actively setting up a system that can do physical harm. What’s happening with Google is that users (not Google itself) are searching for “XX is YY” (where XX is a name and YY is a negative adjective). The algorithm then notes that so many thousands/millions of people have searched for “XX is YY”, so if I start typing “XX is” it will add in “YY”, because more than likely, that’s what I’m searching for.
Here’s an example I thought up. Say a library hosts books and has a fancy robotic mechanism that picks up and deposits books in front of me based on what I search for. Say I type into the computer “The Holocaust Didn’t Happen” or “Politician X is a Hypocrite”, and it dumps books that are about what I searched for, and it gets more accurate based on a user telling the computer “Yes, this book is pertinent to the topic”. Is the library at fault? They didn’t write the books; they merely have them on a shelf. The computer doesn’t know it’s libelous. Should the books be consigned to obscurity because it’s against the law to search for something libelous?

Anonymous Coward says:

Re: Re: The "Automated" Defense

I’m gonna start off with the obvious: a search on Google is nothing like rigging up a shotgun on your front lawn. Potential libel and wholesale slaughter are two different things.

Straw man alert: nobody was saying it was.

Even if you want to equate the two…

Umm, no, that was *your* strawman.

Mike Masnick (profile) says:

Re: The "Automated" Defense

Say, for example, that I rig up a shotgun with a tripwire on my property to keep “bad guys” out. If it then winds up killing neighborhood children who get on my lawn, should I then be able to just say “Hey, it’s not my fault. It’s an automated device! There’s no way I can make it actually know who’s a bad guy and who isn’t!” Or should I be held responsible anyway, on the grounds that I shouldn’t have implemented such a device in the first place? But that might discourage my “innovation”.

You didn’t really mean to make that argument, did you?

We’re not saying it’s okay because it’s “automated,” but because it’s a function of what the users, in aggregate, actually did. Users searched on those terms; the suggestion accurately reflects that.

Besides, setting up a gun to shoot people is to set up a system specifically designed to perform an illegal act. Reporting what people are searching for is not.

Anonymous Coward says:

Re: Re: The "Automated" Defense

You didn’t really mean to make that argument, did you?

It’s a question, not an argument. Please learn the difference. In fact, it’s actually questioning *your* argument. Sorry.

We’re not saying it’s okay because it’s “automated,”…

“How does Google build a search engine that simply knows whether a suggestion might be considered by a court of law to be libelous?” seems to be asking that question. Likewise, how does one rig up a shotgun booby-trap that “simply knows” when it is firing in a way that might be considered by a court of law to be a reasonable level of force in a particular situation? Could it be that if one can’t then maybe, just maybe, they shouldn’t be rigging up such a thing?

Besides, setting up a gun to shoot people is to set up a system specifically designed to perform an illegal act.

So you’re telling me that it is illegal to shoot people, in any situation, where you are out there in California? I’m not familiar with California law so I’ll just have to take your word on that, but I would ask you: if it is illegal to shoot people in any situation in California, then why is it that every California cop I’ve seen has a gun? Decoration? Interesting.

Now, where I live it is not illegal to shoot people in certain situations, so as far as I’m concerned your argument to the contrary fails on factual grounds. But the courts here have ruled that setting up shotgun booby-traps is illegal because they may fire even when they shouldn’t. In other words, automation is no excuse around here for doing something that would otherwise be illegal.

Reporting what people are searching for is not.

Apparently a French court disagrees with you. Somehow, I have a feeling that they’re not letting you dictate otherwise to them, either.

Anonymous Coward says:

Re: Re: The "Automated" Defense

Your argument assumes that factually showing what people around the world are searching for is a “bad thing”.

No, it doesn’t. However, that does seem to be the determination that was made by the court. If you have a problem with that, then perhaps you should address your concerns to the court.

btr1701 (profile) says:

Re: Re: Re: The "Automated" Defense

> > Your argument assumes that factually showing what people around
> > the world are searching for is a “bad thing”.

> No, it doesn’t. However, that does seem to be the determination that
> was made by the court.

And such a ruling is logically and philosophically incompatible with a free society, so I guess the French judiciary has tacitly admitted that they’re no longer living in one.

> If you have a problem with that, then perhaps you should address your
> concerns to the court.

Or I could just address them here, like I did. How’s that?

Rikuo (profile) says:

Here's a scenario the French court didn't think about

What if I’m a historian of a certain subject, and I deliberately search Google for a libelous statement?
For example, take the David Beckham case du jour, where a prostitute has claimed he paid her to sleep with him. What if in twenty years I write a biography of David Beckham and I want to cover this episode in his life? So I type into Google “David Beckham Prostitute”, and it spits out links to her blog or something, which will be kept in some archive.

Andrew (profile) says:

“The court ruling took issue with that line of argument, stating that ‘algorithms or software begin in the human mind before they are implemented,’ and noting that Google presented no actual evidence that its search suggestions were generated solely from previous related searches, without human intervention.”

Doesn’t this come a little too close to questioning safe harbour provisions? I can’t address the second part of this statement (though it would surprise me greatly if more than a handful of possible suggestions had been subject to human intervention), but this blog’s comment system, for example, was conceived in the human mind too. Yet if I were to write something libellous here, Techdirt would rightly not be held liable despite republishing my comments to the world.

Mitch Wagner (profile) says:

Thanks for the follow-up!

Google already screens Google Instant results to keep them inoffensive, so screening results is not ridiculous.

I’m not saying I agree with the French courts here. I’m concerned that we’re creating a future where public perception trumps reality, and if most of the Google-using population believes a thing to be true, Google will spit it back, even if that thing is actually false.
