Last year we wrote about two contradictory rulings in France involving lawsuits by companies upset about how Google Suggest works. As you probably know, as you type a query into Google, it tries to "suggest" the rest of the query based on common searches beginning with what you typed. This is all done automatically, as an algorithmic function of what people are actually searching for. The "problem" was that people were searching for the name of a company, Centre National Prive de Formation a Distance (CNFDI), and one of the most popular searches -- and therefore one of the suggested searches -- was CNFDI followed by "arnaque," which means "scam." In a similar case, brought by a company called Direct Energie, the court ruled that it was Google's fault -- oddly citing the fact that the suggestions were not listed alphabetically as evidence that Google was responsible for them. The better ruling came in the CNFDI case, where the court pointed out that search engines are "important tools for the free circulation of ideas and information," and that the fact that many people were questioning whether CNFDI was a scam was itself important and potentially useful information, and thus not libelous on its own. It also said that the burden on free speech would be too great if Google were forced to remove the suggestion.
So much for that ruling. Reader Mike Read has sent in the news that an appeals court has reversed the CNFDI ruling and found Google liable. Its reasoning is that Google lets people alert the company to "offensive" terms in Google Suggest, and it believes that "scam" is an offensive term. I have to question that logic: if people are legitimately concerned that scams are going on, why shouldn't that be expressed?