The Good Censor Document Shows Google Struggling With The Challenges Of Content Moderation

from the thoughtful-analysis dept

Last week, the extreme Trump-supporting media sites went positively ballistic when Breitbart released a leaked internal presentation entitled “The Good Censor.” According to Breitbart and the other Trumpkin media, this is somehow “proof” that Google is censoring conservatives, giving up on free speech and planning to silence people like themselves. To put this into a context those sites would understand, this is “fake news.” I finally had the time to read through the 85-page presentation and, uh, it paints a wholly different picture than the one that Breitbart and such sites have been painting.

Instead, it pretty clearly lays out why content moderation is impossible to do well at scale and that it will always result in decisions that upset a lot of people (no matter what they do). It also discusses how “bad actors” have effectively weaponized open platforms to silence people.

It does not, as some sites have suggested, show a Google eager to censor anyone. Indeed, the report repeatedly highlights the difficult choices the company faces, and repeatedly notes how any move towards increased censorship can and will be abused by governments to stamp out dissent. It’s also pretty self-critical, highlighting how the tech companies themselves have mismanaged all of this to make things worse (and that’s just one example from the much more thorough analysis in the document).

The presentation actually spends quite a lot of time talking about the problems of any censorship regime, but also noting that various governments basically are requiring censorship around the globe. It’s also quite obviously not recommending a particular path, but explaining why companies have gotten more aggressive in moderating content of late (and, no, it’s not because “Trump won”). It notes how bad behavior has driven away users, how governments have been increasingly using regulatory and other attacks against tech companies, and how advertisers were being pressured to drop platforms for allowing bad behavior.

The final five slides are also absolutely worth reading. It notes that “The answer is not to ‘find the right amount of censorship’ and stick to it…” because that would never work. It acknowledges that there are no right answers, and then sets up nine principles — in four categories — which make an awful lot of sense.

  • Be more consistent
    • Don’t take sides
    • Police tone instead of content
  • Be more transparent
    • Justify global positions
    • Enforce standards and policies clearly
    • Explain the technology
  • Be more responsive
    • Improve communications
    • Take problems seriously
  • Be more empowering
    • Positive guidelines
    • Better signposts

You can quibble about these, but they’re mostly good general principles (though perhaps hard to put into practice). I also think there are some other ideas that are missing — such as moving control out to the ends of the network rather than continuing to handle everything at the company level, or providing incentives for good behavior — but this is hardly the list of a company saying it’s actively going to censor political speech. Indeed, many of the complaints people have about the moderation are about the lack of transparency and consistency in these actions. Though, I’d also argue that this presentation underplays why consistency is so difficult when you rely on thousands of people to make determinations on content with varying shades of gray and no clear answers as to what’s “good” or “bad.”

    So while this document is being used to attack Google, it actually is yet another useful tool in showing (1) just how impossible it is to do these things right and (2) how carefully companies are thinking about this issue (rather than just ignoring it, as many insist). I recognize that’s not as “fun” a story as slagging the big bad tech giant for its plan to silence people, but… it’s a more accurate story.

    Companies: google


    Comments on “The Good Censor Document Shows Google Struggling With The Challenges Of Content Moderation”

    64 Comments
    Lee Yin Batarde says:

    SO after a week: IF "impossible", then Google IS doing it WRONG.

    (1) just how impossible it is to do these things right and (2) how carefully companies are thinking about this issue (rather than just ignoring it, as many insist).

    Your #2 is FLAT LIE: NO ONE said Google is "ignoring it"! In fact, the complaint — ADMITTED RIGHT THERE BY GOOGLE — is that Google / Facebook / Twitter ARE CENSORING, with the further point also substantiated in that PDF that they focus on "conservative" or Republicans as problem. — YES, are some "leftists" or "terrorist" examples, but in practice says Trump stole the election.

    Lee Yin Batarde says:

    Re: SO after a week: IF "impossible", then Google IS doing it WRONG.

    Google is just trying to find a cover story for its attacks on free speech of political opponents — and for its intent to gain money in Communist China REGARDLESS.

And then, Google tries to push all blame off onto "governments", even though there are no examples of that given.

    That’s why legislation and court cases are in the works. — My bet is that Kavanaugh strongly affirms my opinion that "platforms" ARE the new Public Forums and that corporations ARE violating First Amendment Rights of "natural" persons, NOT free to "moderate" as wish which Masnick constantly tries to put over. — IF NOT the current "MNN" case, in another.

    Feeble defense of your "sponsor", a week late, and FLAT LYING, the best one can expect from a Google shill.

    New readers, if any, substance for the charge is:

    https://copia.is/wp-content/uploads/2015/06/sponsors.png

NOTE ALSO that Masnick NEVER mentions that "sponsorship" here, as any journalist is ethically required to: Masnick is NOT a journalist, and has only Google’s interest for his ethics.

    btr1701 (profile) says:

    Re: Re: Re: SO after a week: IF "impossible", then Google IS doing it WRONG.

    Be more responsive

    > •Improve communications

    Improve communications? Google wins the Understatement of the Year award with that one. I suppose anything would be a step up from ‘nothing’. When someone is banned or suspended on any of these platforms, they’re sent a vague computer-generated notice and given no ability to speak to anyone about it.

    Stephen T. Stone (profile) says:

    Re: Re:

    My bet is that Kavanaugh strongly affirms my opinion that "platforms" ARE the new Public Forums and that corporations ARE violating First Amendment Rights of "natural" persons, NOT free to "moderate" as wish which Masnick constantly tries to put over.

In which case, you can say goodbye to virtually every comment section on a blog (including this one), every social interaction network, and every other kind of website that allows third-party submissions. No “platform” will ever allow third-party submissions if they cannot moderate the platform as they see fit.

    That One Guy (profile) says:

    Re: Re: Re: Be careful what you wish for

    It never ceases to be funny watching them arguing for a position that stands to screw them over if they actually ‘win’ and get it. It’s like watching someone arguing for the immediate destruction of a bridge they are currently standing on; you almost want to see them get what they claim to want, just to see the look on their face when they realize what it means for them.

    John Smith says:

    Re: Re: SO after a week: IF "impossible", then Google IS doing it WRONG.

    Not disclosing a sponsor is a clear violation of FTC rules. If you have proof, that would be who to take it up with. Generally anyone with an IQ over 30 can see where someone’s loyalties lie, sponsored or not.

    I still say the stupid people in the audience need to be sterilized, and until we do that, it’s pointless to try to save these slugs from themselves. We’re a world overridden by idiotic slugs who shouldn’t be allowed to breed, and who are poisoning the gene pool to the point where extinction is threatened.

    Yes, I’m being hyperbolic. I do not actually support Eugenics…for now.

    Stephen T. Stone (profile) says:

    Re: Re: Re:

    I do not actually support Eugenics…for now.

    Which means you support it, but you are unwilling to own your position until the Overton Window moves close enough to said position that it becomes acceptable.

    Just say you want to sterilize “the poors” (or whatever segment of the population you want to start sterilizing) and get it over with. I promise, no one here will think any less of you than they already do.

    Anonymous Coward says:

    Re: Re: SO after a week: IF "impossible", then Google IS doing it WRONG.

    It’s funny how some organization, accepting funding / donations from a slew of other entities is somehow magically beholden to the one donor you don’t like when you disagree with the opinion of someone in the organization.

    In politics, accepting donations and lobbying and pre-written bills to be signed into law is just free speech. (For one side of the aisle anyway.)

Do you really expect Copia Institute sponsors to be listed at the head of every article? No, no you don’t. You wouldn’t list such things repeatedly, either, when they are clearly available at a permanent and obvious (conspiratorially secret) link. And it would blow all the fun out of the water for you and your ilk.

    Also, if Google owns Masnick, they should fire him for doing a really awful job of promoting teh Goog propaganda.

    As for flat lying, see Breitbart, and also Fox News, who has a court ruling under their belt saying that they can lie to the public under the guise of “news” all they want. They proudly fought for their right to lie. (And they both censor the fuck out of anything “left” or which they otherwise don’t like.)

    Of course, y’all do have the right to generate fake news and play your fake martyr game. Have fun with that.

    Gary (profile) says:

    Re: SO after a what?

    Your #2 is FLAT LIE: NO ONE said Google is "ignoring it"! In fact, the complaint

Your accusation is a flat-out lie. It is not untruthful, or in any way incorrect, to say people have accused Google of ignoring problems. Clearly, many people have said this!

Google/Twitter/FB absolutely cannot operate without moderation. As the article states, moderation is imposed by various legal and extralegal factors.

What your comment fails to do is show that this moderation is designed to silence your faction. (Please cite.)

    ShadowNinja (profile) says:

    Re: Re: Breitbart

    Don’t forget serial liars who viciously slander the reputations of innocent people.

    Like the black woman who they claimed gleefully watched a white couple lose their farm instead of doing her job and helping them (the media found said white couple from the story. They said the black woman saved their farm and they were eternally grateful to her).

    Anonymous Coward says:

    > Last week, the extreme Trump-supporting media sites went positively ballistic when Breitbart released a leaked internal presentation entitled “The Good Censor.”

Not only Trump-supporting media sites.

    As a Democrat voter I firmly argue for anti-trust breakup of the major tech companies in response to their efforts to censor free speech of American citizens.

    This goes beyond the invasion of a plain language reading of the Fourth Amendment online. What has transpired in the UK is absolutely Orwellian beyond what we have thus far in the US. Reading through The Good Censor, the central argument is that Google wants to apply the UK system of censorship in America.

    That hill I deem worth dying on.

    The loud censorship happy portion of the Democrat voter base is an over-vocal minority. The much larger silent majority strongly disagree with censorship. On that point I’m happy to agree with the current administration.

    Win through words and argument. When you can’t win through argument of your words, you’ve lost to the better idea. Revise and improve on the platform.

    John Smith says:

    Re: Re:

    Yet Google also runs the internet’s largest USENET archive and posting vehicle, while YouTube is very lenient about allowing controversial speech.

    If people wanted free speech online, USENET would still be thriving, but it’s not. Every time a company tries to censor people, the market fixes the problem long before regulators ever could. Censorship is what destroyed AOL and it took all of four years.

    tom (profile) says:

    Re: Re: Re:

USENET got screwed over by several politicians uttering that dread mantra “Think of the Children!” because a very small piece of USENET carried channels where folks could talk about pedophilia and exchange files. Instead of encouraging the moderation of those few channels, the pols instead strong-armed the ISPs to drop USENET from their free offerings. The likely real reason was the fairly large exchange of files the large media companies claimed were pirated content.

    Instead of using a paid USENET provider, a lot of folks switched to things like Yahoo Groups.

    If the politicians really get involved in this fight, wonder if we will see a repeat.

    Stephen T. Stone (profile) says:

    Re: Re:

    I firmly argue for anti-trust breakup of the major tech companies in response to their efforts to censor free speech of American citizens.

What efforts have Twitter, Google, etc. undertaken to prevent the average person from using their voice? I mean, what have they done to stop you, me, or anyone else from spinning up a one-user personal Mastodon instance, self-hosting a personal blog, or literally anything else that will allow people to express themselves without relying on the privilege of using platforms owned by Twitter, Google, etc.?

    And yes, use of those platforms is a privilege, not a right. You are not entitled to force Twitter into hosting your speech; the same goes for Google, Facebook, and any other platform which you do not personally own. By the same token, those platforms cannot legally prevent you from jumping onto another platform—i.e., Google cannot stop me from using Twitter, Tumblr, or Mastodon to bitch about Google’s moral and ethical shortcomings. If you have a law, statute, or court ruling that says otherwise on any of those points, now would be the time to present it.

    Win through words and argument. When you can’t win through argument of your words, you’ve lost to the better idea.

    That explains why Republicans have done their best to disenfranchise voters, gerrymander voting districts, and make voting a much harder process.

    John Smith says:

    Re: Re: Re:

    Google has done nothing to silence anyone, as far as I can see. Twitter definitely has, through uneven enforcement. Who is on its “Trust And Safety Council” again? Facebook falls somewhere in between.

    If people want free speech, let’s all go back to USENET, or boycott companies which censor. Perhaps we can apply USENET’s SPAM rules and “throttle” spammers without censoring anyone or let people opt-in to “total free speech.” Some company will jump profitably into this vacuum if it’s a real problem.

    YouTube bans very few videos, and Vimeo bans even fewer. Search engines do index controversial material. People who search for news using specific search terms will generally get unbiased results.

    The Wanderer (profile) says:

    Re: Re: Re:

    I would add one more conditional on there – not only can they not legally prevent you from speaking by other channels [1], they also cannot practically prevent you from doing so.

    If they could do (and were doing) the latter, you’d have a hard time convincing me that that would not still be a violation of the freedom of speech, although I readily grant that it would not be a violation of the First Amendment.

[1] I’m parsing "legally" here as "with the force of law", not "without violating the law". I.e., not that it would be illegal for them to prevent you from doing this, but merely that an attempt by them to prevent you from doing it would not be backed by government enforcement. If there are laws which would in fact make such prevention an illegal act on their part, I’m not able to think of what those laws are.

    Anonymous Coward says:

    Re: Re:

It sounds like they’re moving towards it, but for the record, this report was written by Insights Labs. It contains recommendations from outsiders, presumably contracted to inform Google about what they feel has been going on with the public perception of censorship online.

    I used to think it was a sure sign of things moving in the direction of censorship, but after watching this video, I’m not so sure.

It could go either way. If this document, being written by people contracted by Google itself to give it recommendations, is listened to, then it may step away from fiddling with search results and YouTube videos as much.

    If not, it’s as good as pissing in the wind.

    I wonder if this isn’t a PR stunt to "out" observers’ own reactions. That’s useful data to Google, too.

    The Wanderer (profile) says:

    Re: Re:

    As a Democrat voter

    I really want to dismiss this claim as specious on the basis of the fact that the only people who use "Democrat" as an adjective seriously (outside of contexts like "Democrat [so-and-so’s name]") are kneejerk right-wingers.

    Unfortunately, that fact seems to be becoming increasingly obsolete and inaccurate these days…

    John Smith says:

    So when someone isn’t banned from Twitter for threatening to come to my home to kill me, and another claims to be standing outside my home, ready to shoot me, but when I get a ban for suggesting #metoo is hypocritical, that’s not bias, just “good censorship.”

    That’s just the tip of the iceberg. Shadowbanning is rampant online, but even then, the market can handle it. New companies will always spring up to monetize the audiences ignored by the more established outfits. Twitter was once such a bastion of free speech, until it got big. This story has played out dozens of times since it first happened on AOL, where censorship of conservatives inadvertently led to what would become the alt.right.

    Whatever this site’s agenda, if what it says is true, or false, that will ultimately come to light. The internet detects censorship and damage and routes around it. Always has, always will. That’s the nature of a decentralized communications medium that was designed to survive a nuclear war.

    Stephen T. Stone (profile) says:

    Re: Re: Re:

    It’s not imaginary at all.

    It is if you refuse to offer any proof that it happened, up to and including the Twitter usernames of everyone involved as well as uncensored, undoctored screenshots of the tweets in question and (if possible) direct links to those tweets. Provide the proof and we will judge it for ourselves. Until then, your anecdote is bullshit and we will continue to call it such.

    John Smith says:

    Re: Re: Re:2 Re:

    Given who the proof would name, that’s very ironic.

    As I said, it’s the tip of the iceberg, and the extreme case is Alex Jones anyway. Back in the 1990s it was 2 Live Crew. In the 1960s it was Lenny Bruce.

In the 1970s, people wanted Three’s Company banned for being too risqué, and now they show reruns of it on Nick At Nite.

I’ve seen enough of Twitter’s enforcement to conclude bias, and enough of Google’s to conclude that it is not. Other people obviously have different opinions.

    I’ve also said that censorship is a problem that tends to solve itself. AOL was clearly biased in the 1990s and cost itself its position of dominance.

    Stephen T. Stone (profile) says:

    Re: Re: Re:3

    I’ve seen enough of Twittet’s enforcement to conclude bias, and enough of Google’s to conclude that it is not.

    So what?

    Twitter is legally allowed to show bias in its moderation; any site that moderates third-party submissions is allowed that same right. To say that Twitter does not have that right is to say that YouTube, Archive of Our Own, FurAffinity, and even Stormfront—just for a few quick examples—have no right to show bias in favoring or disfavoring certain content. It would amount to saying a site like Twitter, AO3, etc. must host any form of legal speech no matter what. How would you feel if you opened a blog and you were forced to host a wholly unmoderated comments section because the law says you cannot show bias in your moderation?

    Bias is not just “left vs. right”. Bias is pro-LGBT vs. anti-LGBT, pro-choice vs. anti-choice, pro-racism vs. anti-racism, pro-Google vs. anti-Google—in other words, it is the decision to favor a specific opinion or point of view over opposite-yet-similar opinions/views. If you hold the opinion that foul language has no place on your blog, you have the right to moderate your blog’s comments section in a way that conforms to your bias against words like “fuck”, “shit”, and “Barbra Streisand”. No law, statute, or court ruling says you must be forced to host such speech on your blog; if someone wants to use it, they can use it anywhere else that will accept it—but they cannot force it upon your blog.

    I would think a notion such as “you can’t be forced to host speech you don’t wanna host” is non-controversial. Then again, people like you seem to think Twitter should be forced to host the speech of disinformation peddlers such as InfoWars, White supremacists such as Richard “I got alt-highfived and became a living joke” Spencer, absolute lunatics such as Donald Trump, or even just a worthless pissant furry with lots of free time on his hands.

    Anonymous Coward says:

    Re: Re: Re:2 Re:

    Well, one could judge it to be poorly executed moderation, but to take it as evidence for this putative agenda-driven censorship conspiracy claim would require a lot more evidence.

Of course, if some elements of this are true, and the threats were true (as opposed to protected hyperbolic speech), then I would be calling the fooken cops, not Twitter. Then send Twit the police report.

    Claims of directed bias are like other belief and pattern seeking effects of the unexamined human mind. In this case, as if one being were looking at all the things and particularly singled out a TwitUser with an agenda.

    I am not saying it can’t happen, that a lame employee could conceivably have it in for a particular TwitUser and somehow handles all the complaint material associated with that person, but for it to happen at scale is, frankly, ridiculous. And since not all anti-#me-too comments or accounts, for example, get deleted/banned, not even a sizable fraction of them, we may assume the claims of bias are brain farts or direct bullshit. (Yeah. They really do censor one out of a billion comments with their left-wing (? or whatever) agenda. It is really putting the kibosh on “conservatives”. @@ )

    Anonymous Coward says:

    Re: Re: Re:3 Re:

Project Veritas exposed Twitter employees targeting conservative users by giving negative scores to things such as “America”, “Guns”, “flag”, and other conservative talking points, and flagging the accounts as bots/banning them when a threshold was crossed.
It can be a “lame employee” or a whole division that, instead of doing their jobs, ends up being a censor center.

Your argument that “not all accounts using the same speech are banned, therefore censorship is nonexistent” is beyond stupid.

    Anonymous Coward says:

    Re: Re: Re: Re:

    _Were it proven, would you call it bias?_

    Were it proven, I’d call it a miracle that you proved anything.

    _There are enough public examples not to need mine._

    And this is exactly why you have no credibility, bobmail.

Shouldn’t you be writing a self-help book on how to get all the pussy you can grab?

    That One Guy (profile) says:

    Re: Re: Re: 'The commentor who cried 'death threats!'(among other things)'

    Coulda swore there was some old story about that, something about a young shepherd and a wolf that didn’t exist?

    Memory’s a little fuzzy, but I seem to recall some sorta moral about how if you make a habit of lying and/or get a reputation for dishonesty then even if you do happen to tell the truth at some point no-one will have any reason to believe you, and you’ll have no-one but yourself to blame for that.

    Completely unrelated of course, no idea why it even came to mind after reading your comment.

    Toom1275 (profile) says:

    Re: Re: Re:2 'The commentor who cried 'death threats!'(among other things)'

Didn’t he mention something about having consulted with lawyers? If so, then that’s probably mutually exclusive with the idea of suing Techdirt. Unless your lawyer is Charles Harder, they’d probably strongly recommend against suing someone over malicious false claims made by the plaintiff.

    That One Guy (profile) says:

    Re: Re: Re:3 'The commentor who cried 'death threats!'(among other things)'

He might have (or at least claimed to have done so). After having warned him twice to stop with the unsupported claims and watching him continue to make them, I filed him under ‘dishonest troll’ and flag him by default, ignoring anything he says as a waste of time.

    Stephen T. Stone (profile) says:

    Re: Re:

    So

    Damn, but you love to otherword people.

    when someone isn’t banned from Twitter for threatening to come to my home to kill me, and another claims to be standing outside my home, ready to shoot me, but when I get a ban for suggesting #metoo is hypocritical, that’s not bias, just "good censorship."

    No, it is unequal moderation. The whole point of this article, like others in its vein, is that moderation does not properly scale when a platform grows as big as Twitter. Mistakes will be made, hopefully (but not always) in good faith. When those mistakes hit you, you can either “route around” them—more on that in a bit—or you can get irrationally angry about a platform denying you a privilege you thought was an entitlement. Your choice.

    (By the by: Those other fuckers should have been banned on principle, while you should have received only a metaphorical ass-kicking in your mentions.)

    The internet detects censorship and damage and routes around it. Always has, always will.

    What, then, makes people like you feel the need to get all pissy about Google when y’all can “route around it” with ProtonMail, the Mastodon and PeerTube protocols, DuckDuckGo, and any non-Google service or protocol that competently recreates the functionality of existing Google services?

    Mike Masnick (profile) says:

    Re: Re:

    So when someone isn’t banned from Twitter for threatening to come to my home to kill me, and another claims to be standing outside my home, ready to shoot me, but when I get a ban for suggesting #metoo is hypocritical, that’s not bias, just "good censorship."

    This is a strawman. We never said that’s "good censorship." Indeed, even if we take your version of events that happened to you as an accurate depiction (which I find unlikely, but let’s take it), that only serves to further prove the point: content moderation at scale is impossible to do well. It will always lead to mistakes. No one is saying that’s "good censorship."

    But we should be encouraging platforms to be thoughtful and careful in how they manage these things — because they’re under tremendous pressure to "do something." So mocking companies for having a thoughtful approach is just as ridiculous as saying they "shouldn’t do anything."

    Anonymous Coward says:

    Re: Re:

    when I get a ban for suggesting #metoo is hypocritical, that’s not bias, just "good censorship."

    What? Of course it’s bias. Censorship is always biased; any act of dividing texts into those that should be or shouldn’t be censored is bias (as is the general decision of whether any censorship should exist, e.g., I’m biased against censorship even if others think it’s "good").

    Christenson says:

    Here we go again!

    Betting this ends up with >100 comments, many of them flagged.

    I’ve been reading about how Instagram also enables abuse, noticing comments about irrelevant garbage showing up in the mmasnick twitter feed, and thinking:

    What should be censored is *very* contextual…same content is OK or not depending on presentation. So garbage for *YOU* is fine in *MY* abuse collection. Oh, and at the moment, there are no consequences to anyone, hardly, for abuse. The difficulty also arises from “free” platforms…moderation has to come from somewhere, and 20K paid employees just aren’t going to cut it for a billion users.

    Therefore:
1) push decisionmaking to the end users, who are crowdsourced. (See “flag” button on Techdirt, seems to work pretty well; a rough sketch of this idea follows the list below.)

2) Every area/channel has an “owner”…for YouTube it is whoever posts the video, for Twitter it is the person whose feed it is, etc. Allow OWNERS to have good tools for the comments, including filtering out “burner” accounts, closing comments, etc.
    (Yes, I’ve got a content farm of long-running burner accounts with lots of followers for sale, but anonymous horribleness has to have a cost somewhere)

    3) Allow for multiple rating bodies…half the US is panicking about “pornography” while the other half is wondering when the first half will wake up to reality…and of course wants to ensure that their prissy employer’s computers don’t access anything NSFW so as to stay out of Title IX trouble!
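To make point 1 (and a bit of point 2) concrete, here is a minimal sketch of crowdsourced flagging, assuming a simple hide-after-N-flags rule that discounts flags from brand-new “burner” accounts. The class name, thresholds, and numbers here are made up for illustration and aren’t any real platform’s policy:

from dataclasses import dataclass, field

HIDE_THRESHOLD = 3          # hypothetical: hide a comment once 3 distinct readers flag it
MIN_ACCOUNT_AGE_DAYS = 7    # hypothetical: discount flags from brand-new "burner" accounts

@dataclass
class Comment:
    author: str
    text: str
    flaggers: set = field(default_factory=set)

    def flag(self, user: str, account_age_days: int) -> None:
        # Record a flag, ignoring accounts too new to trust (point 2's burner filter).
        if account_age_days >= MIN_ACCOUNT_AGE_DAYS:
            self.flaggers.add(user)

    @property
    def hidden(self) -> bool:
        # Hidden (not deleted) once enough distinct readers object;
        # the channel "owner" could still review or restore it.
        return len(self.flaggers) >= HIDE_THRESHOLD

# Example: three established readers flag an abusive comment; the flag from the
# day-old account is ignored, but the comment still crosses the threshold.
c = Comment("anon", "some abusive remark")
for user, age in [("alice", 400), ("bob", 90), ("newbie", 1), ("carol", 30)]:
    c.flag(user, age)
print(c.hidden)  # True

The point of the threshold and the account-age check is that no paid moderator has to touch the easy cases; the hard, contextual ones can then go to the owner or to a rating body from point 3.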

    Anonymous Coward says:

    Re: Re: Re:

    Or the Jewish person angry at the Wall Street Journal for maliciously and falsely painting a popular Youtuber as antisemitic in an effort to destroy his career and harm YouTube as a platform.

    Got suspended for tweeting anger at the non-Jewish journalists involved in that hit-piece. One of the journalists even had a history of making casual antisemitic jokes.

    Anonymous Coward says:

    Re: Re: Re:2 Re:

    Off the top of my head?

They compiled a video and claimed a random clip of him pointing at a corner was a Sieg Heil, or that him dressing up as an SS Officer while playing a World War 2 game and mocking Nazis was evidence that he supported Nazis.

Many of the clips they pulled claiming he was a Nazi were, in context, impossible to mistake as having any relation to Nazis whatsoever. They forced that association in.

    Anonymous Coward says:

    Re: Re: Re:4 Re:

    Yes? Someone dressing up as an SS Officer while playing a World War 2 game and proceeding to crack jokes at Nazis is pretty damn important context.

But that wasn’t even the most damning part. The most damning part was all the examples where the Wall Street Journal made up the context out of thin air. As in, there was absolutely no relation whatsoever that any reasonable person could’ve made without the Wall Street Journal claiming that him randomly pointing at a corner in one of his vids was a Sieg Heil.

    Anonymous Coward says:

    > the report repeatedly highlights the difficult choices it faces, and repeatedly highlights how any move towards increased censorship can and will be abused by governments to stamp out dissent.

    Google didn’t seem to find it very difficult to decide to build a censored search engine with queries linked to individuals’ phone numbers to help the Chinese government stamp out dissent, though.

    Christenson says:

    Re: Re:

Letting the Chinese government take responsibility and control is easy compared to fairly moderating content according to conflicting criteria from multiple parties.

    Especially when the criteria can be self-conflicting, even from one person. Nazi insults are one thing when they are being used as an example of abuse, another if they are being directed at someone.

    That One Guy (profile) says:

    'They said it, not me.'

    ‘Don’t take sides, moderate based upon tone, and crack down on abusive content.’

If that is supposed to be a smoking gun that Google is going after conservatives/Breitbart fans, then that’s a pretty damning picture they are painting of their own side. They are all but saying that conservatives are more likely than others to post content that would trip a troll/abusive content filter, such that they’re basically their own critics.
