Nazis, The Internet, Policing Content And Free Speech

from the many,-many-issues dept

I'm going to try to do something that's generally not recommended on the internet: I'm going to try to discuss a complicated issue that has many nuances and gray areas. That often fails, because people online all too frequently leap straight to black-or-white positions; it's easy to miss the nuance when arguing about an emotionally potent issue. In this case, I want to discuss an issue that's already received plenty of attention: how various platforms -- starting with GoDaddy and Google, but with much of the attention placed on Cloudflare -- decided to stop serving the neo-Nazi forum site the Daily Stormer. Now, I'll note that as all that went down, I was focused on a multi-day drive out to (and then back from) the middle of absolute nowhere (a beautiful place) to watch the solar eclipse thing that everyone was talking about -- meaning that for the past week I've been disconnected from the internet quite a bit, which meant that I (a) missed many of the quick takes on this and (b) had plenty of time to really think about it. And the simple fact is that it is a complicated issue, no matter what anyone says. So let's dig in.

Let's start with the basics: Nazis -- both the old kind and the new kind -- are bad. My grandfather fought Nazis in Europe and Northern Africa during WWII, and I have no interest in seeing Nazis in America of all places. But even if you believe that Nazis and whoever else uses the Daily Stormer are the worst of the absolute worst, there are many other issues at play here beyond just "don't provide them service." Of course, lots of services are choosing not to. Indeed, both the Washington Post and Quartz are keeping running tallies of all the services that have been booting Nazis and other racist groups. And, I think it's fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone. There's certainly no fundamental First Amendment right for people to use any service they want. That's not how free speech works.

A second complicating factor is that there are different levels of services, and their decisions can have very different impacts. So, for example, if some blog doesn't allow you to comment, that's not a big deal on the free speech front, since there are millions of other places you can comment online. But if no one will even provide you access to the internet, then there are larger questions about your right to access the network that everyone uses to speak. And there's a spectrum between those two end points. There are only a few ISPs, so if Comcast and Verizon decide you can't be online, you may not be online at all. There are multiple places where you can register domains, but if all the registrars blacklist certain sites, then those sites can effectively be banned from the open internet entirely. It's harder to say where things like Facebook, Google and even Cloudflare fall along that spectrum. Some might argue that you don't need any of those services -- while others might say that Google and Facebook are so central to everyday life that being forced off of them puts people at a serious disadvantage. Cloudflare is even more complicated, since it's just a middleman CDN/DDoS protection/security provider. But, as the company's CEO admitted in kicking off the Daily Stormer, there are very few other services online that could protect a site like that from the kinds of DDoS attacks the site regularly gets (the fact that the Daily Stormer briefly popped up on DreamHost this week, and almost all of DreamHost was promptly hit with massive, debilitating DDoS attacks, just emphasizes that point).

But this issue is key: not all internet services are the same, and no single rule should apply across all of them. It simply wouldn't make sense.

Recognize: this is more complicated than you think

As many experts in the field have noted, these things are complicated. And while I know many people have been cheering on each and every service kicking off these users, we should be careful about where that could lead. Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems. As Jillian York eloquently put it:

I’m not so worried about companies censoring Nazis, but I am worried about the implications it has for everyone else. I’m worried about the unelected bros of Silicon Valley being the judge and jury, and thinking that mere censorship solves the problem. I’m worried that, just like Cloudflare CEO Matthew Prince woke up one morning and decided he’d had enough of the Daily Stormer, some other CEO might wake up and do the same for Black Lives Matter or antifa. I’m worried that we’re not thinking about this problem holistically.

Kate Knibbs, over at The Ringer, also has a nuanced article about this, pointing out how relying on internet platforms to "police hate" results in all sorts of potential problems and contradictions. Even if we all agree that Nazi propaganda is bad, there's a big question about whether or not this (censorship by platforms) is the proper response:

The world will be a better place if technology companies are able to disrupt the spread of propaganda. But while their post-Charlottesville efforts are an encouraging sign that technology companies are finally treating the prospect of domestic right-wing extremist groups as a serious threat, the way these companies have chosen to address that threat is an unsettling reminder that they are near-unfettered gatekeepers of speech. We are online at the whim and for the profit of a few extremely wealthy multinational corporations with faulty track records for moderating content. As overdue and appreciated as their efforts to root out hate groups from the digital world are, their efforts to preserve an open internet should be undertaken with equal urgency.

This, in fact, is the same very public struggle that Cloudflare's CEO, Matthew Prince, has been having over the issue. As he explained in his original statement, he's not really comfortable with the fact that one person -- even himself -- basically has the power to kick someone off the internet entirely. A few days later, in a (possibly paywalled) piece at the Wall Street Journal, he's still second-guessing himself.

Your black and white quick take on this misses the point:

Yes, I know that some of you are angrily getting ready to scream one of two (contradictory) things in the comments: (1) free speech should mean that all these sites should be allowed to remain up or (2) oh, come on: Nazis are obviously bad and there's no slippery slope in denying them internet services. But there are strong responses to both of those extreme viewpoints, which come from opposite ends of the spectrum. Again, free speech also means that platforms have the right to choose what speech they host and what speech they don't. Don't like it? Start your own platform. Similarly, no one truly believes that all content must be allowed on all platforms at all times. For anyone who claims otherwise, I'll just point to the email filter you use to show you're wrong. We accept filtering decisions in our email because we know that a completely unfiltered experience is so filled with garbage as to make it unusable. The question then becomes where we draw the lines for moderation.

As for potential comment (2): yes, Nazis are obviously bad. But here's the problem: there are plenty of people (including some of those who are desperately typing out argument (1) above) who will argue that other groups -- antifa, BLM, the SPLC -- are just as bad. And then... you're just left with a fight on your hands about who's bad. And that doesn't solve anything. Even worse, it puts tremendous subjective power into the hands of those in charge. And, specifically for those making the "Nazis are obviously bad, so there's no slippery slope" argument, think about who's in charge right now. Do you really want them defining who's "bad" and who's "good"?

On top of that, we're constantly pointing to example after example after example of platforms being really bad at properly determining what's bad and what's good. Doing so requires time and context -- two things that don't come easily on the internet.

At the very least, putting the onus on internet platforms to make these kinds of calls means that you're trusting a very small number of self-appointed people -- with very different incentives -- to be the world's speech police. And that should be concerning. Some argue there's no slippery slope in banning Nazis because they're Nazis. But there is a different slippery slope: appointing private, for-profit platforms as the speech police and the arbiters of what's good speech and what's bad speech. Yes, as noted, those platforms have every right to determine what they don't want on their own platforms, but as we move along the spectrum discussed above, to the point where the power of a centralized platform could mean cutting people off entirely, the overall impact of these decisions becomes greater and greater. And rushing headlong into a world where we trust private companies to make speech determinations just because they built a scalable platform seems like the wrong way to go about things. Just because you can build a big platform doesn't mean you're good at determining who should be allowed to speak.

Merely censoring doesn't solve the problem

This is a key point that hasn't been brought up very much, but as the coiner of "The Streisand Effect," I'm kind of obliged to do so: a pretty common gut reaction to really awful content is that the best (and sometimes "only") option is to silence it. And there may be some narrow cases where that actually works. But all too often, attempts to silence or censor content only lead to more attention getting paid to that content. And, in the case of Nazis, it actually has a reinforcing effect that isn't widely considered. Many of the ignorant folks who jump on board with these groups (and, yes, they are ignorant) believe that they're being "edgy" and "contrarian" and "outside the norm." And pulling down their websites reinforces this view. It doesn't make them rethink their ignorant hate. It makes them think they're on to something. They interpret it as "the establishment" or "the swamp" or whoever not being able to handle the truth that they're bringing.

It certainly doesn't do much to educate the ignorant about why their beliefs are ignorant. This is why we often talk about the importance of counterspeech, which can be surprisingly effective, even in dealing with Nazis. But counterspeech isn't always the answer and isn't always effective. There is no counterspeech to deal with spam, for example. That's why we've developed a system of tools and filters to deal with spam, rather than legally mandating that, say, domain registrars stomp out spammers.

This is why it's complicated:

Up top, I noted that the whole thing is more complicated than many people are willing to recognize. And it's because of the competing factors I discussed above. Some level of moderation is fundamental, necessary and right. Your email spam filter reveals that you know this is true. And platforms do have every right (including a First Amendment right) to refuse service to assholes. But, at the same time, we should be concerned about a few centralized powers, or even individuals, being in a position to make these decisions on an ad hoc basis. This may not apply to smaller platforms, but the big guys that are often seen as "necessary" for participating in public life certainly raise some questions.

So, how the hell do you weigh these seemingly competing factors? Some moderation is necessary, but expecting platforms to police speech opens up a whole host of problems, from arbitrariness to the powerful silencing the less powerful, and more.

Towards a (still complicated) solution:

Not surprisingly, EFF's take on the whole situation brings us closer to a framework for thinking about this issue. In fact, while they don't state this directly, much of the world already has at least some history with a system that has faced similar complications and has a process. That system is the existing judicial system, and that process is due process. It is, of course, far from perfect. But there may be lessons we can learn from it. EFF suggests pulling in some of its features, including transparency and a right of appeal.

Other elements of the Net risk less when they are selective about who they host. But even for hosts, there’s always a risk that others—including governments—will use the opaqueness of the takedown process to silence legitimate voices. For any content hosts that do reject content as part of the enforcement of their terms of service, or are pressured by states to secretly censor, we have long recommended that they implement procedural protections to mitigate mistakes—specifically, the Manila Principles on Intermediary Liability. The principles state, in part:

  • Before any content is restricted on the basis of an order or a request, the intermediary and the user content provider must be provided an effective right to be heard except in exceptional circumstances, in which case a post facto review of the order and its implementation must take place as soon as practicable.
  • Intermediaries should provide user content providers with mechanisms to review decisions to restrict content in violation of the intermediary’s content restriction policies.
  • Intermediaries should publish their content restriction policies online, in clear language and accessible formats, and keep them updated as they evolve, and notify users of changes when applicable.

In other words, for these core, centralized chokepoints, there needs to be transparency and due process.

Of course, there are dangers in that as well. Last year, in hosting a panel on just this subject at Rightscon, we discussed the idea of internal corporate "due process" for moderating content. Medium's Alex Feerst discussed how they argue these issues out, as if they're in court, with someone representing each side. But when I asked whether the "internal case law" would ever be made public, the answer was likely no. And you can understand why. There are certainly some individuals who are specifically seeking to game the system (think: spammers and trolls). Revealing the exact policies upfront gives them extra ammo on how to game the system, violating the spirit of those rules while staying within the letter. In other words, some would argue (compellingly) that certain aspects of transparency here could make the problems even worse.

So while I'm certainly all for more due process, and some associated transparency, I worry that the requirements of transparency are not entirely realistic either -- especially in areas with rapidly changing activities and norms.

Can we rethink the internet?

To me, this keeps reminding me of an article I wrote two years ago about why we should be looking at protocols, not platforms. The early days of the internet were built on protocols -- and the power was in their end-to-end nature. Anyone could build their own implementations and software to work with those protocols, so the power sat at the ends of the network. Individuals could choose how they interacted with the protocols and could implement their own solutions without being completely cut off. You could filter out the content you didn't want, but the choice was yours. Over the last decade, especially, we've moved far away from that ideal (in part because there appears to be more money in locked-in, centralized platforms than in more distributed protocols). But opening things back up offers an opportunity to let good things happen.

Let the ignorant Nazis gather -- they're going to figure out a way to do so anyway. But have widely available (and recommended) filters to allow most decent people to ignore them. Or, let others focus on using counterspeech against them. Let various attempts at responding to and diffusing the power of ignorant propaganda bloom, rather than assuming that the best response is to just make it all disappear entirely. This, of course, does not solve everything. But it certainly seems like a better solution than hoping a few giant companies magically figure out how to become benevolent dictators over what content is allowed online.
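
To make that "power at the ends" idea a bit more concrete, here's a minimal, purely illustrative sketch of what a client-side blocklist filter over an open feed could look like. This isn't any existing tool or API; the domain names and feed structure are invented for illustration.

```python
# A minimal, hypothetical sketch of an end-user filter: hide feed items whose
# source domain appears on a locally maintained (and shareable) blocklist.
# The domains and feed structure below are invented for illustration only.
from urllib.parse import urlparse

# A user-curated blocklist -- could just as easily be loaded from a file
# shared by a community the user trusts.
BLOCKED_DOMAINS = {"dailystormer.example", "spam.example"}

def domain_of(url: str) -> str:
    """Naively extract the hostname from a URL."""
    return (urlparse(url).hostname or "").lower()

def filter_feed(items, blocked=BLOCKED_DOMAINS):
    """Keep only the items whose source domain isn't on the user's blocklist."""
    return [item for item in items if domain_of(item["url"]) not in blocked]

if __name__ == "__main__":
    feed = [
        {"title": "A normal post", "url": "https://blog.example/post/1"},
        {"title": "Hateful garbage", "url": "https://dailystormer.example/rant"},
    ]
    for item in filter_feed(feed):
        print(item["title"])  # only "A normal post" survives the filter
```

The point isn't the code itself, but where the decision lives: the blocklist belongs to the user (or a community the user chooses to trust), to maintain, share or ignore, rather than being a judgment handed down by a central platform.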

In the end, there isn't an "easy" solution to any of this, and anyone pitching one is almost certainly selling snake oil. Expecting to solve "hate" by allowing a small number of internet platforms to censor "bad" people is a fool's errand. First, it's likely to be ineffective, and second, it will inevitably lead to bad results, with content you don't think should be blocked getting blocked. Platforms may have the right to police and moderate their own content, but demanding that they do so in all cases is going to create problems of its own. Ultimately, some of it needs to come down to a recognition of the different levels of service along the spectrum. Further down the line, with smaller services on the network, any moderation should be seen as a choice those platforms make. But as you move up the chain, at some point we need to be a lot more careful about the power of certain players to completely cut people off from the internet. This is the problem of an internet that has become too centralized in some areas. And, to me, it still feels like the better solution isn't putting more power in the hands of massive centralized "infrastructure" providers, but pushing the power out to the ends, in the spirit of the original, open end-to-end internet. Give the ends of the network the power. Let them share tools and filters with one another, but let's not rush to demand that a few key centralized players be the final arbiters of speech online.


Reader Comments

    Glathull (profile), 26 Aug 2017 @ 1:04pm

    1. The problem with having a nuanced and tricky conversation:

    It's a shame that the internet--a platform that is so completely well-suited for long-form writing and discussion of nuanced issues with a broad audience--has succumbed to human nature in this way. Our basic human desire for quick, easy, simple answers is limiting the positive power of what is arguably one of the greatest inventions in human history.

    2. A disclaimer:
    It's an unfortunate necessity that I have to provide a disclaimer to say that I'm not a fan of neo-Nazis, actual Nazis, haters, racists, or fascists. It's a sign of the sad state of the world that defending a person's right to express something I think is really disgusting will get me labelled as a supporter of that idea unless my dad happened to fight against actual Nazis in WW2 (which he did, bless his 98-year-old ass).

    3. The problem with "evil":
    Applying the "evil" label to a group of people (and "Nazi" is basically a surrogate for evil) and then exterminating them to the greatest extent possible is the oldest and most reprehensible trick in the book. Our society has reached a level of development where it doesn't condone the outright murder of the "evil" people.

    (A caveat: we do this all the time when the "evil" people are in other countries, and we call it war. I'm talking about what we do to our own citizens within our own borders most of the time. And I'm not making the case that the government doesn't punish classes of people by policy. It does. But I'm not talking about government policy right now. I'm talking about social behavior.)

    Pushing them off to the sidelines, limiting their ability to express their ideas, and perhaps failing to prosecute people who harm them is the extent of what we generally find acceptable.

    But make no mistake: this is the same operational morality that drives violent extremism, terrorism, racism, and everything that we want to claim we are better than. Kicking groups of people off the internet for having ideas we think are bad comes from the same internal logic as burning someone alive for believing in a different god than you do. I'm good; you're evil. Therefore you must go away.

    I'm not equating the act of silencing a person with physically killing them. I'm not saying that depriving someone of access to the internet is the same as murder. I'm saying, to be clear, that the logic of punishing people based on being a member of a very loosely and arbitrarily defined class is a universally bad idea.

    Regardless of how people want to spin it, the only operating definition of evil is whatever the ruling class of a society decides it is. It changes over time; it varies by culture; it fluctuates depending on the power dynamics of the ruling classes.

    Defining evil based on what a person says or believes or thinks is one of the worst ideas ever.

    The constitution, the bill of rights, and specifically the first amendment are genuinely revolutionary documents. They shifted the burden of power to define good and evil from monarchs and churches to the people, while recognizing that a simple majority of people can still be wrong. The people who wrote the constitution were aware of mob mentality and the human tendency to act like mobs and try to violently erase the ideas and people deemed evil by angry mobs.

    And it was a good response to a world mostly run by monarchs and churches. The power structures have changed now. Governments hold certain powers, but the vast majority of policy is determined by corporations through lobbying, funding for political campaigns, and infrastructure leverage. Many aspects of our lives are not determined by the rule of law as defined by governments or courts. They are dictated through opaque terms of service, by the arbitrary decisions of individuals at the tops of the world's largest corporations, and by the mobs that express outrage on the platforms that drive their bottom line.

    I'm not angry or upset about this, and I'm not ranting that we need to overthrow a corporate oligarchy. I'm simply realizing that this is the state of the world at this moment. As a technologist, it's an amazing and exciting time to be alive. The internet is still the wild west. Mistakes have and will be made.

    The fact that the first amendment only applies to the State is obviously true. But the intent of the first amendment (in my opinion) needs to be taken into account. The literal effect of the first amendment was to codify the concept of free speech: that the government is not allowed to censor ideas.

    I'll go out on a limb and suggest that the intent was more general. That those who hold the power of limiting speech cannot censor ideas.

    That we desire to create a society in which punishment is only meted out to those who have been convicted of an unlawful action.

    That we recognize and embrace the fact that shitty people with shitty ideas exist. And that killing them, putting them in jail, silencing them, telling them to go home and die, DDOS-ing them, removing their ability to make their ideas heard is not the right answer.

    I don't have the right answer. There isn't a clearly right answer here. And we're seeing that play out in real time among the handful of people who control speech on the internet. Consider the difference between Tim Cook's response to this situation and that of Cloudflare's CEO. Two private citizens who each hold enormous amounts of power, unelected by the public, with two very different views. Both are, by all accounts, honorable people, committed to ethical behavior, with very different responses to this. One is quite sure that evil is evil and must be silenced; the other is quite unsure about whether this is really the right option.

    At some point, we have to recognize how much the world has changed and decide what pieces of our governing principles we want to take into the future. What we want to protect, and what we don't.

    The internet is, for all of its glory, mob rule right now. While I can't even begin to say what a solution is in terms of preserving some set of basic (and by necessity, global) human rights online, I can say this: mob rule is not the answer.

    We definitely need to take some of the following points into consideration:

    1. Bad ideas exist. They will never stop existing.
    2. The definition of bad ideas will change over time.
    3. Pretending that bad ideas don't exist and trying to get rid of or hide them only makes things worse.
    4. The most effective way of dealing with bad ideas might be to air them out in the sun, for all people to see, naked, in full view of the public.
    5. Private individuals and companies should not be forced to interact with bad ideas.
    6. Private individuals and companies should not be forced to provide service to bad ideas.

    If you read through that list, it becomes immediately obvious that the concepts are in severe conflict. The right to have and hold and speak a bad idea is in direct conflict with the basic principle that people have a right not to service it or interact with it. And the right not to interact with it is in conflict with the idea that the best response to a bad idea is to engage with it and argue with it.

    But that's where we are. In conflict. A fundamental conflict about what it means to have rights on the internet--to have rights across the entire earth.

    That's not a bad thing. The conflict and friction here is healthy. And I'm optimistic about the process. In the 18th century, when we fought a war and killed boatloads of people on the way to establishing the first amendment, it was bloody and brutal.

    We're trying to sort out one of the exact same points of disagreement about the internet now. And it's not about the politics of a few individual countries anymore. It's about the way the entire population of the world interacts with each other. And we're doing it with astonishingly less bloodshed. People like Mike are writing careful articles and providing a forum for discussion. Many others are as well. It's a difficult situation, but it was inevitable from the beginning of the internet.

    We are talking about this rather than stabbing and shooting each other. It's an important conflict and a necessary one. But let's not lose sight of the fact that when my dad went to war in 1941, it was a literal war over bad ideas and one group of people treating others with hatred, bile, murder, rape, and slavery.

    We are fighting the same fight against the same ideas now. But we are doing it with words on the internet, for the most part, and I think that's an improvement.

    I sent this article to my dad after I read it and called him to talk about it. I was really curious what his point of view was. He lives in Texas, where I'm from, at our family farm, is a devout pre-Vatican II Catholic, and voted for Trump. He sent me this email:

    "I didn't sign up for the army and leave my wife at home and go to war because of what the Germans were saying or thinking. I went because I believed that they were going to attack us. Just like the Japanese did. I thought they were going to invade us and take my family away. I didn't know about the concentration camps when I signed up, and I didn't know what they were doing to the Jews. It was never about that until I saw them with my own eyes. I didn't know anything about why they were doing what they were doing. It was never about them being Nazis. I didn't even know what being a Nazi meant until long after the war. As far as I knew, being a Nazi meant you were trying to kill me.

    I was a poor kid who grew up in the depression era, and the army promised me a lot of things. And they gave them to me after the war. WW2 is the only reason I went to college and eventually met your mother. It was an opportunity to make my life better if I survived. Be a hero. Fight the Nazis. Save your family.

    The more I read about WW2, the more I get frustrated. Either I'm really wrong about what I experienced or the historians are really wrong. And it makes me question how much of everything else I read about history is wrong. People now act like we went to war to save the Jews. I didn't know anything about any of that. No one on the ground did. And I didn't very much like the Jews I knew at the time or served with. And the Jews I was serving with didn't know anything about it either.

    When I read about the politics leading up to the war and how everything went down, I am, at least, happy with the choices I made. You know I was a medic, son. I never carried a weapon and never killed anyone. People made fun of me at the time because all the medics broke the rules . . . carried guns, fought, killed the enemy. We were supposed to be noncombatants. But most of us weren't. I got reprimanded for treating German soldiers during the Battle of the Bulge.

    From my point of view, my job was to treat the wounded and comfort the dying. All of them. And I hope I've passed that idea along to you in some way. That the people other people are telling you are your enemy are still people. I didn't hate the Germans. Even when they were trying to kill me. Our unit was so far out in front of the regiment, that we often got shelled by our own people. I didn't hate our people.

    We were trying to get shit done. Just like you try to get shit done every day at work. That's really all there was to it. It was my job at the time, so I did it. There was no ideology for any of us. We didn't even know what that word meant.

    I know you are not religious, but the Bible sometimes has some words of wisdom, even if you don't believe in God. Love your enemies. That's what I would say to any of the Nazis I "fought" in the war. That's what I did. I treated our POWs the same as I treated our own wounded. I don't care what people believe or even what people do. They are people. And people shouldn't die the horrible ways that I saw every day for 19 months in Europe. I've told you about that. You know what it was like.

    These clowns in Charlottesville? Amateurs at best. They can't even effectively enact their ideas. Yeah, when they hurt someone, lock 'em up, as Trump likes to say. Until then, they are like stubbing your toe on the coffee table of society. Oh fuck, that hurts. Then you forget about it. Shutting them down just makes them more angry. Ignoring these assholes is far more effective than responding to them. They are a nuisance and minority, and they will all get old and die some day. Just like me.

    I went to war against people with the same ideas who were better organized, more effective, and able to really create a political movement. These clowns are just what you often call out with your co-workers. No talent hackery.

    Let them have their stupid ideas. They are wannabes anyway. If they cross the line, put them in jail.

    I didn't go to war over ideas or speech or philosophies. I went to war because I thought we were being attacked. And that's what everyone thought at the time. I never thought about the first amendment or free speech or anything like that.

    This whole thing with the new Nazis is just stupid, and the best way to deal with stupid is ignore it. It's always going to be there, so just leave it alone.

    Love,
    dad
    "
