Free Speech, Censorship, Moderation And Community: The Copia Discussion

from the not-an-easy-issue dept

As I noted earlier this week, at the launch of the Copia Institute a couple of weeks ago, we had a bunch of really fascinating discussions. I’ve already posted the opening video and explained some of the philosophy behind this effort, and today I wanted to share with you the discussion that we had about free expression and the internet, led by three of the best people to talk about this issue: Michelle Paulson from Wikimedia; Sarah Jeong, a well-known lawyer and writer; and Dave Willner who heads up “Safety, Privacy & Support” at Secret after holding a similar role at Facebook. I strongly recommend watching the full discussion before just jumping into the comments with your assumptions about what was said, because for the most part it’s probably not what you think:

Internet platforms and free expression have a strongly symbiotic relationship — many platforms have helped expand and enable free expression around the globe in many ways. And, at the same time, that expression has fed back into those online platforms making them more valuable and contributing to the innovation that those platforms have enabled. And while it’s easy to talk about government attacks on freedom of expression and why that’s problematic, things get really tricky and really nuanced when it comes to technology platforms and how they should handle things. At one point in the conversation, Dave Willner made a point that I think is really important to acknowledge:

I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples… Twitter: “the free speech wing of the free speech party.” Twitter moderates spam. And it’s very easy to say “oh, some spam is malware and that’s obviously harmful” but two things: One, you’ve allowed that “harm” is a legitimate reason to moderate speech and two, there’s plenty of spam that’s actually just advertising that people find irritating. And once we’re in that place, the sort of reflexive “no restrictions based on the content of speech” defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional. But that doesn’t mean that all moderation is good.

This is an extremely important, but nuanced, point that you don’t often hear in these discussions. Just today, over at Index on Censorship, there’s an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they’re free speech issues, but plenty of people do. The argument, then, is that we’d be able to have a much more useful conversation if people would admit:

Don’t say “this isn’t a free speech issue”, rather “this is a free speech issue, and I’m OK with this amount of censorship, for this reason.” Then we can talk.

Soon after this, Sarah Jeong makes another, equally important, if equally nuanced, point about the reflexive response by some to behavior that they don’t like to automatically call for blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, beyond just those being harassed), but that it’s wrong to think that we should just immediately look to find ways to shut people up:

Harassment actually exists and is actually a problem — and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it’s not just words online. It can seem like an innocuous, or rather “non-real,” manifestation, when in fact it’s linked to real-world stalking or other kinds of abuse, even amounting to physical assault, death threats, and so on and so forth. And there’s a real cost. You get less participation from people of marginalized communities — and when you get less participation from marginalized communities, you get a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women — and also its editors just happen to skew overwhelmingly male. When you have greater equality on online platforms, you have better social value for the entire world.

That said, there’s a huge problem… and it’s entering the same policy stage that was prepped and primed by the DMCA, essentially. We’re thinking about harassment as content when harassment is behavior. And we’re jumping from “there’s a problem, we have to solve it” and the only solution we can think of is the one that we’ve been doling out for copyright infringement since the aughties, and that’s just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don’t have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said “why can’t we do that for harassment” without looking at all the problems that copyright enforcement has run into.

And I think what’s really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you’re going to have to attack CDA 230 and blow a hole in it.
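Jeong’s content-versus-behavior distinction can be made concrete with a toy sketch. This is purely illustrative (all names and thresholds here are hypothetical, not how any real system like ContentID works internally): a fingerprint-style filter keys on *what* was said, so identical text is always treated identically, while a behavior signal keys on the *pattern* of who is sending what to whom.

```python
import hashlib

# Hypothetical sketch of a ContentID-style filter keyed on content
# fingerprints. Identical text hashes identically, so the filter cannot
# distinguish a quoted example from the hundredth repeat sent at one target.

def fingerprint(text: str) -> str:
    """Normalize and hash a message, as a content-matching system might."""
    return hashlib.sha256(text.lower().strip().encode()).hexdigest()

BLOCKLIST = {fingerprint("you should leave the internet")}

def content_filter(message: str) -> bool:
    """True if the message matches known 'bad' content."""
    return fingerprint(message) in BLOCKLIST

# A journalist quoting the message to report on it is blocked...
assert content_filter("You should leave the internet") is True
# ...while a trivially reworded version of the same harassment passes.
assert content_filter("you should leave teh internet!!") is False

# A behavior-based signal looks at the pattern, not the words: many
# messages from one sender to one recipient in a short window.
def behavior_flag(events: list[tuple[str, str]], threshold: int = 3) -> set[tuple[str, str]]:
    """Flag (sender, target) pairs that exceed a message-count threshold."""
    counts: dict[tuple[str, str], int] = {}
    for pair in events:
        counts[pair] = counts.get(pair, 0) + 1
    return {pair for pair, n in counts.items() if n >= threshold}

events = [("troll", "victim")] * 5 + [("friend", "victim")]
assert behavior_flag(events) == {("troll", "victim")}
```

The point of the sketch is Jeong’s: the first filter both over-blocks (the quote) and under-blocks (the reworded repeat), because harassment is a property of the behavior and its context, not of any particular string.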

She then noted that this was a major concern, because there’s a big push from many people who are arguing not for better free speech protections, but for more content removal:

That’s a huge viewpoint out right now: it’s not that “free speech is great and we need to protect against repressive governments” but that “we need better content removal mechanisms in order to protect women and minorities.”

From there the discussion went in a number of different important directions, looking at alternative ways to deal with bad behavior online that get beyond just “take it down, take it down,” and also at the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic — one whose protections, people admitted, are unlikely to spread to other countries, and whose underlying concepts are actually under attack in many places.

That’s why I also think this is a good time to point to a new project from the EFF and others, known as the Manila Principles — highlighting the importance of protecting intermediaries from liability for the speech of their users. As that project explains:

All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy.

With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.

In short, it’s important to recognize that these are difficult issues — but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms involve some form of moderation (even in how they are designed), we need to be wary of the reflexive response of just “take it down, take it down, take it down” in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues, rather than denying that there are issues to be dealt with, and rather than just saying “anything goes, and shut up if you don’t like it.” There are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.

Companies: copia, copia institute


Comments on “Free Speech, Censorship, Moderation And Community: The Copia Discussion”

AnonCow says:

If you believe in free speech, it is for all speech. If you believe that something incredibly offensive is not protected, you don’t believe in free speech.

Believing in free speech means facing some nauseating examples of free speech and still believing in free speech even after you have seen the full horror of what some people will say when given free speech.

Imagine the most horrible person saying the most horrible things. Now picture yourself standing next to that person, saying that he has the right to express himself. That is what is required of those who want to say they believe in free speech. Anything else is lip service.

Al says:

Spam, business and People

Seems like the idea that modding spam is equivalent to modding free speech is a bit misleading. Spam has a commercial purpose, and is almost never related to the topic of a thread, or even (as is true of many off-topic posts) to something said by someone in the thread. It isn’t really speech as such; it’s more like someone’s car alarm going off.

As an example, the promo pieces you’ve run are clearly identified as such and even seem to interest some people, so in that context links to different places to buy, reviews, pricing, even actual vendor posts with local contact info would seem appropriate: not off topic and not spam. But if my post consisted of BUYMYPENISENLARGEMENT, that’s clearly not relevant to the discussion at hand. Modding for some semblance of focus, and not allowing commercial entities to drown out the actual speech of people, is not censorship. No matter what SCOTUS says, corporations are NOT people and should not have the same rights AS people.

Leigh Beadon (profile) says:

Re: Spam, business and People

But judgement about relevance to the topic/thread at hand is in and of itself making a decision about what speech is allowed based on its content, and thus breaks an “absolutist” approach to free speech.

“Modding for some semblance of focus” may or may not meet the definition of censorship, but it is inescapably a violation of “pure” free speech.

Al says:

Re: Re: Spam, business and People

Sorry, I should have thought about how to word that better; it distracts from my more central point, which is that commercial entities and people are different and should be treated differently. I agree that in a larger sense modding for focus is more complex, but the example of spam is easy: spam is a commercial. It has no free speech rights; it is not people, and no amount of “corporations are made of people” will convince me otherwise. So is Soylent Green; does Soylent Green have free speech rights?

If the CEO of company X comes out and says “I personally XXX,” that is different: he is responsible, and it is his/her actual personal assertion. But to include spam as free speech takes things down the wrong track, since it assigns rights to things rather than people. There is no way that can work out; ISDS comes to mind. It’s all part of the same narrative: things have rights, people don’t. Any discussion that doesn’t start with the assertion that people have rights and things don’t cannot lead anywhere but wrong.

lars626 says:

information and noise

Go back and watch the original Monty Python ‘SPAM’ sketch.
In that context the spam contains no information, only noise. The problem is that the noise drowns out all the information.

Offensive speech contains information, even if we disagree with it. Filtering out the noise is useful. The trick is not to confuse objectionable speech with noise, which is not always easy.

Whatever forum you are in will have some constraints.
On the street corner you are constrained by your vocal limitations. In an auditorium there are a limited number of microphones. In an online forum you are limited by whatever rules the forum creator sets.

If those rules are too restrictive go elsewhere or create your own.

Anonymous Coward says:

I find it interesting and timely that this article comes up. Right now there seems to be a continuing exodus from the Reddit site over censorship and mod overreactions.

For a long time, there was no record of such deletions; they just disappeared. This has gone on so long that there are now several bots that watch the front page for deletions that appear to be attempts at controlling the discussion, especially on certain seemingly forbidden subjects, or at sliding posts down in importance.

While most mods do not take responsibility for the deletions, they are now starting to show up in the deletion records to try and justify actions they won’t own up to in their own domains.

Many have left for a clone site called Voat. More seem to show up there each day, leaving behind what they describe as severe censorship and mod abuse of power.

Anonymous Coward says:

I think part of the problem with Facebook and other social networks is that they merge public and semi-private relationships. By merging all conversations into a single platform, they make it much harder for users to filter who they listen to. As a result, messages on such networks feel more personal than they really are, making harassment and the like more ‘effective,’ and creating greater pressure on the platform to take action and control some postings.

Anonymous Coward says:


A couple of primers:
1. I have never been a moderator, so this is just one opinion among many.
2. I read the article and watched the discussion.
3. I’m a long-time reader, but for reasons most can understand I do not have a login to distinguish my voice.
4. My proposal is off the cuff; more time to ponder could refine it.

Now that that is out of the way, I would like to try and catch Mike while he is still working.

What if we could have a middle ground: Chan style, but with a twist. You have an article, write-up, blog post, etc. with commenting enabled. Why not take an approach somewhat similar to what Jeff Atwood and crew have done over at the Discourse sites and blend it with Chan-style boards? In my hypothetical world the mods would, when deemed necessary (spam, harassment, etc.), create a mirror “board”: you would then copy over the article and comment stream and allow the discourse to progress, moderated separately. You are not taking down or blocking, just moving. The Discourse crew does something similar with posts made in the wrong section, e.g. a hardware question in the programming section. Just do Chan-board-style replication to allow for different flows. This way you do not stifle speech; you simply redirect the flow. Does this make sense?

Al says:

Where this is actually happening

Bruce Sterling was right in the quote (of whoever it was) from The Hacker Crackdown: we all feel as if this is happening in our living rooms, because in a way it is. I’m in my bedroom, my cat is harassing me to be let out, and none of you are actually here or even in the same city as me, but in a way it FEELS like you are.

But it’s important to differentiate between real and virtual. People are assholes; I don’t think this is a surprise to anyone (or at least it shouldn’t be), but they are not in your living room. If you really want to punch them and shut them up, go there. If it’s not worth it to do that, then maybe it’s just people being assholes on the internet. If people are real assholes IRL, chances are there are people near them who will deal with the situation. The Facebook suicides demonstrated that people hundreds and thousands of miles away cannot help with real problems in a timely manner. If people’s speech is so objectionable that it needs to be opposed, there are probably people nearer by who will oppose it.

I have had people from elsewhere come to try to help with anti-fa work, and they didn’t help; they were in the way. It’s the same with the internet in general: you’re a long way away, and people mostly don’t mean the insane madness that comes out of their keyboards, and if they do, chances are there are people nearby who are dealing with it.


4chan is not the internet; it’s not even the most threatening part of the internet. Worry about your bank, your employer, the FBI, the NSA, your local cops…

Wendy Cockcroft says:

“we need better content removal mechanisms in order to protect women and minorities.”

And from there it’s a short slide down the slippery slope…

But that doesn’t mean we should let harassment fly, either. This is an amazing opportunity to open up the debate on freedom of speech online and I look forward to seeing more Copia videos on important issues like this.
