We Need To Shine A Light On Private Online Censorship

from the transparency-reporting-on-content-moderation dept

On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event — and over the next few weeks we’ll be publishing many of those essays, including this one.

In the wake of ongoing concerns about online harassment and harmful content, continued terrorist threats, changing hate speech laws, and the ever-growing user bases of major social media platforms, tech companies are under more pressure than ever before with respect to how they treat content on their platforms—and often that pressure is coming from different directions. Companies are being pushed hard by governments and many users to be more aggressive in their moderation of content, to remove more content and to remove it faster, yet they also consistently come under fire for taking down too much content or for lacking adequate transparency and accountability around their censorship measures. Some on the right, like Steve Bannon and FCC Chairman Ajit Pai, have complained that social media platforms are pushing a liberal agenda via their content moderation efforts, while others on the left are calling for those same platforms to take down more extremist speech. At the same time, free expression advocates are deeply concerned that companies’ content rules are so broad as to impact legitimate, valuable speech, or that overzealous attempts to enforce those rules are accidentally causing collateral damage to wholly unobjectionable speech.

Meanwhile, there is a lot of confusion about what exactly the companies are doing with respect to content moderation. The few publicly available insights into these processes, mostly from leaked internal documents, reveal bizarrely idiosyncratic rule sets that could benefit from greater transparency and scrutiny, especially to guard against discriminatory impacts on oft-marginalized communities. The question of how to address that need for transparency, however, is difficult. There is a clear need for hard data about specific company practices and policies on content moderation, but what does that look like? What qualitative and quantitative data would be most valuable? What numbers should be reported? And what is the most accessible and meaningful way to report this information?

Part of the answer to these questions can be found by looking to the growing field of transparency reporting by internet companies. The most common kind of transparency report that companies voluntarily publish gives detailed numbers about government demands for information about the companies’ users—showing, for example, how many requests were received, from what countries or jurisdictions, what kind of data was requested, and whether or not they were complied with. As reflected in this history of the practice published by our organization, New America’s Open Technology Institute (OTI), transparency reporting about government demands for data has exploded over the past few years, so much so that projects like the Transparency Reporting Toolkit by OTI and Harvard’s Berkman Klein Center for Internet & Society have emerged to try to define consistent standards and best practices for such reporting. Meanwhile, a decent number of companies have also started publishing reports about the legal demands they receive for the takedown of content, whether copyright-based or otherwise.

However, almost no one is publishing data about what we’re talking about here: voluntary takedowns of content by companies based on their own terms of service (TOS). Yet especially now, as private censorship gets even more aggressive, the need for transparency also increases. This need has led to calls from a variety of corners for companies to report on content moderation. For example, a working group of the Freedom Online Coalition, composed of representatives from industry, civil society, academia, and government, called for meaningful transparency about companies’ content takedown efforts, complaining that “there is very little transparency” around TOS enforcement mechanisms. The 2015 Ranking Digital Rights Corporate Accountability Index found that every company surveyed received a failing grade with respect to reporting on TOS-based takedowns; companies fared only slightly better in the 2017 Index. Finally, David Kaye, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, called for companies to “disclose their policies and actions that implicate freedom of expression.” Specifically, he observed that “there are … gaps in corporate disclosure of statistics concerning volume, frequency and types of request for content removals and user data, whether because of State-imposed restrictions or internal policy decisions.”

The benefits to companies issuing such transparency reports around their content moderation activities would be significant: For those companies under pressure to “do something” about problematic speech online, this is an opportunity to outline the lengths to which they have gone to do just that; for companies under fire for “not doing enough,” a transparency report would help them express the size and complexity of the problems they are addressing, and explain that there is no magic artificial intelligence wand they can wave to make online extremism and harassment disappear; and finally, public disclosure about content moderation and terms of service practices will go a long way toward building trust with users—a trust that has crumbled in recent years. Putting aside the benefit to companies, though, there is the even more significant need of policymakers and the public. Before we can have an intelligent conversation about hate speech, terrorist propaganda, or other worrisome content online, or formulate fact-based policies about how to address that content, we need hard data about the breadth and depth of those problems, and about the platforms’ current efforts to solve them.

While there have been calls for publication of such information, there has been little specificity about what exactly should be published. No doubt this is due, in great part, to the opacity of individual companies’ content moderation policies and processes: It is difficult to identify specific data that would be useful without knowing what data is available in the first place. Anecdotes and snippets of information from companies like Automattic and Twitter offer a starting point for considering what information would be most meaningful and valuable. Facebook has said it is entering a new era of transparency for the platform. Twitter has published some data about content removed for violating its TOS, Google followed suit for some of the content removed from YouTube, and Microsoft has published data on “revenge porn” removals. While each of these examples is a step in the right direction, what we need is a consistent push across the sector for clear and comprehensive reporting on TOS-based takedowns.

Looking to the example of existing reports about legally mandated takedowns, data that shows the scope and volume of content removals, account removals, and other forms of account or content interference/flagging would be a logical starting point. Information about content that has been flagged for removal by a government actor—such as the U.K.’s Counter Terrorism Internet Referral Unit, which was granted “super flagger” status on YouTube, allowing the agency to flag content in bulk—should also be included, to guard against undue government pressure to censor. More granular information, such as the number of takedowns in particular categories of content (whether sexual content, harassment, extremist speech, etc.), or specification of the particular term of service violated by each piece of taken-down content, would provide even more meaningful transparency. This kind of quantitative data (i.e., numbers and percentages) would be valuable on its own, but would be even more helpful if paired with qualitative data that sheds more light on the platforms’ opaque content moderation practices and tells users a clear story about how those processes actually work, using compelling anecdotes and examples.
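To make that concrete, here is a minimal sketch, in Python, of what the quantitative core of such a report could look like as structured data. Every field name and category below is hypothetical, chosen only to illustrate the kinds of counts discussed above, not any platform’s actual reporting schema.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record of a single TOS-based moderation action.
# Field names and categories are illustrative, not any company's schema.
@dataclass
class TakedownRecord:
    content_category: str      # e.g. "harassment", "extremist speech", "sexual content"
    policy_violated: str       # the specific term of service invoked
    action: str                # e.g. "content removed", "account suspended", "flagged"
    government_referral: bool  # flagged by a government actor (e.g. a referral unit)

def summarize(records):
    """Roll raw moderation actions up into report-ready counts and shares."""
    total = len(records)
    gov = sum(r.government_referral for r in records)
    return {
        "total_actions": total,
        "by_category": dict(Counter(r.content_category for r in records)),
        "by_policy": dict(Counter(r.policy_violated for r in records)),
        "by_action": dict(Counter(r.action for r in records)),
        "government_referrals": gov,
        "government_referral_share": (gov / total) if total else 0.0,
    }
```

A report built from aggregates like these would still need the qualitative narrative described above, but it would give researchers and policymakers comparable numbers to work from.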

As has often happened with existing transparency reports, this data will help keep companies accountable. Few companies will want to demonstrably be the most or least aggressive censor, and anomalous data such as huge spikes around particular types of content will be called out and questioned by one stakeholder group or another. It will also help ensure that overreaching government pressure to take down more content is recognized and pushed back on, just as current reporting has helped identify and put pressure on countries making outsized demands for users’ information. And most importantly, it will help drive policy proposals that are based on facts and figures rather than on emotional pleas or irrational fears—policies that hopefully will help make the internet a safer space for a range of communities while also better protecting free expression.

Unquestionably, the major platforms have become our biggest online gatekeepers when it comes to what we can and cannot say. Whether we want them to have that power or not, and whether we want them to use more or less of that power in regard to this or that type of speech, are questions we simply cannot answer until we have a complete picture of how they are using that power. Transparency reporting is our first and best tool for gaining that insight.

Kevin Bankston is the Director of the Open Technology Institute at New America. Liz Woolery is Senior Policy Analyst at the Open Technology Institute at New America.



Comments on “We Need To Shine A Light On Private Online Censorship”

57 Comments
Anonymous Coward says:

Re: Et Tu, Techdirt?

The author of this piece raises several good points, but a distributed moderation model (like TD comment flagging or most moderation on StackOverflow/StackExchange) doesn’t lend itself so well to the “transparency report” concept — much of the data isn’t kept in a very granular way in these systems, anyway. (For instance, TD probably just keeps a count of flags on each post, although if Mike and co. wish to chime in with more details, they’re certainly welcome to do so!)

Anonymous Coward says:

Re: Re: Et Tu, Techdirt?

a distributed moderation model (like TD comment flagging or most moderation on StackOverflow/StackExchange)

These are only semi-distributed, because ultimately it’s one site collecting data and deciding whether to show/hide/delete comments based on it. In a fully distributed model, we’d get the comments from somewhere other than the site posting the story, and we’d decide what to show/hide.

orbitalinsertion (profile) says:

Re: Et Tu, Techdirt?

Is the entire commissariat supposed to report what it flags and why? There is merely an annoyance flag which some people use. (And some of them tell you right then and there.) If you are one of those clowns who disbelieve this, you won’t believe a transparency report, either. That game is fully transparent already.

Christenson says:

Re: Re: Et Tu, Techdirt?

Interestingly, I seem to trust Techdirt even without a transparency report. Something to do with consistency…

Having said that, some basic statistics (maybe along with the weekly “best of the week”?) would be helpful (how much content is moderated out? How often is it more than weeding out my stupid keying mistakes, of which I’ve made another one today? Robots? Human spammers?)

There is also the problem of “at scale”, and I just don’t see how we can possibly get a good idea of what should happen “at scale” if we don’t look at smaller sites.

Wendy Cockcroft (user link) says:

Re: Re: Re: Et Tu, Techdirt?

Keying errors don’t get moderated, Christenson. Comments are hidden by the likes of me clicking on the red “Report” button at the side of any post I don’t like; if five people click it the post gets hidden (if memory serves).

Spam such as ads for sunglasses, etc., gets caught and binned most of the time or we’d see more of it.

Sometimes my posts are held for moderation, probably because I tripped a keyword, and never get posted. Okay, fine, that’s not the end of the world.

Christenson says:

Re: Re: Re:2 Et Tu, Techdirt?

Umm, I’m thankful my accidental posts with blank bodies (happens due to autocomplete interactions and habits) get “held for moderation” and largely disappear. I’m thankful there are no ads for sunglasses, and all of that is moderation, even if (see Mike Masnick’s comments below) it is largely automated and crowdsourced to good people like you.

Mike Masnick (profile) says:

Re: Re: Re:3 Et Tu, Techdirt?

Umm, I’m thankful my accidental posts with blank bodies (happens due to autocomplete interactions and habits) get "held for moderation" and largely disappear.

Yes, I should note two features of our system: it flags "blank body" or "empty comment" posts as requiring review, and it also flags "duplicate posts" that are done immediately after one another (sometimes comments accidentally get submitted twice with a double click). Those comments we generally won’t release from the filter, because it’s fairly clear that they were errors, and not intended to be posted.
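As a rough illustration, a pre-posting check like the two described above might look something like the following minimal sketch; the names, threshold, and logic are assumptions for illustration, not Techdirt’s actual code.

```python
import time

# Hypothetical pre-posting filter illustrating the two checks described above:
# hold blank comments, and hold duplicates submitted back-to-back.
DUPLICATE_WINDOW_SECONDS = 30   # assumed window for "immediately after one another"

_last_post: dict[str, tuple[str, float]] = {}   # user -> (last body, timestamp)

def hold_for_review(user: str, body: str) -> bool:
    """Return True if the comment should be held rather than posted."""
    now = time.time()
    if not body.strip():                         # blank / empty comment
        return True
    previous = _last_post.get(user)
    _last_post[user] = (body, now)
    if previous is not None:
        prev_body, prev_time = previous
        # Same text again within a short window: almost certainly a double click.
        if body == prev_body and (now - prev_time) < DUPLICATE_WINDOW_SECONDS:
            return True
    return False
```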

Anonymous Anonymous Coward (profile) says:

Re: Re: Re:4 Et Tu, Techdirt?

Duplicate posts also happen when, once submit is clicked and the next page fails (for whatever reason; it could be my connection rather than your servers), one clicks submit again (the first click seems to be recorded somehow, and followed through on). I figured this out some time ago, and no longer click submit again.

Mike Masnick (profile) says:

Re: Re: Re:2 Et Tu, Techdirt?

Sometimes my posts are held for moderation, probably because I tripped a keyword, and never get posted.

I would actually be fairly surprised if those comments are not posted. It is, of course, possible that we miss some false positives in our review, but our process makes that… pretty difficult. I’m fairly confident that we can catch most false positives and get them posted to the site.

Rich Kulawiec (profile) says:

Fundamental errors in architecture can't be fixed

An example: if you start with a 1974 Ford Pinto and try to build a tank, you’ll fail. Oh, you can beef up the frame and drop in a big engine and bolt on armor plate and so on, but no matter what you do, no matter how much money and effort you spend, you will never have a tank. And if you try to pretend that you do, and use it as one, then you’re going to fail in a big way.

The mechanisms (and processes) of abuse control need to be accounted for at the whiteboard stage of design. If they’re not designed in early on, then retrofitting them later is almost certainly not going to work. As we see, all day, every day, with Google and Facebook and Twitter and others. They didn’t learn from their predecessors’ successes and failures; instead, in their arrogance and naïveté, they blundered ahead and built enormous operations *that they do not know how to run*.

Thus all the flailing that we see as they try one thing and then another, none of which work particularly well and some of which have adverse side effects. This is all an attempt to patch the problem in the field and thus avoid admitting that they took the wrong step years ago — and that it might be unfixable.

Facebook has publicly admitted that there are 200M fake profiles, which means that the number they know about is higher, and that in turn means that the real number is still higher. Twitter is hilariously lowballing their estimates of bot numbers, as if we should believe that the same people who dropped a couple hundred million fakes on Facebook couldn’t do exactly the same thing to Twitter. And so on…to the point where I think it’s reasonable to ask if these companies are actually in effective control of their own operations.

So as you discuss all the points above, please keep in mind that some (but not all) of what’s happening is due to incompetence and hubris: they were so busy asking how they could that they never stopped to ask if they should.

Anonymous Coward says:

Re: Fundamental errors in architecture can't be fixed

The biggest problem those sites have is that software cannot deal with abuse problems in a reliable way. That goes for postings and sign-ups both. Also, most of the social media sites allow users to decide which accounts they will follow, but many would rather chase large numbers of followers and followed than curate their use of the system to what they are interested in, and what meets their standards of acceptability.

Too many people, when they find objectionable material, not only want others to protect them from that material, but also to protect others from the same material; hence the pressure for censorship.

JEDIDIAH says:

Re: Re: Fundamental errors in architecture can't be fixed

No. The biggest problem with sites like Facebook is that they promote the nonsense. This is by design. So a feed that’s prone to trolling quickly becomes pointless as the trolls get all the “mod points”.

That’s not even getting into the stupid things you can get banned for on FB.

Anonymous Coward says:

Re: Re: Fundamental errors in architecture can't be fixed

The biggest problem those sites have is that software cannot deal with abuse problems in a reliable way.

The biggest design problem is that they have to. Why should Techdirt, your local newspaper, or anyone else posting stuff to a website have to be involved in people’s discussions of it? That just happened to be the easiest way to do things in the early days of the web, and worked "well enough" to avoid getting replaced.

Moving this to Facebook or Disqus doesn’t solve the problem, because there’s still some centralized authority deciding which conversations people can have. A real distributed discussion system could solve it, once we figure out how to do that.

Anonymous Coward says:

Re: Re: Re: Fundamental errors in architecture can't be fixed

A real distributed discussion system could solve it,

One exists; it is called Usenet. But conversations are slower, because it takes time for postings to spread to all servers, and they can be a bit disjointed, because they are seen in different orders on different servers. Sometimes a central system is better for human interactions.

Anonymous Coward says:

Re: Fundamental errors in architecture can't be fixed

You seem to have a benign view of Facebook, Google, and Twitter.

Those all have the same leftist, corporatist, globalist agenda — in which chaos and “pushing the limits” is used as a tool — but still for a while have to be sneaky about doing it.

Again as I’ve reported here in this little island of corporatism: Google removed advertising income from Infowars and Antiwar.com, and its Youtube has “de-monetized” many conservatives. — You just don’t hear about those because They control the message!

Stephen T. Stone (profile) says:

Re: Re: Fundamental errors in architecture can't be fixed

Google removed advertising income from Infowars and Antiwar.com

Why should Google be forced to host speech from Infowars in the form of advertisements, regardless of how anyone at Google personally feels about that site?

its Youtube has "de-monetized" many conservatives.

That sucks for them. They ain’t the only ones who got dinged by the Adpocalypse, though.

You just don’t hear about those because They control the message!

If anything, we hear about it from the people who got dinged by Google/YouTube moderation because they refuse to shut up about how their getting dinged is some anti-conservative conspiracy funded and run by whatever boogeyman is popular this week.

Richard (profile) says:

Re: Re: Re: Fundamental errors in architecture can't be fixed

There is a certain irony here.

Currently the right is claiming that they are being silenced by large corporations whose agenda they dislike. They may well be correct in this observation, BUT – which philosophy is it that says it is OK, even laudable, for corporations to use the free market and grow into de-facto monopolies, and that for the state to interfere would be "liberal/socialist/communist"?

Of course, if the federal state were to nationalise Google/youtube/twitter/facebook – the effect of which would be to force the corporations to follow the first amendment (which is what they seem to want) – then the right would cry COMMUNISM!!! (at least, that is what they ought to cry…)

Richard (profile) says:

Re: Re: Re: Fundamental errors in architecture can't be fixed

As opposed to the more common rightist corporatist, globalist agenda?
in which chaos and “pushing the limits” is used as a tool
That would be alt-rightist.

As opposed to the Totalitarian rightist corporatist, globalist agenda?

which is of course Control alt rightist.

and the Totalitarian corporatist, globalist agenda that will happily drift into a nuclear war (presumably initially with N Korea)

which is Control Alt delete-ist

Christenson says:

Re: Underestimating replaceability

The Pinto analogy isn’t carried far enough… I can build a tank from a Pinto, but when I’m done and have a good battle tank, I don’t think I’ll have any parts of the original Pinto left! But don’t come whining about how that cost 3 times as much as if I’d started from scratch, lol.

Similarly for the social media platforms… of which Techdirt itself is, in some degree, one small example. The entire architecture under Techdirt, I believe, has been replaced a few times, while keeping the fundamentals: the postings we call a blog, a way for the world to respond to that blog, and a way for Techdirt to moderate those responses.

Anonymous Coward says:

Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

I just repeat from yesterday:

Nearly all websites EXCEPT Techdirt have WRITTEN RULES on words and attitudes allowed. But Techdirt tries to do it the sneaky way, first with the "hiding" which is falsely claimed to be just "community standard" and not to involve a moderator who makes the decision to hide comments. Then there’s the fanboy / dissenter distinction: NOT ONE fanboy has ever had a comment hidden here, ONLY those who dissent, and for NO articulable reason. Then there’s the un-admitted blocking of home IP address, which was done to me.

You cannot even get an answer here as to whether there IS a moderator or not! Works by "magic".

And add that the "hiding" of comments which are okay under common law goes on as of today, just read a couple pieces back.

Stephen T. Stone (profile) says:

Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

Just for the record, Mr. SovCit, I have had at least a couple of comments flagged in the past. One got both a flagging and a Funny badge!

And even if there is a moderator, at worst, they get rid of spam comments that any other comments section on any other blog would send to the digital dumpster. I have seen no reason to believe a flesh-and-blood moderator is stopping you, me, or anyone else from saying what they want.

(Oh, and one more thing: Per usual, “common law” is not a magic phrase that ends discussion and prevents rebuttals, especially if you cannot define what it means and in what context you use it. Try another trick.)

Anonymous Coward says:

Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

While you have the right to speak on this forum, you do not have a corresponding right to be heard, so quit trying to claim the latter right. Anybody who wants to can read the hidden contents, and will do so, while those who trust the community’s judgment will ignore them.

Christenson says:

Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

Hey, I’ve had a few comments deleted, too… and flagged them myself for deletion due to fairly obvious technical problems.

The humans (Mike Masnick and helpers) behind Techdirt obviously moderate…where do you suppose weekly “editor’s choice” awards come from? How do you suppose those “flag” choices get converted to hidden comments, especially in the presence of ill-behaved visitors who flag at random? Make accidental clicks?

Just where did you cop such an “attitude”, by the way? You don’t suppose your brick-ignorance and inability to figure out commonly-unspoken rules might have pissed off the management, who notices that you are using more of their limited time than is available for no benefit to anyone?

Mike Masnick (profile) says:

Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

The humans (Mike Masnick and helpers) behind Techdirt obviously moderate…where do you suppose weekly "editor’s choice" awards come from? How do you suppose those "flag" choices get converted to hidden comments, especially in the presence of ill-behaved visitors who flag at random? Make accidental clicks?

We don’t have moderators. There are three things that happen, and that’s it: we have fairly sophisticated (yet still imperfect) spam filters that deal with comments pre-posting (and which we review regularly to let through comments that were incorrectly flagged as spam). Then there’s the voting system.

The third thing is almost never used, but in the RARE cases when a 100%, obviously total spam comment (totally off topic, pushing a website/product) shows up and we see it, we will delete those, and only those comments. The rest we leave up to the community to handle via the voting system.

Anonymous Coward says:

Re: Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

There may be room for putting numbers on the spam handling.
– How many do you review manually?
– How many are corrected?
– Maybe some light on the system’s numbers, but that is not as important.
– The number of the RARE cases each year, or something.

These things can also improve your understanding of how effective it is and potentially inspire you towards improving it.

Christenson says:

Re: Re: Re:3 Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

Thanks Mike and helpers.

I started “Et Tu, Techdirt” with exactly this idea in mind.

My only further request is that you consider “Moderation” broadly as a system — to my mind, it includes that automated spam filter and its human reviewers, whatever it is that ‘holds comments for moderation’, the voting crowd, and the humans who sometimes intervene.

JMT (profile) says:

Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

"The humans (Mike Masnick and helpers) behind Techdirt obviously moderate…where do you suppose weekly "editor’s choice" awards come from?"

WTF? That’s not moderation. That’s not even vaguely related to moderation. The editor’s choices are simply comments selected from those with high vote counts.

"How do you suppose those "flag" choices get converted to hidden comments, especially in the presence of ill-behaved visitors who flag at random? Make accidental clicks?"

The process of hiding comments that receive a certain number of flags is easily automated. And any flag, accidental or not, can be undone by clicking again.
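For illustration, that automation can be as simple as the following sketch. The five-flag threshold matches the figure commenters recall earlier in this thread, but the code and names are assumptions, not Techdirt’s actual implementation.

```python
# Hypothetical flag-toggle and auto-hide logic. The threshold of five matches
# the figure recalled in this thread, but is an assumption, not Techdirt's code.
HIDE_THRESHOLD = 5

flags: dict[int, set[str]] = {}   # comment_id -> users who currently flag it

def toggle_flag(comment_id: int, user: str) -> None:
    """A second click on the flag undoes an accidental one."""
    voters = flags.setdefault(comment_id, set())
    if user in voters:
        voters.remove(user)
    else:
        voters.add(user)

def is_hidden(comment_id: int) -> bool:
    """A comment is hidden once enough distinct users have flagged it."""
    return len(flags.get(comment_id, set())) >= HIDE_THRESHOLD
```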

Thad (user link) says:

Re: Re: Re: Well, this is the place. I can't tell one from another; did I find you, or you find me? There was a time before we were born; if someone asks, this where I'll be, where I'll be. Hi yo! we drift in and out. Hi yo! sing into my mouth.

Jesus Christ, is Blue still having trouble with the concept that if five different people click on the flag icon, it hides his post?

…or is he still refusing to acknowledge that there could possibly be five people who don’t like him?

He does seem pretty bad at counting.

christenson says:

Re: Re: Re: Broader definition of moderation

On the contrary, don’t limit moderation to removing “bad” comments by people!
“Editor’s choice” awards certainly do encourage people to try to write what most of us consider “excellent” comments. It’s not what we usually think of as moderation, but it is moderation nonetheless.
And clearly, someone decided that a certain number of flags meant the comment got hidden. That’s very much moderation.

Anonymous Coward says:

credit where due

This seems a good opportunity to congratulate Techdirt on what is objectively the best comment system on the net.

No scripts, no captchas, anon allowed, and you’re even good about VPN users – I don’t know how you guys do it, and I suspect it’s a lot of work, which makes it all the more impressive.

cheers fellows!

Anonymous Coward says:

Re: The Limits of the First

The First clearly has limits, but the question of how hard you can push back against sites with very hardcore language and some threatening positioning is always interesting. Ultimately, pressure causes counter-pressure, etc. We may end up with the solution being part of the problem.

That is also why emotional measuring is becoming such an important area in analysing the internet: I imagine radicalisation may be correlated with certain emotional combinations.

Anonymous Coward says:

Re: The Limits of the First

Even without government "pushing", could certain types of moderation/banning be illegal? Many of these companies are in California, where companies cannot arbitrarily limit speech on nominally-private property that’s accessible to the public. The California constitution is stronger than the US constitution in this regard. Also see citation 9 "Extending Speech Rights Into Virtual Worlds" on that page.

Anonymous Coward says:

Unquestionably incorrect

>Unquestionably, the major platforms have become our biggest online gatekeepers when it comes to what we can and cannot say.

If only “on their platforms” were added to the end of this sentence, it would be correct.

What we need is technology that empowers people to create and control their own platforms for speech, not government regulation.

Anonymous Coward says:

Flat but threaded comments solves this problem

Assholes still say things, but people can respond. Karma systems like Slashdot’s or arbitrary systems like Fark’s don’t really help anyone. Let people speak, and if that is messy and even violent, that is the price of everywhere being a global forum.

You remember the Idea of a forum.

(much as Rome never lived up to anything, but as an ideal, it is aspirational)
