Reddit is a prime example of the explosive growth of online communities — and recently it's become a prime test case for the huge challenges such growth brings, especially for those who are trying to use it as the foundation for a successful company. This week we discuss some of those challenges that sit at the intersection of community and business, both in terms of popular examples like Reddit and our own experiences as members and builders of online communities.
Oh, the poor, lowly comments section. These days, you can't turn a corner without the comment section being blamed for the death of civility, falling gold prices, and the general, entropic heat malaise of the universe. If you haven't noticed, there's a bit of a trend in the news industry afoot wherein you kill off the comment section, mindlessly shove your community over to Facebook if they want to comment, then proudly proclaim you're doing this not because you're too lazy or cheap to moderate, but because you're really just super passionate about improving online conversation. It's kind of a thing.
The Verge seems to be the latest news outlet to join the trend, with co-founder Nilay Patel informing readers this week that the site will be shutting down its comment section because the Internet has just gotten too kooky to concentrate. Like other comment-section killers, The Verge rather proudly proclaims that this move is part of an effort to build better relationships:
"What we've found lately is that the tone of our comments (and some of our commenters) is getting a little too aggressive and negative — a change that feels like it started with GamerGate and has steadily gotten worse ever since. It's hard for us to do our best work in that environment, and it's even harder for our staff to hang out with our audience and build the relationships that led to us having a great community in the first place."
Nothing quite says "building relationships" like removing the ability for your readers to publicly speak to you. Meanwhile, if you can't do your "best work" because a few obnoxious trolls can't stop pooping in your comment section, maybe don't read the comments until you're done working? As we noted when Reuters, ReCode, Vox and everybody else killed comments in the noble pursuit of higher planes of communication, by closing comments down you're sending a clear message to your community and lifeblood that their input doesn't matter.
And as some (whoa, the irony) Verge commenters point out, killing comments (as is done at Verge sister site Vox.com) doesn't do much for the local flora and fauna, either:
To The Verge's credit, they'll still allow forum posts and indicate the comments will return eventually, but the pretense that we're building a better community by putting a collective bag over said community's head never seems to get old. As numerous sites have illustrated, it doesn't take much work to create a more civil, less-batshit comment section. Some do it with minimal moderation. Others, like us at Techdirt, try to create better incentives for good comments and encourage a strong and vocal community, rather than seeing comments as some sort of "task" to be "dealt" with. Hopefully The Verge's comment vacation is a step in that direction, and not toward a permanent community comment vacation.
One of the most wonderful sights to see in the gaming community, particularly in the PC gaming community, is what a combination of a loyal fan-base and a strong modding community can produce. This is particularly so when the mods released are clear and active attempts at doing nothing more than making the original product even better. You see this all the time in PC gaming -- old games being yanked into the present, the replayability of a classic being increased, and even all-new sub-games created out of the original. All of this is done through a modding community that loves the original work produced by game designers. Some gaming companies embrace the modding community, while others don't. Which way they go is typically decided by just how much control the company generally wants to exert over its product.
Guess which way Microsoft tends to go? Well, they tend to be the protectionist sort, but a recent story about the release of a new free-to-play Halo game, Halo Online, both puzzled me and amused me. The puzzled part came from Microsoft firmly insisting that the release would be available for play in Russia only, which...what the hell? Even the excuse of a long testing period in a Russia-only beta setting is, well, kind of strange.
Microsoft: Right now our focus is on learning as much as we can from the closed beta period in Russia. Theoretically, any expansion outside of Russia would have to go through region-specific changes to address player expectations.
Note that availability of the game to markets outside of Putin-ville is theoretical at this point. Except not really, of course, and that's where the amusement came from. Because if the alchemy ingredients for mods are a loyal fan-base, something begging for modification, and a capable modding community, everyone had to know that restricting this to Russia was a barrier the public would test before too long. It turns out that "before too long" meant the past few weeks, because modders were already posting information on their work to free Halo from Russian imprisonment when Microsoft caught wind and fired off a DMCA notice to the host site.
Modders have been mucking about with the leaked Halo Online files to unlock features, with one team creating a game launcher called 'ElDorito.' But all that work came to a screeching halt yesterday after Microsoft sent a DMCA takedown notice to Github, which was hosting the files. The site quickly complied. Microsoft sent the following notice to Github:
"We have received information that the domain listed above, which appears to be on servers under your control, is offering unlicensed copies of, or is engaged in other unauthorized activities relating to, copyrighted works published by Microsoft," the company wrote in a DMCA notice to Github.
Under other circumstances, that might be the end of the story, except that these are game modders we're talking about. When they commit, they're committed, and their work tends to mean that they're the sort of types who know how to route around these sorts of attacks. Now, to be clear, Microsoft certainly has the right to try to kill off these modders' work, but they're going to have to try a lot harder than a single DMCA if they want to really have this battle.
"In terms of DMCA/C&D mitigation, we have made redundant git backups on private and public git servers. This is to ensure we will always have one working copy. These are being synchronized so that data is always the same," [modder] Woovie explains. "Further DMCAs may happen potentially, it’s not really known at the moment. Our backups will always exist though and we will continue until we’re happy."
Team member Neoshadow42 says that, as a game developer himself, he sympathizes with Microsoft to a point about protecting one's copyrighted material:
"As someone involved in game development, I’m sympathetic with some developers when it comes to copyright issues. This is different though, in my opinion,” the dev explains. "The game was going to be free in the first place. The PC audience has been screaming for Halo 3 for years and years, and we saw the chance with this leak. The fact that we could, in theory, bring the game that everyone wants, without the added on stuff that would ruin the game, that’s something we’d be proud of."
Making the moral equation here slightly more complicated is that the things that "would ruin the game" don't only refer to the geo-restrictions, but to other game "features" as well, such as in-game microtransactions that almost uniformly piss off the PC gaming community. The modding team has aimed at removing those from the game as well, which, given that this is a free-to-play game, might break the business model Microsoft set up for the game. I expect Microsoft to continue battling for control of its product, as well as for the game's restrictions and microtransactions.
Ultimately, this is a damned shame, because there's a lesson to be learned from all of this, and that lesson is not that the modding community is the enemy of the game designer. This is pure market testing at its finest. What this entire episode clearly outlines for Microsoft, were it willing to listen, is that potential customers want wider availability for the beta version of the game (as in, not restricted along national borders) and don't want annoying microtransactions in a Halo game. And if they want those things, fans will be willing to pay for them. Should Microsoft continue with its plan not to meet customer demand, those customers likely won't go unfulfilled; they'll simply find their pleasure in the form of a mod from a strong modding community that Microsoft wants to play whac-a-mole with, rather than listen to the wants of its customers.
Over at NiemanLab, there's a good interview with Tom Standage, who runs the Economist's digital efforts, in which he reveals the Economist's general view of how it approaches the internet -- which could be summarized as "deny it exists." Basically, the argument that Standage makes is that people want to feel like they've "completed" something and that they're fully informed, and so the Economist likes to pretend that once you've read it, you're completely informed and don't have to look elsewhere. This is also why the Economist refuses to link to anyone else, because it would disabuse you of the "illusion" that the Economist provided you everything you needed:
...what we actually sell is what I like to call the feeling of being informed when you get to the very end. So we sell the antidote to information overload — we sell a finite, finishable, very tightly curated bundle of content. And we did that initially as a weekly print product. Then it turns out you can take that same content and deliver it through an app.
The “you’ve got to the end and now you’ve got permission to go do something else” is something you never get. You can never finish the Internet, you can never finish Twitter, and you can never really finish The New York Times, to be honest. So at its heart is that we have this very high density of information, and the promise we make to the reader is that if you trust us to filter and distill the news, and if you give us an hour and a half of your time — which is roughly how long people spend reading The Economist each week — then we’ll tell you what matters in the world and what’s going on. And if you only read one thing, we want to be the desert-island magazine. And our readers, that’s what they say.
And as for links:
Another aspect of it is — and I get all the morning briefings, Sentences, the FT one, and Quartz’s, and the rest of them — is that we don’t do links. The reason that we don’t do links, again, if you want to get links you can get them from other people. You can go on Twitter and get as many as you like. But the idea was everything that you need to know is distilled into this thing that you can get to the end of, and you can get to the end of it without worrying that you should’ve clicked on those links in case there was something interesting. So we’ve clicked on the links already and we’ve decided what’s interesting, and we’ve put it in Espresso.
That’s the same that we do in the weekly as well — we’re not big on linking out. And it’s not because we’re luddites, or not because we don’t want to send traffic to other people. It’s that we don’t want to undermine the reassuring impression that if you want to understand Subject X, here’s an Economist article on it — read it and that’s what you need to know. And it’s not covered in links that invite you to go elsewhere.
Mathew Ingram rightly calls this view of things selling an illusion. He notes that such an illusion can be very powerful -- and even very satisfying and appealing. But it's still an illusion.
To me, it's also a version of denial -- a somewhat hubristic denial that actually says (loudly) that the Economist thinks it's much, much smarter than its readers. That seems like a pretty big mistake in the internet age, where (quite frequently) your readers are much smarter. In many ways, we at Techdirt have always taken the opposite approach. We link aggressively outward to source material, knowing that it will help people explore the subject more deeply. We encourage discussion and conversation in our comments, knowing that many of our readers are more knowledgeable on these subjects than we are.
The Economist is obviously super successful, but as we've stated before, the way people consume the news these days is changing. The kind of people who want to just sit down, consume one thing and feel that they're "informed" are going away. That's just not how people consume news these days, and young people especially don't want to consume news that way. They want to explore and dig and share and discuss. The ability to truly interact with the news, research things yourself, share your thoughts and actually be a part of the effort is what's appealing to so many people.
Maybe the Economist's view of things works for people who are scared of the internet and don't like the endless firehose of information that's available, but I'm betting that's a population that will be progressively shrinking, rather than growing.
As I noted earlier this week, at the launch of the Copia Institute a couple of weeks ago, we had a bunch of really fascinating discussions. I've already posted the opening video and explained some of the philosophy behind this effort, and today I wanted to share with you the discussion that we had about free expression and the internet, led by three of the best people to talk about this issue: Michelle Paulson from Wikimedia; Sarah Jeong, a well-known lawyer and writer; and Dave Willner, who heads up "Safety, Privacy & Support" at Secret after holding a similar role at Facebook. I strongly recommend watching the full discussion before just jumping into the comments with your assumptions about what was said, because for the most part it's probably not what you think:
Internet platforms and free expression have a strongly symbiotic relationship -- many platforms have helped expand and enable free expression around the globe in many ways. And, at the same time, that expression has fed back into those online platforms making them more valuable and contributing to the innovation that those platforms have enabled. And while it's easy to talk about government attacks on freedom of expression and why that's problematic, things get really tricky and really nuanced when it comes to technology platforms and how they should handle things. At one point in the conversation, Dave Willner made a point that I think is really important to acknowledge:
I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples... Twitter: "the free speech wing of the free speech party." Twitter moderates spam. And it's very easy to say "oh, some spam is malware and that's obviously harmful" but two things: One, you've allowed that "harm" is a legitimate reason to moderate speech and two, there's plenty of spam that's actually just advertising that people find irritating. And once we're in that place, it is the sort of reflexive "no restrictions based on the content of speech" sort of defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional. But that doesn't mean that all moderation is good.
This is an extremely important but nuanced point that you don't often hear in these discussions. Just today, over at Index on Censorship, there's an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they're free speech issues, yet plenty of people do. The argument, then, is that we'd be able to have a much more useful conversation if people would admit:
Don't say "this isn't a free speech issue", rather "this is a free speech issue, and I'm OK with this amount of censorship, for this reason." Then we can talk.
Soon after this, Sarah Jeong makes another, equally important, if equally nuanced, point about the reflexive response by some to behavior that they don't like to automatically call for blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, beyond just those being harassed), but that it's wrong to think that we should just immediately look to find ways to shut people up:
Harassment actually exists and is actually a problem -- and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it's not just words online. It ends up being a seemingly innocuous, or rather "non-real" manifestation, when in fact it's linked to real world stalking or other kinds of abuse, even amounting to physical assault, death threats, so and so forth. And there's a real cost. You get less participation from people of marginalized communities -- and when you get less participation from marginalized communities, you lead to a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women -- and also its editors just happen to skew overwhelmingly male. When you have great equality on online platforms, you have better social value for the entire world.
That said, there's a huge problem... and it's entering the same policy stage that was prepped and primed by the DMCA, essentially. We're thinking about harassment as content when harassment is behavior. And we're jumping from "there's a problem, we have to solve it" and the only solution we can think of is the one that we've been doling out for copyright infringement since the aughties, and that's just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don't have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said "why can't we do that for harassment" without looking at all the problems that copyright enforcement has run into.
And I think what's really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you're going to have to attack CDA 230 and blow a hole in it.
She then noted that this was a major concern because there's a big push among many people who aren't arguing for better free speech protections:
That's a huge viewpoint out right now: it's not that "free speech is great and we need to protect against repressive governments" but that "we need better content removal mechanisms in order to protect women and minorities."
From there the discussion went in a number of different important directions, looking at other alternatives and ways to deal with bad behavior online that get beyond just "take it down, take it down," and also discussing the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic -- one that people admitted was unlikely to spread to other countries, and whose underlying concepts are actually under attack in many places.
That's why I also think this is a good time to point to a new project from the EFF and others, known as the Manila Principles -- highlighting the importance of protecting intermediaries from liability for the speech of their users. As that project explains:
All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy.
With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.
In short, it's important to recognize that these are difficult issues -- but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms contain some form of moderation (even in how they are designed), we need to be wary of reflexive responses to just "take it down, take it down, take it down" in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues -- not in denying that there are issues to be dealt with. And not just saying "anything goes and shut up if you don't like it," but that there are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.
We've made the argument for some time that a good modding community and culture is a boon for games and game creators. Far from the dangerous infringement on the original works that some seem to think, a prolific modding community can lengthen the shelf life of a game, improve it for customers of the original work, and even allow the original work to spiral off into unforeseen directions, all of which only serve to increase the game's playability, replayability, and fun factor, making it all the more attractive for purchase.
(An aside: many people think that modding, as an element that can be included in business model considerations, is unique to gaming. It isn't. Remixing, after all, is modding in another form, as are fan edits of movies and television shows, or fan-made creations in existing universes. All of these are modding in a fashion similar to how it works for gaming, so don't let anyone tell you that gaming is unique this way.)
After almost 22 years, Doom is finally finished, thanks to mod-maker Linguica's "InstaDoom", which adds 37 Instagram filters to the game and swaps out the fabled BFG for a selfie stick. Available as a free download over at Doom World, "InstaDoom" gives players of the classic shooter a chance to take the battle to the next level by applying filters like "Ashby", "Lo-Fi" and "Valencia".
This, of course, is simply the latest mod coming out for a game that has one of the most insane mod rosters of any in the history of gaming. The modding of the game originally took off in no small part because Doom was an incredibly well-made game, but the continued modding of the game by the loyal fan community is what propelled the game far beyond being relevant to gaming, to instead being relevant to culture as a whole. The very idea that a game made over two decades ago, long before smartphones existed and any of us had to put up with the term "selfie," has been dragged into relevance with cultural motifs tossed in for effect by a modding community still going strong shows the power of a passionate fan base.
With the success of Doom still on display, and sequels continuing to ride on the early success of a franchise still enjoying relevance in its oldest parts, why would anyone want to kneecap the modding community?
We've been talking a lot lately about how the new school of website design (with ReCode, Bloomberg, and Vox at the vanguard) has involved a misguided war on the traditional comment section. Websites are gleefully eliminating the primary engagement mechanism with their community and then adding insult to injury by pretending it's because they really, really love "conversation." Of course the truth is many sites just don't want to pay moderators, don't think their community offers any valuable insight, or don't like how it "looks" when thirty people simultaneously tell their writers they've got story facts completely and painfully wrong.
Many sites justify the move by claiming comments sections are just so packed with bile that they're beyond redemption, though studies show it doesn't actually take much work to raise the discourse bar and reclaim your comment section from the troll jungle if you just give half a damn (as in, just simple community engagement can change comment tone dramatically). Case in point is Salon, which decided to repair its awful comment section by hiring a full-time moderator, rewarding good community involvement, and treating commenters like actual human beings:
"You can measure engagement by raw number of comments or commenters. Using Google Analytics, Livefyre and Adobe, Salon looks at metrics like the number of replies they make as a share of overall comments, how frequently they share Salon articles, and how many pageviews they log per visit. (Users who log in, which is required if you want to comment, view seven pages per session on average, while non-registered users make it to only 1.7, according to Dooling.) After it identified these top commenters, Salon has solicited their feedback and invited them to lead discussions on posts and even help moderate threads.
..."Comments aren’t awful,” (said Salon community advisor Annemarie Dooling). “It’s just the way we position them. The whole idea is not to give up on debate."
That news is now a conversation and a community is something traditional news outlets have struggled to understand, so it's ironic that a major wave of websites proclaiming to be the next great iteration of media can't seem to figure this out either. For example, Verge co-founder Josh Topolsky, spearheading the freshly-redesigned Bloomberg, recently argued that disabling comments is OK because editors are still "listening" to reader feedback by watching analytics and the viewer response to wacky font changes. But that's not the same as engagement or facilitating engagement. Similarly, Reuters and ReCode editors have tried to argue that Facebook and Twitter are good enough substitutes for comments -- ignoring that outsourcing engagement to Facebook dulls and homogenizes your brand.
"I feel very strongly that digital journalism needs to be a conversation with readers. This is one, if not the most important area of emphasis that traditional newsrooms are actually ignoring. You see site after site killing comments and moving away from community – that’s a monumental mistake. Any site that moves away from comments is a plus for sites like ours. Readers need and deserve a voice. They should be a core part of your journalism."
Now -- can you quantify and prove that money spent on community engagement will come back to you in clear equal measure as cold, hard cash? Of course not. But all the same, it's not really a choice. We're well beyond the Walter Cronkite era of journalism where a talking head speaks at the audience from a bully pulpit. We're supposed to have realized by now that news really is a malleable, fluid, conversational organism. Under this new paradigm, reporters talk to (and correct) other reporters, blogs and websites talk to (and correct) other blogs and websites, and readers talk to (and correct) the writers and news outlets. You're swimming against the current if your website design culminates in little more than a stylish uni-directional bullhorn.
We've been noting how the trend du jour among news outlets has been to not only kill off your community comments section, but to proudly proclaim you're doing so because you really value conversation. It's of course understandable that many writers and editors don't feel motivated to wade into the often heated comment section to interact with their audience. It's also understandable if a company doesn't want to spend the money to pay someone to moderate comments. But if you do decide to reduce your community's ability to engage, do us all a favor and don't pretend it's because you really adore talking to your audience.
The latest war on comments comes courtesy of the folks over at Bloomberg. You may have noticed that the Bloomberg media empire recently went through a bit of a consolidation and redesign under the leadership of former Verge editor-in-chief Josh Topolsky. Buried among the vertigo-inducing fonts and the amusing new 404 warning is, you'll note, a very obvious lack of user comments. This is, to hear Topolsky tell it, because comments don't actually reflect your community:
"I've looked at the analytics on the commenting community versus overall audience. You’re really talking about less than one percent of the overall audience that’s engaged in commenting, even if it looks like a very active community,” he says. “In the grand scheme of the audience, it doesn't represent the readership."
In other words, because most users can't be bothered to comment, we're going to eliminate a major artery of input for those users who do choose to closely engage with the authors and website. Not to worry, says Topolsky -- just because Bloomberg no longer gives a damn what you say to its authors regarding individual pieces, that doesn't mean the website isn't listening to its userbase when it comes to quirky color and font schemes:
"Nothing about the new Bloomberg is set in stone; Topolsky says the entire process is iterative, and that includes the comments. The digital team will be monitoring reader behavior across desktop and mobile to see how they’re reacting to and interacting with the new site. For example, on launch day, they experimented with header height so see what readers like better. On mobile, where they’re working to “find the right balance between design and imagery and text,” Topolsky plans to experiment with different formats — more text versus more color versus a grid — to figure out what draws readers in."
While at least Topolsky seems open to the idea of comments returning, he still misses the point: watching analytics to judge responses to design changes isn't the same as actually allowing a conversation with your audience. If you actually do value your readership, you wouldn't be outsourcing their conversations to the feral and intellectually-stunted Facebook mind pool. As some Techdirt regulars have noticed, local comments encourage local community, and despite all the hand-wringing about trolls out of control, studies have recently shown it only takes treating commenters like real people (and a little moderating) to dramatically raise the discourse bar. This is your audience and your community, not a raging cacophony of encroaching cybernetic hyenas in need of a good napalming.
I still think the lowly comment section is getting a bad rap during this latest site redesign phase (led by folks like ReCode and Vox), and it's leading to a continued droll homogenization of not only website design, but of participatory news conversation itself.
There's a trend afoot among some website editors to kill the comment section, then proclaim that they've courageously decided to reduce conversation to help improve conversation. It's a random bit of logic we've noted doesn't make any sense if you're interested in actually fostering a local community, and care about not having all conversation outsourced to Facebook. The pretense that you're killing comments because you're nobly trying to further human communications (and not, say, because your website is cheap and lazy) is also disingenuous. That hasn't stopped ReCode, Reuters, Popular Science, or some newspapers from killing comments in order to push humans to the next evolutionary level (or whatever).
This week, yet another website joined the "comments are evil and have no use" parade. In a now familiar treatise, TheWeek.com announced that while editors "truly do value your opinions," you're no longer going to be allowed to express them on their website. According to TheWeek, this nuclear option was required because comments are just filled with horrible, nasty people:
"There was a time — not so long ago! — when the comments sections of news and opinion sites were not only the best place to host these conversations, they were the only place. That is no longer the case. Too often, the comments sections of news sites are hijacked by a small group of pseudonymous commenters who replace smart, thoughtful dialogue with vitriolic personal insults and rote exchanges of partisan acrimony. This small but outspoken group does a disservice to the many intelligent, open-minded people who seek a fair and respectful exchange of ideas in the comments sections of news sites."
Of course, if news outlets spent a few minutes actually moderating the comments section and treating it like a valuable community resource (instead of, oh, a drunk uncle with a bad goiter and nasty halitosis at your wedding), that probably wouldn't be as big of a problem.
One of those recent studies put it this way:

"One surprisingly easy thing they found that brought civil, relevant comments: the presence of a recognized reporter wading into the comments.
Seventy different political posts were randomly either left to their own wild devices, engaged by an unidentified staffer from the station, or engaged by a prominent political reporter. When the reporter showed up, “incivility decreased by 17 percent and people were 15 percent more likely to use evidence in their comments on the subject matter,” according to the study."
Note that by "recognized" the paper just means somebody relatively well-known from the outlet, such as the reporter themselves. The researchers also tried their very best to define "incivility" as a quantifiable metric:
"To develop a list of characteristics that signaled incivility, we drew from past research on the characteristics of uncivil discourse (Papacharissi, 2004; Sobieraj & Berry, 2011). To be coded as uncivil, the comment needed to include one or more of the following attributes: (1) Obscene language / vulgarity (e.g. “A@$#***les”), (2) Insulting language / name calling (e.g. “you idiots”), (3) Ideologically extreme language (e.g. “Liberal potheads”), (4) Stereotyping (e.g. “Deport them illegals”), or (5) An exaggerated argument (e.g. "It’s very easy to solve all of this just keep your legs closed if you don’t want a baby.”). Comments containing any one of these characteristics were coded as uncivil."
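The study's rule — code a comment as uncivil if it matches any one of the five categories — is easy to sketch in code. The following is a toy illustration only, not the researchers' actual instrument: the real content analysis used trained human coders, and the keyword patterns below are invented stand-ins seeded with the study's own examples.

```python
# Toy sketch of the study's coding rule: a comment is "uncivil" if it
# matches at least one of the incivility categories. The regex patterns
# are illustrative placeholders, not a real classifier.
import re

UNCIVIL_PATTERNS = {
    # (1) Obscene language / vulgarity, e.g. "A@$#***les"
    "obscenity": re.compile(r"[a-z]*[@$#*]{2,}[a-z]*", re.IGNORECASE),
    # (2) Insulting language / name calling, e.g. "you idiots"
    "name_calling": re.compile(r"\byou (idiots?|morons?|fools?)\b", re.IGNORECASE),
    # (3) Ideologically extreme language, e.g. "Liberal potheads"
    "ideological_extremity": re.compile(r"\bliberal potheads\b", re.IGNORECASE),
    # (4) Stereotyping, e.g. "Deport them illegals"
    "stereotyping": re.compile(r"\bthem illegals\b", re.IGNORECASE),
}

def code_comment(text: str) -> list[str]:
    """Return the incivility categories a comment triggers (possibly none)."""
    return [name for name, pattern in UNCIVIL_PATTERNS.items()
            if pattern.search(text)]

def is_uncivil(text: str) -> bool:
    # Per the study's rule: any single matching category marks the comment uncivil.
    return bool(code_comment(text))
```

Even this crude version makes the study's point concrete: the bar being measured is a low one (slurs, insults, symbol-masked obscenity), which is exactly the kind of thing light human moderation catches easily.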
Having been a blogger (and a moderator of one of the Internet's larger tech forums) for more than fifteen years, I can anecdotally note that even the biggest jackasses generally do dial back the antisocial angst when you calmly and politely talk to them (whether I've always been able to do that every day without piling on antisocial angst of my own is another discussion). But with actual involvement and reasonable moderation, it's not hard to reclaim a comment section from the encroaching, troll-induced apocalyptic jungle. What websites that close comment sections are doing is telling everyone they don't give quite enough of a shit to work to improve them. Proceeding to proclaim this is just because you really love conversation informs that same community that you also think they're kind of stupid.
As we've been noting, there's a growing trend afoot whereby some news websites have started unilaterally declaring the lowly news comment section dead, and therefore have started eliminating the ability for visitors to comment entirely. While it's one thing to just close site comments and be done with it, sites like ReCode, Reuters and Popular Science have been quick to insist that they're killing comments for the good of the "conversation," which sounds so much better than "we closed news comments because we're too cheap and lazy to police bile and spam."
At a time when racial conversation couldn't be more important, the St. Louis Post-Dispatch has decided to join the war on comments, this week declaring that the paper would be eliminating comments from its editorials completely. This is, the paper declares, because it's very much concerned about having a "meaningful discussion":
"We intend to use our opinion pages to help the St. Louis region have a meaningful discussion about race. So we are going to turn off the comments in the editorial section for a while, and see what we learn from it. (Comment will continue on news articles). Comments might return to the opinion pages. Or we might find that without them, the discussion — through letters, social media conversations and online chats — rises to a higher level."
Again, does anything say "we love conversation" quite like restricting conversation? Like ReCode and Reuters, the paper appears to believe that e-mail and social media are good enough substitutes for an open conversation on site -- not understanding that part of building a community involves cultivating a regular, engaged local readership, and protecting that readership from the angsty dregs of the Internet.
The paper justifies its move by leaning heavily on a recent University of Wisconsin-Madison study (also see this NY Times report) that found news story readers could have their opinions manipulated through completely unmoderated comments (something astroturfing and marketing firms have relied on for ages):
"In their study, published last year, researchers concluded that “Much in the same way that watching uncivil politicians argue on television causes polarization among individuals, impolite and incensed blog comments can polarize online users.” In some cases, negative blog comments actually changed readers’ perception of what they read, not just their opinions about it."
But isn't shifting opinions part of having any conversation, online or off? And is killing the comment section entirely really the way to handle aggressive, trolling, or misleading comments? It still feels like many outlets have just grown tired of managing their own communities, but instead of admitting that they're not invested enough to spend time weeding the troll garden, they've taken to disingenuously claiming they're somehow revolutionizing online conversation -- by making sure there's less of it.