Update: After hearing from a few people at Huffington Post, it appears that the original explanation from Isaf was unclear, and led us to believe they were moderating comments based on advertiser preferences. However, Huffington Post has now clarified that they use the same AI just to determine how to place ads on certain content, and that's what Isaf meant by his remarks, not that they moderate comments based on advertiser preferences.
We've been somewhat excited that we're rapidly approaching one million total comments on Techdirt. We thought it was quite a nice milestone. But we feel a bit small to learn that the Huffington Post already has over 70 million comments just this year alone. Over at Poynter, Jeff Sonderman has a fascinating interview with the site's director of community, Justin Isaf, about how they manage all those comments. Apparently they have a staff of 30 full-time comment moderators, helped along by some artificial intelligence (named Julia) from a company they bought just for this technology.
Now, obviously, sites have lots of different philosophies on moderating comments. Our own is pretty open. We have a spam filter that tries to cut out obvious spam (of which we get about 1,000 attempts per day, last I checked), and other than that, comments are basically unmoderated. We do have a system that allows the community to vote on funny and insightful comments (which we then round up in a weekly "best of" post). We also, just recently, introduced our first word/last word feature, which lets the community promote certain comments. Finally, the community can also "report" comments they find problematic, which then minimizes those comments, though they remain available for anyone to see with one click. We've found that this system of trusting the community works pretty damn well overall.
HuffPo, on the other hand, between the technology and the moderators, seems more focused on nudging the conversation themselves. I can understand and respect that choice, but there was one detail that struck me as a bit questionable:
I’m a big fan of having machines help us with the lower level tasks, freeing up time, resources and brain power for more interesting and complex tasks. Julia [the artificial intelligence system that HuffPo owns] takes that a few steps further and helps us with a lot of other aspects of HuffPost in addition to helping weed out abusive members, including identifying intelligent conversations for promotion, and content that is a mismatch for our advertisers. She has allowed us to do a lot more with a lot less.
(Note: see update at the top). I recognize that these are all advertising businesses, but I'm a bit surprised to see HuffPo so blatantly admit that they moderate comments if they're "a mismatch for our advertisers." I've seen plenty of sites say they'll moderate inappropriate commentary, but leave reasonable commentary alone even if it's critical. But HuffPo is basically saying that if advertisers aren't likely to like the comments, they may moderate them. It's their system, and they can do what they want with that, but personally, that makes me feel uncomfortable. We've always tried to promote the fact that our own community is very opinionated (and not shy about it) when we've spoken to advertisers, and we use that as a way of explaining why things they do should be authentic and real, rather than forced and phony. And, because of that, we'd like to think that we're able to drive more interesting engagement. If you leave open the possibility of moderating comments that advertisers won't like, that seems to only encourage bogus and annoying advertising, since marketers may never learn that people don't actually like that kind of thing.
In the end, HuffPo's position is obviously self-serving, even as they pretend that it's best for advertisers. What they may end up doing is hiding the fact that the advertisements are bad, rather than improving the quality of the advertising. Now, obviously, I'm sure AOL does quite fine with HuffPo's ad selling (and they're a hell of a lot bigger than us), but it still struck me as interesting to see the company so blatantly admit how it reacts to content its advertisers might think is "a mismatch."