The Kids And Their Algo Speak Show Why All Your Easy Fixes To Content Moderation Questions Are Wrong

from the blink-in-lio dept

Last month at SXSW, I was talking to someone who explained that they kept seeing people use the term “Unalive” on TikTok as a way of getting around the automated content moderation filter that would downrank or block posts that included the word “dead,” out of fear of what such a video might be discussing. Another person in the conversation suggested that I write an article about all the ways in which “the kids these days” figure out how to get around filters. I thought it was a good idea, but did absolutely nothing with it. Thankfully, Taylor Lorenz, now of the Washington Post, is much more resourceful than I am, and went ahead and wrote the article that had been suggested to me — and it’s really, really good.

The article is framed around how “algospeak is changing our language” as people (usually kids) look to get around moderation tools and filters that are usually (but not always) automated.

Algospeak refers to code words or turns of phrase users have adopted in an effort to create a brand-safe lexicon that will avoid getting their posts removed or down-ranked by content moderation systems. For instance, in many online videos, it’s common to say “unalive” rather than “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”

There are some pretty amusing examples of this:

When the pandemic broke out, people on TikTok and other apps began referring to it as the “Backstreet Boys reunion tour” or calling it the “panini” or “panda express” as platforms down-ranked videos mentioning the pandemic by name in an effort to combat misinformation. When young people began to discuss struggling with mental health, they talked about “becoming unalive” in order to have frank conversations about suicide without algorithmic punishment. Sex workers, who have long been censored by moderation systems, refer to themselves on TikTok as “accountants” and use the corn emoji as a substitute for the word “porn.”
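To see why these swaps work so well, here is a minimal sketch of the kind of naive keyword blocklist the article describes. The term list and matching logic below are hypothetical, not any platform’s actual filter; the point is simply that a single respelling sails right past string matching.

```python
# Hypothetical naive keyword blocklist -- not any real platform's filter.
# It flags posts containing blocked terms, so any respelling ("unalive",
# "panini", "accountant") simply never matches.

BLOCKED_TERMS = {"dead", "suicide", "pandemic", "porn"}

def should_downrank(post: str) -> bool:
    """Flag a post if any blocked term appears as a whole word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKED_TERMS)

print(should_downrank("talking honestly about feeling dead inside"))     # True
print(should_downrank("talking honestly about feeling unalive inside"))  # False -- algospeak slips through
```

The cat-and-mouse game the article describes exists because filters like this only ever see the literal strings, not what the speaker means.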

But, really, the article highlights something that we’ve been talking about for ages: the belief that you can just deal with large societal problems through content moderation and filters is silly. In the paragraph above, you can get a sense of some of the issues around suicide discussion and discussions on sex work.

A few months ago, we wrote about how the NY Times (almost single-handedly) kicked up a hugely overblown moral panic about an online forum where people discuss suicide. As we noted in that article, the only reason that forum existed in the first place was that a freak-out over people discussing suicide on a Reddit forum pressured that company into shutting it down — leading people to create this separate forum, in a darker part of the internet where it’s more difficult to monitor.

But, between that and the article on algospeak, people should start to realize that, whether we like it or not, some people are going to want to talk about suicide. Hiding or shutting down all such forums isn’t going to stop them; they’re going to find a place to go and talk. Rather than insisting that no one should ever discuss suicide, shouldn’t we be setting up safer places for them to do so?

Also, as Lorenz’s article makes clear, contrary to the claims of aggrieved Trumpists who insist that all content moderation only targets conservatives, the people who rely on algospeak to get around filters are often the more marginalized folks, who are seeking to find like-minded people to talk to or who are feeling shunned and attacked:

Black and trans users, and those from other marginalized communities, often use algospeak to discuss the oppression they face, swapping out words for “white” or “racist.” Some are too nervous to utter the word “white” at all and simply hold their palm toward the camera to signify White people.

As we’ve discussed in our content moderation case study series, victims of racism have long found it difficult to talk about their experiences without getting moderated for racism. The fact that they have to resort to these types of tactics again shows (1) that yes, lots of people are impacted by moderation, and (2) that people are increasingly forced to find workarounds to bad moderation policies.

Of course, this works in all directions as well:

Last year, anti-vaccine groups on Facebook began changing their names to “dance party” or “dinner party” and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as “swimmers.”

The article even points to an entire site, called Zuck Got Me, that catalogs content and memes Instagram now filters.

Either way, as Lorenz points out in her piece, none of this is to say that all moderation is ineffective or that it doesn’t make sense to moderate — because, as we’ve explained over and over again, some level of moderation is always necessary for any community. However, it does highlight how lots and lots of people get caught in the impossible-to-do-well nature of moderation tools, and that expecting content moderation to fix underlying social problems is a fool’s errand.

Companies: instagram, tiktok


Comments on “The Kids And Their Algo Speak Show Why All Your Easy Fixes To Content Moderation Questions Are Wrong”

42 Comments
This comment has been deemed insightful by the community.
Hyman Rosen says:

Moderation

The solution to moderation problems is for platforms to provide tools for moderation and to allow for affinity groups on the platforms to apply those tools as they wish. If a group wants to filter out “dead” and “suicide”, that’s fine, another group may not.

Moderation at scale is solved by breaking down the large groups into smaller ones, until the scale becomes manageable for the groups themselves.

The work that the platforms do now on moderation still needs to be done. It’s just that rather than applying that moderation to all posts automatically, people get to choose whether they want it. The filters should be granular so that there is a panoply of options to choose from. The platforms can still make some moderation universal – illegal content, commercial spam, repetitive content – under the assumption that this is stuff that almost no one would opt into, but they should be leery of doing this too much.

Facebook affinity groups already work this way beneath the overall moderation policies, although in those cases it’s human moderators doing the work.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

Sounds good.

Apart from the fact that it requires users to be exposed to a bunch of objectionable material before they can get the filters working as they wish, and that there’s a tendency for people to use euphemisms and dog whistles to get their bad ideas pushed through filters.

Individual responsibility is good, but there’s also a desire not to have to do that and to delegate responsibility to others. The fact that some people find their posts get moderated more frequently when a community desires that is not an argument for removing the ability to delegate.

Ultimately that moderation is fine, as long as people in the community that uses it agree.

Bruce C. says:

Re: Re: Re:

But what about…x y and z.

Because internet language and harassment targets evolve about as fast as malware, complaining that a solution isn’t perfect is a strawman. People are subjected to objectionable content (or go seeking it out to hype up moral panic) under the current moderation scheme. Demonstrate that the proposed alternative/addendum is worse than what we have now.

Just as no security solution is perfect but layered security gives you the best risk reduction for your money, putting some moderation control in the hands of users allows them more freedom than global filters, while (mostly) letting them preserve their sanity.

Personal mute or autodelete lists are another moderation tool to consider in addition to self-managed filter lists. Again, not perfect as long as sock-puppets and aliases can be created, but more control than users have right now.

PaulT (profile) says:

Re: Re: Re:2

“Because internet language and harassment targets evolve about as fast as malware, complaining that a solution isn’t perfect is a strawman.”

No system is perfect, nor can it be. I’m fairly sure this is well covered over many years of articles here.

“People are subjected to objectionable content (or go seeking it out to hype up moral panic) under the current moderation scheme”

Not as much as they would be under a system that forced individual rather than community moderation.

“more control than users have right now.”

That’s an interesting comment to me. In most sites I use, things like mute, block, etc. are already available, and to be honest I rarely come across truly objectionable content on most of them due to my using them. But, I’m happy for the community whose rules I agree with to lessen the number of obviously objectionable things I have to filter, even if that causes the people rejected to whine and have to go elsewhere. I don’t see why it would be better for every member to individually make the same choice after being exposed to something obviously wrong.

nasch (profile) says:

Re: Re: Re:3

I don’t see why it would be better for every member to individually make the same choice

I don’t think that’s what’s being suggested. My impression anyway is that the idea is to have various filters provided by someone (could be the platform itself, or someone else), and entities (Facebook groups, YouTube channels, individual users) could choose to apply the filters they want. Perhaps by default all filters are turned on, or some conservative subset of them. So the user wouldn’t start with an unfiltered feed and have to manually build up their filter list from scratch; rather, they could choose to apply filters A, B, F, and L, and leave the others turned off. And then adjust as necessary. I think it sounds like a good idea, and may be necessary in the (hoped for) transition from platforms to protocols.
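A rough sketch of what that opt-in filter model could look like follows. The filter names and matching rules here are invented purely for illustration, not anything a platform actually ships.

```python
# Hypothetical opt-in moderation filters. Each named filter is just a
# predicate over a post; a user (or group) subscribes to the ones they want.
from typing import Callable

FILTERS: dict[str, Callable[[str], bool]] = {
    "self-harm": lambda post: "suicide" in post.lower(),
    "profanity": lambda post: "damn" in post.lower(),
    "ad-spam":   lambda post: "buy now" in post.lower(),
}

def visible(post: str, subscribed: set[str]) -> bool:
    """A post is shown only if no subscribed filter matches it."""
    return not any(FILTERS[name](post) for name in subscribed)

# Default: everything on; this user opts out of the profanity filter only.
my_filters = set(FILTERS) - {"profanity"}
print(visible("damn, that was a good steak", my_filters))  # True  -- filter opted out
print(visible("buy now, limited offer!!!", my_filters))    # False -- ad-spam filter still on
```

The same mechanism works whether the subscriber is an individual, a Facebook group, or a whole community, which is what makes it interesting for a protocols-over-platforms world.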

Anonymous Coward says:

Re: Re: Re:

My understanding is this is more or less something Mike proposed.

And having pre-canned moderation “subscriptions” (as in something you pull from regularly, payment may or may not be involved) would probably make it easier for tons of people. And would address the problem of “but someone might first see something they can’t cope with”.

Finally I think being overly concerned about someone seeing something objectionable (although let me point out I’m not saying you are) is kind of pointless. Life is full of objectionable things.

PS. “murder everyone” is almost a way to resolve the “life is full of objectionable content” … but then people, like me would object to that course of action.

PaulT (profile) says:

Re: Re: Re:2

“Life is full of objectionable things.”

It is. Which is why people don’t like to use their leisure time looking at them. It’s why they tend to congregate in communities that agree with them and take objection to outsiders trying to force their way in with things that offend them.

This doesn’t even mean that something is fundamentally bad. I might be very proud of the steak I cooked today and have lots of people agreeing with me; that doesn’t mean I have the right to force myself into a vegan group with photos, and every member of that group should have no recourse other than to individually block the photos.

Anonymous Coward says:

Re: Re:

Email, like the phone, is at heart a one-to-one system that a limited number of people will use to contact you, and unsolicited emails that get through the provider’s spam filters can be presumed to be spam. Social media, on the other hand, is a many-to-many system where some unsolicited messages are expected, which makes it much harder for a user to moderate.

William Null says:

Re: Re: Re:

That’s why I’ve said unsolicited ADVERTISING messages, specifically. Those are easy to filter out; things like SpamAssassin have dealt with them for as long as I can remember. It’s easy to detect an unsolicited advertising message. A simple Bayesian filter will do, no need to take context into account or do sentiment analysis; an unsolicited ad is an unsolicited ad.
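For what it’s worth, a toy version of the kind of bag-of-words Bayesian classifier being invoked here might look like the sketch below. The training examples are made up and far too small to mean anything, and a real filter like SpamAssassin layers many more signals on top.

```python
# Toy naive Bayes spam/ham classifier -- illustrative only.
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayes:
    def __init__(self) -> None:
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        self.word_counts[label].update(tokenize(text))
        self.doc_counts[label] += 1

    def is_spam(self, text: str) -> bool:
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            # log prior + Laplace-smoothed log likelihood of each word
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                score += math.log((self.word_counts[label][word] + 1) / (total_words + vocab))
            scores[label] = score
        return scores["spam"] > scores["ham"]

nb = NaiveBayes()
nb.train("buy cheap pills now limited offer", "spam")
nb.train("click here to claim your free prize", "spam")
nb.train("are we still meeting for lunch tomorrow", "ham")
nb.train("here are the notes from the call", "ham")
print(nb.is_spam("claim your free pills now"))  # True on this toy data
print(nb.is_spam("notes from the lunch call"))  # False on this toy data
```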

That Anonymous Coward (profile) says:

Re:

One problem is that having humans in charge costs them money, so they dislike that idea.
The other problem is people who demand that the platforms adhere only to what they want to allow others to discuss.

Like firewalls in libraries that block all gay content, be it sexual or not.
I mean I can totally get behind blocking explicit gay sex sites from underage kids, but why block sites that can help kids find answers about themselves if they are questioning?

“People” expect that it’s a simple tech fix that will be perfect & do exactly what they want… ignoring that not everyone thinks or feels the same way about things, and then break out the torches and pitchforks when it isn’t magically made all better for them.

Many “social” sites don’t want to have lots of little groups, because it means more to handle & let’s be honest, once a Karen learns about a group where people discuss suicidal feelings she’ll tell 200 friends & they will call for the CEO to be burned at the stake.

Stephen T. Stone (profile) says:

Re:

The solution to moderation problems is for platforms to provide tools for moderation and to allow for affinity groups on the platforms to apply those tools as they wish.

That is part of the solution. The other part is platforms doing whatever they can to prevent objectionable content from showing up in people’s timelines in the first place. A Black person shouldn’t have to put up with racial slurs being slung their way if the platform can stop those slurs from being posted.

Yes, client-side tools like wordfilters and mutes/blocks are important. But they shouldn’t be the only form of moderation on a platform unless said platform wants to deal with the “Worst People” Problem.

This comment has been deemed insightful by the community.
That Anonymous Coward (profile) says:

Something something issues are never as black and white as people imagine.

Hell they are passing laws across the nation pretending that if we ignore transpeople, there will be no transpeople, and the only reason there are transpeople is because a book once mentioned transpeople.

I mean it helps that the average person has never actually spent time with a transperson, so their only experience is what talking heads tell them about the fear they should have because, despite not a single fucking case of it, transpeople are just trying to get into the women’s room to rape.

They think it is a choice, like joining a religion with a bunch of other racist assholes, but it’s not.

Rather than deal with difficult issues, we pretend hiding them from view is the right answer & we’ll let the parents teach their kids about the hard topics.
Working so well for sex, I mean how many DNA paternity shows do we actually need?
And some of the STUPIDEST excuses for why even though he had sex with her, that can’t be his baby.

It’s never going to change because there is too much mileage still left in keeping the different as “secret/dirty/wrong” so they can get votes from people who enjoy having an easy target they can go after without being cancelled.
Perhaps that’s why they fear CRT so much: imagine kids learning that humans have been cruel to each other, then claimed they changed, but then made the cruelty legal, forcing other humans to be less than human… they might notice this is still being done and it’s still fucking wrong.

Take every racial epithet, you can find the exact same arguments still being used with just who to blame changed.
Negros will rape white women if we allow them to use white bathrooms.
Negros will steal your children & make them listen to that devil music.
Negros will give your kids drugs & ruin their fine upstanding white moral fiber.
Okay I admit that first example fails a bit when you rotate in teh gays, but we just prey on the homophobia and say teh gays are only in the restroom to ogle straight guys.

Perhaps the bigger lesson here is that kids are aware of topics you pretend they never heard of, and want to talk about them but don’t see you as someone they can honestly talk with about hard topics?

Not mentioning racism didn’t make it better.
Not mentioning sex ed didn’t make it better.
Not mentioning teh gays didn’t make it better.
Not mentioning transpeople didn’t make it better.
Just saying no to drugs, how’s that working out for ya?

And before the conservative idiots descend, I am not suggesting we teach kids in kindergarten all about gay sex, but it is appropriate to acknowledge that some kids have a mom & dad, just a mom or dad, 2 dads or 2 moms; families come in all sorts of ways, but they are all families and they all love their children.

We have people with way more degrees than I have who know the right way to set up educational material that is appropriate for various ages, and who are WAY more qualified than a bunch of moms who want the right to come in and introduce the Karen Instructional Method of “if we disagree with it, no one can ever hear about it.” These are the same moms who were absolutely positive that masks were just to infringe on their kids’ freedoms, rather than part of trying to contain a virus we knew very little about while we were doing our best. They wanted teachers to be denied masks & report for work, ignoring that their kids could infect the teachers… I guess they forgot how a cold can rapidly pass through a classroom.

People will find ways to communicate; they will use coded language just like everyone else. The difference is that these kids are using it to talk about topics adults overreact to kids discussing, rather than politicians using coded words to “hide” their racism/sexism.

William Null says:

There are other terms for when people want to talk about suicide

Unalive seems to me like it pertains to death in general. However, there are terms that refer to suicide, specifically. Among such terms, “self-deletion” and “getting game over IRL on purpose” are the most common ones from what I’ve seen.

Also, since you see how bad moderation gets and how impossible it is to do, are you really still going to cling to the idea that you need it? Algorithms are bad at it (because AI can’t understand context yet), humans are bad at it (either from being overworked or out of actual malice), so why do it beyond the most basic kind, that is, removing unsolicited advertising messages (which are quite easy to define, and even the simplest Bayesian filter can deal with them with little to no false positives)?

This comment has been flagged by the community.

Koby (profile) says:

Re: Re:

so why do it beyond most basic one, that is to remove unsolicited advertising messages

The best excuse given (not that I agree with it) is corporate advertising. Advertisers believe that they can pick and choose the content for which they display ads. They want their ad seen on that feel-good comedy video that got 5mil views. They claim that their brand will get damaged if their product gets advertised alongside some edgy video that discusses a controversial topic. As if viewers care, or can’t distinguish between advertising and message approval. Much to the dismay of the advertisers, their ads are once again being displayed on videos that would have been demonetized two years ago.

As usual, follow the money.

William Null says:

Re: Re:

Yeah, this is one of the things that bothers me. Why don’t platforms just go with the Substack model, where you subscribe (as in a monetary subscription, not “get notified when xXDarkGamingLordXx publishes something”) to a creator, and/or creators have a tip jar, with the platform provider skimming from these subscriptions to sustain the platform? Ads should only be used as a last resort when you can’t sustain the platform any other way. Nobody likes to see ads anyway, so any platform that does away with them while still being able to sustain itself and focusing on free speech principles (even if it’s technically not required to do so) would be very good.

I would probably build such a thing myself if I had the financial resources to do so.

PaulT (profile) says:

Re: Re:

“The best excuse given (not that I agree with it) is corporate advertising”

If you don’t like corporate advertising, don’t use platforms whose business model is that corporate advertising pays for the service.

“As usual, follow the money.”

Indeed. So stop whining that the services that are paid for by ad money don’t accept what you and your friends do, and move to a platform that does, even if that means you pay up front instead of getting the free ride enjoyed by people who say things acceptable to the ad machine.

William Null says:

Re:

If the word filter is global, then I agree it’s easy to get around. After all, everyone knows what words are blocked (more or less), even if the list isn’t public. But how can AnnoyingJerk234 know that KindAnn227 has the phrase “kill yourself” in her private blocklist that is only visible to her?

The answer is he can’t. Everyone will see the post where he advises Kind Ann to end her existence, except for Ann herself.

So AnnoyingJerk234 will get a reaction (probably a negative one, because, let’s face it, he’s a jerk) even if it seems to him that the recipient of his abuse is ignoring him. He wouldn’t know that such a phrase is in KindAnn227’s private blocklist; he would be none the wiser. It would just seem to him that his target is ignoring him, while others push back against him (since they can see the post).
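A minimal sketch of that per-viewer arrangement is below. The target’s name and the blocked phrase come from this comment; “Bystander42” is an extra made-up viewer. The post exists for everyone and is hidden only from viewers whose own private list matches it.

```python
# Hypothetical per-viewer blocklists: filtering happens at display time,
# per viewer, rather than removing the post for everyone.
PRIVATE_BLOCKLISTS: dict[str, set[str]] = {
    "KindAnn227": {"kill yourself"},
    "Bystander42": set(),
}

def feed_for(viewer: str, posts: list[str]) -> list[str]:
    """Show only posts that match none of the viewer's blocked phrases."""
    blocked = PRIVATE_BLOCKLISTS.get(viewer, set())
    return [p for p in posts if not any(phrase in p.lower() for phrase in blocked)]

posts = ["hope you have a nice day", "why don't you kill yourself"]
print(feed_for("KindAnn227", posts))   # the abusive post is hidden for Ann only
print(feed_for("Bystander42", posts))  # everyone else still sees it and can push back
```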

That Anonymous Coward (profile) says:

Re: Re:

But it’s just easier to demand tech do all the work for them & then scream when it fails.

While the data brokers have given them way more data than they ever should have on a person, no one has thought to use it to build word filters for people… yet.

AnnoyingJerk234 might not get a reaction because his message wasn’t seen; the problem is then the next tier of people who give him the big reaction he was looking for & manage to still barge into KindAnn227’s sphere.

I once sent a picture of peanut butter on a knife to a female friend; said peanut butter was looking rather phallic. She was in on the joke, I was in on the joke, 99% of the people who watched us tweet at each other were in on the joke… the problem was the white knight who rode in to save this damsel in distress from my lecherous peanut butter. At first my friend was a little confused about what the knight was trying to accomplish; it was just a picture of peanut butter. I explained to her that he had ridden in to save her from the horrors of an image of phallic-shaped food… I then excused myself and moved out of the blast radius as she took his head off. She didn’t need saving or protection from me, it literally was a picture of peanut butter, not an actual penis, and she would have no problem kicking my ass all on her own if she had been offended.

Sometimes people get involved thinking they are doing a good thing, only to make things much worse. If KindAnn227 isn’t responding to the asshat, rather than trying to teach him manners/a lesson/etc., just follow the target’s lead and deny him any possible joy. It’s hard to do because we want to be helpful, but sometimes doing nothing is the most help.

Aardvark Cheeselog says:

But, between that and the article on algospeak, people should start to realize that, whether we like it or not, some people are going to want to talk about suicide. Hiding or shutting down all such forums isn’t going to stop them; they’re going to find a place to go and talk. Rather than insisting that no one should ever discuss suicide, shouldn’t we be setting up safer places for them to do so?

You’d think that yes, you’d like to be setting up safer places.

But there are some people, and they will never, ever shut up or even acknowledge any critics, who will insist that setting up the safer spaces “sends the wrong message.” And they will veto and sabotage such efforts, ignoring laws if they have to.

There are more such people than you might think, and they can keep their crazy veto vendettas going for longer than you might imagine. Consider the US trade embargo on Cuba, or the War on Drugs.

Dan Neely (profile) says:

Nothing new under the sun

Back in the late 90s internet we’d talk about pron to avoid early filters, or luaP noR (to avoid getting deluged by his fanbois).

Although it’s been a long time since I’ve seen anyone include a modern equivalent of the Carnivore Bait (a string of words intended to trigger an FBI surveillance program of the same name and flood it with false positives) that used to be popular in some groups’ email or usenet signatures.
