Be Careful What You Wish For: TikTok Tries To Stop Bullying On Its Platforms… By Suppressing Those It Thought Might Get Bullied

from the this-shit-ain't-that-easy dept

Be careful what you wish for when you demand that internet platforms police the internet for any and all bad stuff. There was a lot of fuss and cringing when the story broke that TikTok’s content moderation strategy included suppressing videos by disabled, queer, and fat creators.

Leaked documents reveal how TikTok hid videos of people with disabilities. Queer and fat users were also pushed out of view.

No matter how you look at it, this looks bad, and for good reason. But, as the company itself claims, it had good intentions here, even if the execution was atrocious. There have been tons of reports of bullying on the platform, and, as with so many social problems made more visible by technology, the first reaction of many is to blame the tech platform and demand that it “fix it.”

And, à la the infamous paperclip maximizer thought experiment, what’s the most efficient way to stop bullying? Some figured it might be to hide the likely-to-be-bullied rather than the actual bullies:

The relevant section in the moderation rules is called “Imagery depicting a subject highly vulnerable to cyberbullying”. The explanations say that this covers users who are “susceptible to harassment or cyberbullying based on their physical or mental condition”.

According to the memo, mobbing has negative consequences for those affected. Therefore, videos of such users should always be considered as a risk and their reach on the platform should be limited.

TikTok uses its moderation toolbox to limit the visibility of such users. Moderators were instructed to mark people with disabilities as “Risk 4”. This means that a video is only visible in the country where it was uploaded.
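To make concrete what the memo describes, here is a rough sketch of that visibility rule’s logic (purely illustrative: the function and field names below are hypothetical, since TikTok’s actual implementation is not public):

```python
# Hypothetical sketch of the "Risk 4" rule the leaked memo describes:
# a tagged video is only shown to viewers in its upload country.
# None of these names come from TikTok's code; this is illustration only.

def visible_to(video: dict, viewer_country: str) -> bool:
    """Return True if a viewer in viewer_country may see the video."""
    if video.get("risk_level") == 4:
        # "Risk 4": only visible in the country where it was uploaded.
        return viewer_country == video["upload_country"]
    return True  # other risk levels would be handled elsewhere

video = {"id": 123, "upload_country": "DE", "risk_level": 4}
print(visible_to(video, "DE"))  # True
print(visible_to(video, "US"))  # False: geofenced by the moderation tag
```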

And, yes, there is a very reasonable argument that the content moderation team at TikTok/ByteDance should have recognized that this is a horrible way to deal with bullying. But you can see how those desperate to deal with “the bullying problem” might end up thinking that this is the simplest path to get people to stop screaming at them about bullying.

This is a key point that we keep trying to raise in the mad dash currently happening to put responsibility on platforms to “clean up” whatever mess politicians and the media see. There’s this weird belief that the platforms can wave a magic wand and make bad stuff go away — when the “easier” solution (if a morally questionable one) is to just figure out a way to hide the real problems or sweep them under the rug.

This is why I keep trying to argue that if we’re highlighting societal problems that manifest themselves on social media, expecting tech platforms to magically solve those societal problems is not just going to fail, it’s going to fail in spectacular and awful ways. This TikTok “hide the people we think might get bullied” approach is just one example of sweeping a societal problem under the rug to avoid having to answer for it.

Unfortunately, I fear most people will just blame TikTok for it instead.

Companies: tiktok


Comments on “Be Careful What You Wish For: TikTok Tries To Stop Bullying On Its Platforms… By Suppressing Those It Thought Might Get Bullied”

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Well that's one way to deal with bullies/trolls...

… take out their targets before they get the chance. As counter-bully tactics go it’s certainly unique at least.

Unfortunately, I fear most people will just blame TikTok for it instead.

While the social/political pressure and those pushing it to ‘Do Something’ certainly carry the majority of the blame, in this case TikTok deserves a good portion of the blame too. Desperate or not, the strategy they went with here is beyond absurd, and if anything seems likely to encourage bullies/trolls, since it gives them a way not just to act like jackasses but to get content they don’t like restricted, adding insult to injury for their would-be victims.

People need to stop losing their minds, heaping blame on the wrong targets, and demanding the impossible, to be sure, but that doesn’t fully absolve companies/platforms of responsibility when they respond with actions that make the problem worse; if nothing else they should be called out so that others don’t follow suit down the line.

This comment has been deemed insightful by the community.
JoeCool (profile) says:

Re: Well that's one way to deal with bullies/trolls...

It’s not that unique – schools have been using this same tactic for decades. Separate the people who might get bullied into different classes, then ignore any bullying until the bullied party throws a punch, then expel the bullied party. It’s always much easier to "deal" with the few bullied people than with all the bullies. Less screaming from A-Type parents as well. (A-Type in this case meaning Asshole.)

Anonymous Coward says:

Re: Re: Well that's one way to deal with bullies/trolls...

Our schools have been this bad for decades, but "Kingston-area parent believes elected trustee was punished for trying to represent him" (from a local TV station, CKWS-DT 11: globalnews.ca/news/6258432/ldsb-school-trustee-censure-reaction/) suggests that they’ve reached a new low. A teen was being bullied, the vice-principal of the school responded by asking the victim point-blank "Are you gay?", the victim’s parent complained to a school board trustee. The trustee DID THEIR JOB by enquiring at the school as to what was going on and WAS PROMPTLY CENSURED BY THE REST OF THE BOARD and forced to apologise for interfering in the day-to-day operation of the system. The offending instructor, meanwhile, was promoted to headmaster.

That’s the same school board as (or a successor to) the one where a school did nothing when I was being bullied in the early 1980s and told me to ignore the problem because the bullies "just want attention". When I fought back, they suspended me for two days.

Clearly, nothing has been learned. All that has changed post-Columbine is that it’s possible to be kicked out of class for mentioning Columbine High School. The bullying, meanwhile, continues unimpeded. It’s teaching students a valuable lesson for when they enter a real world where bullies with money and lawyers silence their victims with non-disparagement agreements, non-disclosure agreements and strategic lawsuits against public participation… and in which Russia chooses the president of the Excited States over the objections of 2,868,692 Americans (the gap in the popular vote, where even a weak candidate is better than this).

R/O/G/S says:

Re: Re: Well that's one way to deal with bullies/trolls...

EXCELLENT POINT.

This is really how it works in the real world. Bars are the same, policing is the same. Nearly every mass shooter/butter knifer/incel car crasher is the by-product of this social contagion.

The victims are most frequently expelled/arrested/disciplined/blamed.

It’s bizarre to expect Socmed to somehow overcome that social problem by itself, especially when a huge portion of Socmed harassers are actually institutional bullies and cyberstalkers from within the military/private contractor CVE milieu, and even the corporations themselves.

R/O/G/S says:

Re: Re: Re:2 Well that's one way to deal with bullies/trolls.

That’s great, coming from one of you in a thread about (get this) bullying.

Anything to say of substance? Yeah, I didn’t think so.

The cognitive dissonance on your part is itself stifling, but add to that your trolling/cyberstalking of my every post, and there you have it.

None of those whom you posit as bad guys (aside from the fact that some are women or girls, like Sol Pais) was convicted of any crime, nor even openly accused, so your thesis is the equivalent of justifying lynching.

R/O/G/S says:

Re: Re: Re:4 Well that's one way to deal with bullies

Defensive? No, not in the least, because I have repeatedly supported that claim.

But you could say that I am AWARE that an extremely high percentage of ACs online are potentially dangerous.

And so, I am properly responsive to the patterns of military/police/intel/NGO cyberstalking on forums just like this.

https://www.computerworld.com/article/2475679/online-gaming-surveillance–so-many-nsa—cia-spies–they-were-spying-on-each-other.html

I haven’t published since 2009, other than my blogs.

Anonymous Coward says:

Re: Re: Re:5 Well that's one way to deal with bul

"an extremely high percentage of ACs online are potentially dangerous"

How does one measure the quantity of anonymous cowards surfing the internet at any particular instant and what specifically makes them dangerous?

What does "properly responsive" mean in this context?

You are not feeling defensive in the least, but accuse others of cyberstalking – lol.

R/O/G/S says:

Re: Re: Re:6 Well that's one way to deal with

Yeah, some 40% of ACs online, and especially on Twitter, are often military/intelligence/police-affiliated profilers and behavioral analysts who weaponize words and apply psychic driving techniques on unsuspecting web kiddies, in order to create pretext and predicate to then manufacture terrorists, by endlessly harassing these guys under the watchful eyes of the Good Guys®.

One recent example was the Pensacola shooter, who was being cyberstalked by none other than FBI-Mossad-CIA affiliated Rita Katz and her Muslim Hatin® organization SITE Intelligence, AFTER he filed a sexual harassment complaint against his trainer and then had to endure six more months of juvenile bullying there, finally targeting his aggressors.

These intel sadists/sickos/whackjobs are nearly ALWAYS ironically close to mass shooters’, butter knifers’, and Incel car crashers’ lives immediately before they go ballistic; and frequently fill the ranks of those guys’ Twitter/Facebook/other Socmed.

This pattern is SO CLEAR that these guys’ web presence gets scrubbed immediately following those events, in order to hide this pattern of malicious and outrageous government conduct, cuz turrerisms depend on religious mythmaking, not fact or empirical evidence.

So, measurement in this case could be derived from the immediate information deficit that follows these events, compared to the OTHER group of ACs online who always seem to have screencaps for the hungry media; and many of THOSE people are exactly who I say they are, whoever they are.

Only problem is, we can’t get that data after the fact to analyze the content and commenters, because CVE is designed to avoid scrutiny of its practices. So I work with what I have.

So, let’s start with your binary supposition about offense/defense.

In re: threat, yeah, I have been cyberstalked numerous times, including dangerous and life-threatening activity that then came offline, by people like you, who exhibit patterns of behavior in forums that also exhibit “forum behavior.”

So, I know the patterns, and you are exhibiting them. YOU and your commenting style, lack of overall substantive engagement, lack of empathetic response, Freudian projector rolling like a remote viewing station in the Panoptical sea of data, etc. Yes, YOU personally fit a pattern.

Then, because you fit a pattern that exists in psychological profiles OF profilers and provocateurs, drawn from both weaponized psychology and that other kind of psychology that actually helps people (and that pattern is one that can be mimicked and replicated by others), I applied that.

I mean, for all I know, you’re just another pimple-faced twat with an exploding pork burrito in your cumshoot. Maybe you were conceived on your mother’s period.

But those bastards usually give up by now, having been easily pacified by said burrito.

So “properly responsive” in this context means that I am in a relative position of security, with zero actual fear of you, knowing that I am using YOU to set an example to OTHERS who are also watching this, and YOU.

But especially remember: I am utilizing demonstrative speech, while you began as 1. an illiterate or disingenuous tone troll, 2. progressed to a flagger, and then 3. an accuser.

While each of these roles deserves its own essay, the presence of all three in rapid succession fits a pattern.

See how that works?
See how that works, single celler?
See how that works HEY WHY IS THAT BLACK HELICOPTER FOLLOWING ME NOW?!

I make no claim that profiling weaponized commenters, air-gapped comment systems and forum analysis that profiles the profilers is anywhere near an exact science, but ROGS Analysis is getting much, much better at weeding them out, whoever they are.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Well that's one way to deal with bullies/trolls...

As counter-bully tactics go it’s certainly unique at least.

If the goal is to stop bullying, is there another way? Everything else would be a reaction to bullying that had already happened. That’s part of the "careful what you wish for" thing; the idea that a wish for world peace could be fulfilled by eliminating humanity is much older than the paperclip maximizer.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Yes, it's called 'punishment for acting like a thug'

If the goal is to stop bullying, is there another way?

‘You’ve been reported for abusive behavior, and after investigation it has been determined that the claims were valid. As this is your first offense your account will be suspended for X number of days. Repeat offenses will increase this amount, and after X number of repeat offenses your account will be terminated, with any attempts to create a new account to bypass this block resulting in immediate termination of any such accounts.’
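A minimal sketch of that escalating policy as code (the thresholds and durations here are invented for illustration, not anything TikTok or any other platform actually uses):

```python
# Toy escalating-strike policy along the lines described above.
SUSPENSION_DAYS = [3, 7, 30]        # invented escalation schedule
MAX_STRIKES = len(SUSPENSION_DAYS)  # beyond this: account termination

def action_for(strike_count: int) -> str:
    """Return the penalty for a user's Nth validated abuse report."""
    if strike_count > MAX_STRIKES:
        return "terminate account (and any ban-evasion accounts on sight)"
    return f"suspend for {SUSPENSION_DAYS[strike_count - 1]} days"

for strike in range(1, 5):
    print(f"offense {strike}: {action_for(strike)}")
```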

Anonymous Coward says:

Re: Re: Re: Yes, it's called 'punishment for acting like a thug'

That’s a reactive measure to bullying, not a preventative one.

Lots of sites have tried it, and found it to be not so easy. Maybe people can work around the bans, maybe there are too many bullies, maybe nobody agrees what constitutes "bullying"…

That One Guy (profile) says:

Re: Re: Re:2 Yes, it's called 'punishment for acting like a thug'

‘If you do X, you will suffer a punishment for it’ seems pretty preventative to me. Unless you pre-vet everything, any punishment is by necessity going to be applied after the bullying has taken place, so your best bet to cut down on it is to make clear that acting in a particular manner will carry penalties, so that those tempted to act that way have to weigh the desire against the concern that it could cost them.

James Burkhardt (profile) says:

Re: Re: Re:3 Yes, it's called 'punishment for acting like a thug'

Because getting kicked off platforms has been so effective at preventing Sargon of Akkad from being an asshole. /s

Outside of the obvious examples of actual bullies using sock puppet accounts to maintain campaigns of harassment even as they get suspended and banned, you seem to misunderstand how bullying of this type is a malformation of normal "ribbing" behaviors in which reciprocity cannot be achieved. That is part of why those who do not suffer bullying have a difficult time understanding the problematic behavior – they see the normal behavior and do not perceive how that behavior is bad. This means your behavioral rules may get the same backlash we keep seeing over moderation overall. Inconsistent application leads to false positives and false negatives and can embolden bad actors.

Anonymous Coward says:

Re: Re: Re:2 Yes, it's called 'punishment for acting like a thug'

There is no such thing as a preventative measure to bullying. There never will be. All measures to deal with bullies will be reactive. If those measures are severe enough then they may act as a deterrent but there is nothing you can do to prevent it that is not discriminatory to someone else.

This comment has been deemed insightful by the community.
Wyrm (profile) says:

Re: Re: Well that's one way to deal with bullies/trolls...

Except this doesn’t stop bullying; it participates in the behavior by telling the potential victims that they don’t have the right to speak up. Or appear in public. Or exist online. They should just hole up somewhere inconspicuous.

That was not a solution; it only serves to mask the problem by pretending it doesn’t exist, since the possible victims are not there to complain about it.
Simply put, it is a case of "worse than doing nothing".

Anonymous Coward says:

Re: Re: Re: Well that's one way to deal with bullies/trolls...

If you consider preventing people from posting certain types of content "bullying", then people are getting bullied on any website that bans "bad" language, pornography, whatever. (Based on past Techdirt stories, sex workers really are getting widely bullied.)

That was not a solution, it only serves to mask the problem by pretending it doesn’t exist

Of course this was a bad idea. But it has that genie-wish "malicious compliance" level of logic to it.

Wouldn’t banning bullies also just be masking the problem? Sure, it makes the website more pleasant for the remaining users, but it does that by giving them the illusion of a bully-free world. Except the bullies will have just moved somewhere where people are less likely to point out their bad behavior.

Wyrm (profile) says:

Re: Re: Re:2 Well that's one way to deal with bullies/trolls.

Two problems in your response.

  1. It wouldn’t be bullying just by itself. It’s more of an "adding insult to injury" type of conduct. But it’s definitely a wrong move because you tell them "we don’t want you here". Even if that’s not the intent of the service provider, they are participating in the bullying by actually enforcing the very message the bullies were already sending.
  2. You’re conflating moderating victims of bullying because of who they are (they didn’t violate a T&C or law; they are only the targets of such violations) with normal moderation of unwanted content. In the former case, you remove people because others might target them with "bad" behavior. In the latter, you target people publishing "bad" content. (Whatever the definition of "bad" is doesn’t really matter.) Those are fundamentally different.

I only agree in that there was likely no actual malice in this move. It’s just a way to "remove bullying" in a seemingly efficient way. Except it’s not, because you keep the actual bad elements in and send the wrong message to both sides, namely that "bullies are free to target another group of people… that you would then have to also ban".

Anonymous Coward says:

The lady behind GlitterAndLazers has figured out how to avoid the censorship: run a Kool-Aid "ad" disguised as a Christmas video.

The damn 30-second nightmare is everything wrong with entertainment on this planet: fat-shaming comments on an obese person pushing a sugar-laced drink on a social media platform run by companies and advertisers justifying censorship based on imaginary threats.

How the fuck did it get this far.

Anonymous Coward says:

Re: Re: Re:

Are you sure it’s Chinese law? I don’t know where their servers are based.

Chinese nationals are indicted in the US all the time, though they often don’t get arrested if they live and stay in China.

I suppose if the Chinese executives living in China needed to be indicted, the proper process would be through the US-China mutual legal assistance treaty, though usually companies try to comply with the laws of the country their target audience is in.

Anonymous Coward says:

Re: Re: Re:2 Re:

The US actually does indict Chinese nationals in China. It doesn’t matter much to me one way or the other, but the DOJ does it all the time.

It’s not really a double standard.

Anyway, I can’t control who gets indicted or how lawsuits play out; I just pointed out there is a law against it, just like there is a law against everything.

Anonymous Coward says:

Re: Re: Re:3 Re:

It’s not really a double standard.

When China tells Americans that something they’ve posted is illegal in China (e.g. the "Tank Man" photo), American companies and American courts pay little attention to that. Telling a Chinese company their policies violate American law, and expecting them to care, would be a double standard.

This comment has been deemed insightful by the community.
Bloof (profile) says:

Remove one persecuted group from visibility on the platform and the people doing it will become emboldened and go after the next to try and make the site conform to their vision of what the world should be. We’ve seen how easy it is for organised groups to game the system to punish people on other platforms, after all.

This comment has been deemed insightful by the community.
Anonymous Coward says:

So TikTok’s solution to bullying is the same as 8-Bit Theater’s Black Mage’s solution to disease ("[You can stop an epidemic by killing all the patients, but then some other jerk will just get sick]").

Personally I don’t expect TikTok to solve a problem the rest of human civilization has never been able to solve in recorded history (even if this is a slightly narrower subset). However, this… I think this would generally be considered creating more problems rather than solving any.

This comment has been deemed insightful by the community.
Rocky says:

Re: Re:

Perhaps you can enlighten us as to how to make the right decisions, then. How do you implement content moderation in such a way that everyone is happy with it?

Just so you know, as long as some group isn’t happy with the solution it means it doesn’t work.

With that, I’ll leave you to your Sisyphean task.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

The fact that companies make wrong decisions again and again does not imply that content moderation is not possible.

Of course content moderation is possible. But effective content moderation isn’t possible at the scale of a service like TikTok. Because that kind of moderation doesn’t scale.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red though?

(But please feel free to explain.)

No problem.

Moderation at large scale is ‘impossible’ both because people are demanding the impossible (whether that be keeping ‘bad stuff’ off without significantly impacting ‘good stuff’, despite the fact that what falls into each category can often change or be context-based, or blaming the platform for the human nature of those that use it), and because the sheer scale means that even superhumanly effective moderation would still result in vast numbers of false positives and false negatives.

Assume 100 posts per day, with moderation that is 99% accurate, resulting in either one ‘bad’ post staying up or one ‘good’ post mistakenly taken down every day. Even then you’d still have people complaining that said ‘bad stuff’ was able to be viewed for a short amount of time and demanding that the platform ‘Do Better’.

Now, add a few zeros so that instead of 100 posts per day the platform is faced with 1,000,000. Sticking with the same impossibly good 99% accuracy rate, that leaves 10,000 ‘failures’ of moderation daily: either bad stuff staying up, good stuff being taken down, or more likely a mix of the two. Given the sheer amount of content involved, ‘only’ botching ten thousand posts would be an impossible accomplishment, yet you can be sure people would still be losing their minds, pointing to the thousands of ‘bad posts’ that slip through daily and demanding that the platform ‘Do Something’, showing how even a near-perfect moderation effort still wouldn’t be enough.

Content moderation ‘is not possible’ at large scale because the sheer volume ensures that even impossibly accurate moderation efforts will still result in large numbers of mistakes in one direction or the other, whether that be bad stuff staying up or good stuff getting the boot. While this doesn’t mean platforms shouldn’t try (within reason) to deal with problems, those demanding that they do so need to understand that far too often what they are demanding simply isn’t reasonable or even possible, and scale back their demands accordingly.
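To put rough numbers on the above (purely illustrative, using the same assumed 99% accuracy):

```python
# Moderation mistakes scale linearly with volume at a fixed accuracy.
accuracy = 0.99  # assumed, as in the comment above

for posts_per_day in (100, 1_000_000):
    mistakes = posts_per_day * (1 - accuracy)
    print(f"{posts_per_day:>9,} posts/day -> ~{mistakes:,.0f} mistakes/day")
# 100 posts/day -> ~1 mistake; 1,000,000 posts/day -> ~10,000 mistakes.
```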

Anonymous Coward says:

Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho

Assume 100 posts per day, with a moderation that is 99% accurate

That kind of logic can work for something with well-defined rules, like when we say that a firearm is never allowed in the passenger cabin of a commercial flight that goes into, out of, or over the USA. For content moderation, a numeric "accuracy rate" is simply nonsense. Every time we think a rule is clear, someone finds a counterexample to take it into a gray area. We need to agree on a fully objective set of rules before we can pretend to calculate the accuracy.

Anonymous Coward says:

Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho

There is a bit of a flaw in the numbers you listed. To evaluate this properly we also need to know the rate of posts_that_should_be_moderated.

In your example of 100 posts per day, how many should be moderated? If we assume a rate of 10% that should be moderated (far higher than reality on most platforms, I suspect), then the 99% accuracy rate for moderation will let 0.1 "bad" posts through, effectively none. For the 1M-post example, 100K should be moderated and 1K will be let through.

A problem in moderation at scale still exists but we need to avoid this kind of inaccuracy lest we give naysayers ammunition or obvious means by which to dismiss the whole problem.
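Concretely, running the same assumed numbers with the base rate included also separates the two failure modes, including the wrongly-removed "good" posts not counted above:

```python
# Same assumed 99% accuracy, now split by an assumed 10% "bad" rate.
posts = 1_000_000
bad_rate = 0.10   # assumed share of posts that should be moderated
accuracy = 0.99   # assumed, as in the thread above

bad = posts * bad_rate
good = posts - bad
false_negatives = bad * (1 - accuracy)   # bad posts that slip through
false_positives = good * (1 - accuracy)  # good posts wrongly taken down

print(f"bad posts let through: {false_negatives:,.0f}")  # 1,000
print(f"good posts taken down: {false_positives:,.0f}")  # 9,000
# ~10,000 total daily mistakes either way; the base rate changes the mix.
```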

That One Guy (profile) says:

Re: Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red

A fair point; as I typed it out rather quickly it was a little rough, so it could certainly do with a little polish. Though even in your correction you seem to have highlighted the problem rather well: 100K posts moderated daily is still an insane number to deal with even if it’s only a fraction of the total number of posts, and even then you’d be looking at a solid thousand ‘mistakes’ on a daily basis with impossible moderation ‘accuracy’.

Anonymous Coward says:

Re: Re: Re:3 Pick 1 blue marble out of 10 red? Easy. From 100

That’s not how moderation works. Moderation is a human activity.

There are automated filters on the large platforms. These do their thing to some degree of accuracy. Undoubtedly they sometimes find things that are referred to humans for evaluation. On top of that are the reports coming in from users, covering both missed posts and improperly filtered posts. This is the subset that gets moderated. The rest were filtered. There is an important difference between those two.

Therefore the number of posts to be moderated is a small fraction of the whole.

Anonymous Coward says:

Re: Re: Re:4 Pick 1 blue marble out of 10 red? Easy. From

As filters are used to replace human moderators, and are making decisions on what to do with a post (including maybe passing it to a human), calling filters different from moderation is a distinction that does not matter as far as the poster is concerned. A filter is simply an automated moderation system. Also, it usually looks like the human moderators deal with reported posts, and maybe with challenges to moderation/filter decisions.
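As a sketch, the filter-then-human pipeline being described might look something like this (the thresholds and routing here are invented for illustration):

```python
def triage(filter_score: float, user_reported: bool = False) -> str:
    """Route a post based on an automated filter's 'badness' score (0..1)."""
    if user_reported:
        return "human review queue"      # reports always get a human look
    if filter_score > 0.95:
        return "auto-removed by filter"
    if filter_score < 0.50:
        return "auto-approved by filter"
    return "human review queue"          # filter unsure: refer to a person

print(triage(0.99))        # auto-removed by filter
print(triage(0.10))        # auto-approved by filter
print(triage(0.70))        # human review queue
print(triage(0.10, True))  # human review queue (report overrides filter)
```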

This comment has been deemed insightful by the community.
bob says:

Re: Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red

Add to that task a time limit, because every second another 1 blue and 9,999 red marbles are dumped into the pile. But you still must be sure to always get only the blue marble and no red ones, so grabbing handfuls hoping the blue one is in the pick won’t work. And if a blue marble stays in the pile longer than 1 hr you have also failed. I mean, that’s plenty of time, right? Oh, but you are human too, so at some point you must sleep, eat, and do all the other human things you normally do. Meanwhile, each second, more marbles are being dumped into the pile.

Yeah, try to keep up with that with 100% accuracy.

It would be much better if people just became civil and respectful enough to not throw in their blue marbles all the time. But that requires self-responsibility, and I don’t think the people of the world are adult enough to handle that.

This comment has been deemed insightful by the community.
Wyrm (profile) says:

Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho

Good enough analogy.
Can be improved with a reminder that those marbles are not just straight "blue" and "red", but shades of blue, red… and purple. Not to mention how some of them will appear more blue or red depending on the lighting.

There was an experiment described in an article here.
A few "posts" were given to a team of "moderators" to decide whether they should be moderated or not. I don’t remember anything about a time limit, but there were only a few samples of content, so "scale" was not a factor.
It still ended with different results for each sample. It was impossible to have the whole team reach unanimous decisions about approving or rejecting the content.

So, If I give you the task of rejecting all blue marbles and keep all red marbles, what are you going to do about this single marble I give you to sort, which happens to be purple?

Perfectly moderating at scale is indeed impossible because… of the scale.
But it’s already impossible because of subjectivity.

It shouldn’t stop sites from improving their moderation processes, but the public should stop asking for the impossible. Notify the site when an error is made, and hope they have the resources to investigate. On the other hand, if they don’t have the resources, they should not promise that they do. They should definitely be clear about their limits so that the public can set its expectations accordingly.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

The fact that companies make wrong decisions again and again does not imply that content moderation is not possible. (But please feel free to explain.)

Masnick’s Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well
(the thesis being that nobody agrees on what "well" should mean, because you’ll never be able to get everyone to agree on everything)

This comment has been flagged by the community.

Zof (profile) says:

I'm going to have to spend some time processing this one.

On the face of it… I mean, the folks that are overly sensitive make up a very tiny portion of any social network. Is it right to give that tiny fringe minority cancelling power? Doesn’t it make more sense to protect those sensitive people by shielding them from the real world? Wouldn’t that be about ten times easier?
It still feels like I’m missing something, given how elegant that all seems, and how much easier it would be than doing the opposite.

Anonymous Coward says:

Re: Re: Re:

"The obvious thing to do is kick bullies and bigots out, then tell them to go fuck themselves somewhere else"

Yes, but unfortunately Twitter lets bullies and bigots run amok, just because US Presidents appointed by High Lord Putin himself should benefit from a Nixonian-style "executive privilege" and be held beyond account.

It makes it really hard to keep a straight face when Melania claims to be against bullying.

R/O/G/S says:

Re: Re: Re: Re:

Bibi (do you mind if I call you that, Mr. Netanyahu?)

Please explain why you are so mad at Russia and The Chosen One, considering all the special favors Trump has curried for the Israel lobby, and for you specifically?

Then, there’s this recent evil executive order that no one is talking about:

https://www.breakingisraelnews.com/141446/trump-sign-executive-order-declaring-jewish-people-nation-just-written-genesis/

Anonymous Coward says:

Re: Re: Re:2 Re:

You also said "that’s the best solution". It’s not so obvious to me. Banning people feeds the us-vs.-them mentality, and the narrative of people being persecuted for their views. It does nothing to address the systemic problems that lead people to adopt those views and make those comments. Maybe it will even hurt, when they move to an echo chamber of a platform where there’s nobody to refute their views (or, if it’s a closed platform, even to watch what they’re up to).

Anonymous Coward says:

Re: Re: Re:4 Re:

I’m not the AC you’re looking for (hand wave), but I gave this some thought anyway.

At least for text-heavy content, a platform could invest in smarter content filtering that can detect things like offensive comments and prevent them from being posted. This would work about as well as it does in most video game chat lounges, but it would be an improvement over the current situation.
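A crude sketch of that kind of pre-posting filter (the word list and matching below are placeholders, far simpler than anything a real platform would deploy):

```python
import re

DENYLIST = {"slur1", "slur2"}  # placeholder terms, not a real word list

def allow_post(text: str) -> bool:
    """Block a comment before posting if it contains a denylisted word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return not (words & DENYLIST)

print(allow_post("have a nice day"))       # True: published
print(allow_post("you are such a slur1"))  # False: blocked before posting
```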

Of course, my idea has the usual privacy concerns of flagging content, and the questions of who can make the company block content they don’t want other people to see, etc. etc. You know the drill.

Wendy Cockcroft (profile) says:

Re: Re: Re:

The obvious thing to do is kick bullies and bigots out, then tell them to go fuck themselves somewhere else. I don’t know why you can’t grasp how that’s the best solution, but here we are.

First define the bullies and bigots. There’s the rub. For every Progressive complaining about abuse towards [protected group] there’s a gang of right-wingers complaining they’re being discriminated against. See Hobby Lobby, Chick-Fil-A, etc., for details.

Yes, I know it’s a case of "Stop hitting my fist with your face!" but whoever holds the levers of power can say and do what they want with impunity, so let’s not be handing them the tools to make our lives worse.

Stephen T. Stone (profile) says:

Re: Re: Re:

For every Progressive complaining about abuse towards [protected group] there’s a gang of right-wingers complaining they’re being discriminated against. See Hobby Lobby, Chick-Fil-A, etc., for details.

The difference between people boycotting/shittalking Chick-fil-A and people trying to push, say, queer people out of the public eye is that only one group is trying to actively hurt an entire segment of the population.

Wendy Cockcroft (profile) says:

Re: Re: Re:4 Re:

Boycotting is a perfectly legitimate way to express your displeasure. I won’t read any newspaper owned by Rupert Murdoch, and avoid doing anything that might enrich him further. Most lefties would agree with me on principle because he’s a horrible man. I also slag him and his collection of rancid rags off as and when the subject arises.

If every left-winger/progressive wants to boycott Chick-Fil-A because they think the company is run by horrible people, let them. I can understand people wanting to protect their interests and wouldn’t criticise them for doing so.

Wendy Cockcroft (profile) says:

Re: Re: Re:3 Re:

^This.

only one group is trying to actively hurt an entire segment of the population.

People of faith are being clobbered because we’re not a protected group. I’m sorry for your troubles and agree that it’s no fun being a target. I’ve been one myself and can sympathise. I don’t think it’s right to bash any group of people and make their lives a misery. I daresay you’d say the same thing, but at that point terms and conditions apply. Anyone we believe to be actively harmful to vulnerable people tends to be put on the unwritten list of people we won’t bother protecting, e.g. white supremacists.

Unfortunately, deciding who deserves to be put in a protected class is subject to the prevailing fashions of the day. It’s currently fashionable to bash people of faith, radical feminists (who annoy me too, as it happens; I hate cruelty), and people who pick a side in the culture wars, depending on who’s in the majority/minority.

I often vote your comments insightful, Stephen, but today I think you’re calling it wrong; the story is much bigger than your own experiences.

Anonymous Coward says:

Re: I'm going to have to spend some time processing this one.

What you’re missing is that by excluding your "overly sensitive folks" entirely you are engaging in the worst kind of bullying. Everybody deserves the same rights and abilities as everyone else, limited only by their own limitations and whether they’ve proven themselves incapable of interacting peaceably with others. This latter category are often known as sociopaths. Such as yourself.

Anonymous Coward says:

Re: I'm going to have to spend some time processing this one.

Yeah, but your theory would upset the power imbalance of the tyranny of the minority, who, coincidentally of course, control 70% of the world’s resources, aka the One Percent, whoever they are, and who are extremely sensitive about criticism of their shitty, genocidal, slaughtering ways.

I suspect these hidden entities might be pink, hairless talking rats, but I cannot yet provide evidence. Logically, only something that pathetic and scurrilous could be that sensitive.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Distilled, it comes down to this:

You missed one:

3) Do I placate the advertiser at the expense of throwing the users, good or bad, under the bus?

That’s the reality… these companies aren’t in business to support free expression, justice, liberty, equality or anything else – they’re in business to push intrusive display advertising in your face. They’re a mercenary front, nothing more. And that’s pretty much the entire commercial Internet.

Anonymous Coward says:

Re: Distilled, it comes down to this:

Define ‘overly sensitive’.

Remember, this plan included people with physical disabilities too. If anything, they would rightfully be upset at being bullied for things completely out of their control. So the response is to isolate the potential victim (in a way that may not even stop the potential bully, might I add)?

Or is anybody who would get upset by bullying automatically ‘overly sensitive’?

This comment has been deemed insightful by the community.
bob says:

Re: Distilled, it comes down to this:

After the bullied leave, who do you think the bully will go after next? A different minority on your platform. Then after you drive them out, the bully picks the next group, and it keeps going until only the bully is left. Then the bully leaves and your platform is dead.

Congrats, you just demolished your business and income source because you didn’t stand against bad behavior on your platform.

Anonymous Coward says:

Re: Re: Distilled, it comes down to this:

Cyberbullying isn’t going to kill a public internet platform as popular as TikTok. TikTok’s problems with bullies and bigots aren’t even as bad as YouTube comments. The trolls are outnumbered by a factor of thousands, if not more.

TikTok has hundreds of millions of users and is growing rapidly. Even if 1% of those users were bullies who spent their entire time on TikTok being awful to everyone else, they still could never reach a significant portion of the rest of the user base. Even when brigading they can only manage to reach individuals or smaller groups in the community for any extended time, which is what they do.

So TikTok tried to make it harder to discover and find the people that are frequent targets. It’s arguably the only moderation option that can be done proactively, though that choice should really be opt-in and left to their users.
