Zoom & China: Never Forget That Content Moderation Requests From Government Involve Moral Questions

from the lessons-to-learn dept

If you’ve been around the content moderation/trust and safety debates for many years, you may remember that in the early 2000s, Yahoo got understandably slammed for providing data to the Chinese government that allowed it to track down and jail a journalist who was critical of that government. This was a wake-up call for many about the international nature of the internet — and the fact that not every “government request” is equal. This, in fact, is a key point that is often raised in discussions about new laws requiring certain content moderation rules to be followed — because not all governments look at content moderation the same way. And efforts by, say, the US government to force internet companies to “block copyright infringement” can and will be used by other countries to justify censorship.

The video conferencing software Zoom is going through what appears to be an accelerated bout of historical catch-up as its popularity has rocketed thanks to global pandemic lockdowns. It keeps running into problems that tons of other companies have gone through before it — with the latest being, as stated above, that requests from all governments are not equal. It started when Zoom closed the account of a US-based Chinese activist who used Zoom to hold an event commemorating the Tiananmen Square massacre. Zoom initially admitted that it shut down the account to “comply” with a request from the Chinese government:

“Just like any global company, we must comply with applicable laws in the jurisdictions where we operate. When a meeting is held across different countries, the participants within those countries are required to comply with their respective local laws. We aim to limit the actions we take to those necessary to comply with local law and continuously review and improve our process on these matters. We have reactivated the US-based account.”

That response did not satisfy anyone, and as more and more complaints came in, Zoom put out a much better response, which more or less showed that it is coming up to speed on a ton of lessons that have already been learned by others (at the very least, it suggests the company should hire some experienced trust and safety staffers…). More importantly, though, Zoom admits that in taking the Chinese government’s requests at face value, it “made two mistakes”:

We strive to limit actions taken to only those necessary to comply with local laws. Our response should not have impacted users outside of mainland China. We made two mistakes:

  • We suspended or terminated the host accounts, one in Hong Kong SAR and two in the U.S. We have reinstated these three host accounts.
  • We shut down the meetings instead of blocking the participants by country. We currently do not have the capability to block participants by country. We could have anticipated this need. While there would have been significant repercussions, we also could have kept the meetings running.

There are reasonable points to be made that a company like Zoom should have anticipated issues like this, but you can at least give the company credit for admitting (directly) to its mistakes and coming up with plans and policies to avoid repeating them in the future.
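Zoom’s second admitted mistake above (killing whole meetings because it could not block participants by country) points at a fairly standard capability: filtering join requests by the geolocation of the connecting IP address. As a minimal, purely illustrative sketch (not Zoom’s actual code; the Meeting type, may_join check, and geolocate helper are all hypothetical), country-scoped blocking could look something like this:

```python
# Purely illustrative sketch of country-scoped participant blocking for a
# conferencing service. Every name here is hypothetical -- this is not
# Zoom's actual API or implementation.

from dataclasses import dataclass, field
from typing import Callable, Set

@dataclass
class Meeting:
    meeting_id: str
    # ISO 3166-1 alpha-2 codes of countries whose participants must be
    # excluded, e.g. because a legal demand applies only inside that country.
    blocked_countries: Set[str] = field(default_factory=set)

def may_join(meeting: Meeting, participant_ip: str,
             geolocate: Callable[[str], str]) -> bool:
    """Admit a participant unless their IP geolocates to a blocked country.
    The meeting itself keeps running for everyone else."""
    country = geolocate(participant_ip)
    return country not in meeting.blocked_countries

# Usage with a stand-in geolocation function. A real service would consult a
# maintained geo-IP database, which has accuracy limits and can be evaded
# with VPNs -- geo-blocking narrows a legal demand, it does not perfect it.
if __name__ == "__main__":
    meeting = Meeting(meeting_id="memorial-event", blocked_countries={"CN"})
    fake_geolocate = lambda ip: {"203.0.113.5": "CN"}.get(ip, "US")
    print(may_join(meeting, "203.0.113.5", fake_geolocate))   # False
    print(may_join(meeting, "198.51.100.7", fake_geolocate))  # True
```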

But there is a larger point here that often gets lost in all these discussions about trust and safety and content moderation. So much of the debate focuses on the assumptions that (1) requests to block or take down content or accounts are made in good faith, and (2) those making the requests share similar values. That’s frequently not the case at all. We’ve shown this over and over again here on Techdirt, where laws against “fake news” are used to silence critics of a ruling class.

So for anyone pushing for laws that require internet companies to somehow ban or block “bad behavior” and “bad actors,” you need to be able to come up with a definition of those things that won’t be abused horribly by authoritarian governments around the globe.

Companies: zoom


Comments on “Zoom & China: Never Forget That Content Moderation Requests From Government Involve Moral Questions”

25 Comments
This comment has been deemed insightful by the community.
Anonymous Anonymous Coward (profile) says:

They ain't seen nothin' yet

Just how will Zoom handle the request to shut down a meeting when some IP maximalist discovers that one of the meeting participants has a radio playing in the background with some music (already paid for, BTW) they claim to own the copyright on? That the cows might hear it has been used already, but I am sure the maximalists will come up with some better excuse to claim that it is a ‘secondary’ performance requiring additional fees.

Not being a user, can one record a Zoom meeting, then post it to YouTube? There, the problem would be the YouTube user for posting it (with YouTube in the middle of course) but I could see a dedicated maximalist trying to draw Zoom into things for having aided and abetted the recording.

/serious or /sarcastic…could go either way

Anonymous Coward says:

Re: They ain't seen nothin' yet

Answer:
Yes, a registered account (vs. free access) can record a Zoom meeting. It is not particularly well compressed, but you can do this thing.

And once you have the recording, you can do with it anything you could do with other such recordings.

You still have the problem of China, e.g., ordering the video blocked in China via Content ID, but then you always have that. You also have the normal maximalist problems of the "if there is background music, then it must be banned" sort of thing, or "we reported using this footage so it all belongs to us now".

Upstream (profile) says:

At least they are trying

I have not participated in Zoom, for the same reasons I don’t participate in Facebook, Twitter, or the like. Micah Lee had some unflattering things to say about Zoom here and here. What I found on Zoom’s Privacy and Security page seemed to be largely non-specific fluff, and the term "metadata" is not mentioned at all.

I do find it very refreshing and encouraging that, instead of stonewalling or engaging in double-speak, they have been straightforward about admitting errors. From this article and other things I have read about Zoom, they are indeed trying to "catch up," and that is a very good sign.

vilain (profile) says:

Could it be because their dev team is in China?

Zoom has said previously, when it had to reconsider its security model in light of increased free-tier use, that a lot of its dev team is in China. I wonder if their initial willingness to please the PRC was so their dev team stays in place and isn’t arrested, tried, and thrown in prison.
WebEx originally came from a Chinese dev team and they still develop that product, so Cisco has the same problem.

Anonymous Coward says:

The problem is Western companies have to comply with laws in Russia and China, or their apps will be blocked and no one will get to use them. So the choice is to block or censor some users, or lose all your customers in that country.

Laws about copyright can be used to censor users or remove content. Many videos on YouTube that are legal in terms of fair use and commentary are taken down using the DMCA; e.g., a two-hour video can be taken down because it has a few seconds of music in it.

Zoom is not like YouTube: most videos are only seen by the people who are in the meeting, and you can only join the meeting using a password. In most cases there is no reason to record the video. Most videos on YouTube, by contrast, can be viewed by anyone unless the video is set to private.

The reason Zoom took off is that it is easy for anyone to use; Skype has been redesigned a number of times, which makes it hard to use, and you need to sign up with Microsoft to use it. I don’t think music companies will start going after Zoom, since most of the videos are not public, e.g. they don’t qualify as a public performance.

tz1 (profile) says:

They just need a rage mob.

The problem is there is already a de facto ban on some speech and points of view, generally conducted by a rage mob reporting that some post is “icky”, that the person is “icky”, and that being “icky” goes against something in the terms of service.

And the terms of service are intentionally kept ambiguous so the enforcement will be arbitrary. These contracts of adhesion are exactly like the laws they propose to stop “icky” speech from the other side. They usually use the term “offensive”, but then say it is hateful too, but it only goes one way.

The enforcers almost all voted for Hillary, and those that didn’t voted for the Green party or are still far to the left. They ban what offends them personally, not what might violate the actual rules, but the ambiguity gives them cover since they never have to explain which terms of service were violated, or how, just that they were. That would happen with any attempt at a law, but it is happening now.

And usually just after some rage mob wants someone’s scalp. Does anyone remember what SPECIFIC thing Alex Jones was banned everywhere for (including Apple, apparently it went up to Tim Cook)?

Sarah Jeong’s tweets were more offensive than anything Milo tweeted, but Milo got banned. Jeong’s tweets, AFAIK, are still up. Candace Owens simply did a find-and-replace of “white” to “black”, and those otherwise identical tweets earned her a suspension.

Few if any complain about vertical moderation – offensive terms, profanity, vulgarity, language suggesting violence, etc. – like G vs. PG vs. PG-13 vs. R vs. X, as the rules can be fairly clear. Even Gab does that to some extent with its NSFW flag (as do Minds and Parler, IIRC).

The problem is the horizontal moderation, where the ban limit is anything to the right of a left-moving boundary. Or vice versa, but I know of no platform which does so.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: They just need a rage mob.

The problem is there is already a de facto ban on some speech and points of view,

Can you set up your own blog, or find a platform that accepts those points of view? If so, there is no ban on that speech. If you cannot attract an audience, or the platform has a small number of users, then you have a measure of the popularity of such speech.

This comment has been flagged by the community.

Anonymous Coward says:

TechDirtTopia

Monopoly capital/imperialism is an irrational system. It is not organized to meet human needs. It is run by a very small ruling class whose only morality is the morality of the maximum profit.

Join us in MASNICKI, a new independent country, located in Masnick’s basement apartment. We put pillows all around the edges, and covered the whole thing with a blanket, and now we have an INDEPENDENT COUNTRY! Yesiree Baghdad Bob, we are INDEPENDENT!

WE HAVE A DREAM! THE TECHDIRT DREAM! WE SHALL OVERCOME!

Scary Devil Monastery (profile) says:

Re: Re: Re: TechDirtTopia

So when he’s sullenly whining all the time and calling people bad names half-heartedly, he’s in withdrawal, and when he churns out wordwalls that degenerate into incoherent screaming and actual death threats, he’s high as a kite?

Well, I would have figured it was all some form of schizophrenia but your hypothesis is plausible enough.

Toom1275 (profile) says:

Re: Re: Re:2 TechDirtTopia

Meth use can cause paranoid delusions, including hallucinations of gangs of unknown people attacking the user.

I’ve seen stories ranging from a guy caught trying to break into a victim’s house, claiming he needed safety from the gang of Mexicans chasing him down the (empty) street, to kids flipping out and going siege-mode against an invisible enemy and instead shooting each other, to someone imagining that occasional welfare checks by police are actually a conspiracy of harassment plotted against him.

That One Guy (profile) says:

Still sound like a good idea?

So for anyone pushing for laws that require internet companies to somehow ban or block "bad behavior" and "bad actors," you need to be able to come up with a definition of those things that won’t be abused horribly by authoritarian governments around the globe.

Or as I like to call it, the ‘Turnabout is fair play’ test.

‘Would you still support a position, argument, and/or power if your absolute worst enemy were able to make use of it, potentially against you, to its fullest extent the second it was available? If not, then you probably shouldn’t be in favor of it.’
