Be Cautious About Big Internet Platforms Bearing Plans For Global Censorship

from the let's-not-get-carried-away-here dept

In the wake of the Christchurch shooting massacre in New Zealand, there has been a somewhat odd focus on the internet platforms — mainly those that ended up hosting copies of the killer’s livestream of the attack. As we previously discussed, this is literally blaming the messenger, and taking away focus from the much deeper issues that led up to the attack. Still, in response, Microsoft’s Brad Smith decided to step forward with a plan to coordinate among big internet companies a system for blocking and taking down such content.

Ultimately, we need to develop an industrywide approach that will be principled, comprehensive and effective. The best way to pursue this is to take new and concrete steps quickly in ways that build upon what already exists.

Smith points to an earlier agreement between YouTube, Facebook, Twitter and Microsoft to form GIFCT, the Global Internet Forum to Counter Terrorism, by which the various big platforms share hashes of content deemed “terrorist content” so they can all spot it across their platforms. Here, Smith suggests expanding that effort:

We need to take new steps to stop perpetrators from posting and sharing acts of violence against innocent people. New and more powerful technology tools can contribute even more than they have already. We must work across the industry to continue advancing existing technologies, like PhotoDNA, that identify and apply digital hashes (a kind of digital identifier) to known violent content. We must also continue to improve upon newer, AI-based technologies that can detect whether brand-new content may contain violence. These technologies can enable us more granularly to improve the ability to remove violent video content. For example, while robust hashing technologies allow automated tools to detect additional copies already flagged as violent, we need to further advance technology to better identify and catch edited versions of the same video.

We should also pursue new steps beyond the posting of content. For example, we should explore browser-based solutions, building on ideas like safe search, to block the accessing of such content at the point when people attempt to view and download it.

We should pursue all these steps with a community spirit that will share our learning and technology across the industry through open source and other collaborative mechanisms. This is the only way for the tech sector as a whole to do what will be required to be more effective.
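The hash-sharing approach Smith describes can be sketched in a few lines. This is a deliberately simplified illustration using exact cryptographic hashes; PhotoDNA itself uses proprietary perceptual hashing designed to survive re-encoding and minor edits, and GIFCT's actual hash database is not public, so the blocklist contents here are hypothetical:

```python
import hashlib

# Shared industry blocklist: hashes of content already flagged as violent.
# (Hypothetical entries; the real GIFCT hash database is not public.)
BLOCKLIST = {
    hashlib.sha256(b"known-flagged-content").hexdigest(),
}

def hash_content(data: bytes) -> str:
    """Compute a digital identifier (here, SHA-256) for a piece of content."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Check an upload against the shared hash database."""
    return hash_content(data) in BLOCKLIST

print(is_flagged(b"known-flagged-content"))   # exact copy: True
print(is_flagged(b"known-flagged-content!"))  # edited version: False
```

The second check also illustrates exactly the limitation Smith raises: with exact hashing, a one-byte edit produces a completely different hash and evades detection, which is why he argues for more robust matching that can catch edited versions of the same video.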

Some of this may be reasonable, but we should be careful. As Emma Llanso neatly lays out in a series of tweets, before we expand the power and role of GIFCT, we should take care of many of the existing concerns with the program. Here’s a (lightly edited) transcription of Llanso’s concerns:

In Brad Smith’s post on Microsoft’s response to the New Zealand attacks, we see another example of a company promoting an expanded role for the GIFCT without addressing any of the long-standing transparency and accountability issues with the consortium. Smith makes several proposals to further centralize and coordinate content-blocking by major tech companies and fails to include any real discussion of transparency, external accountability to users, or safeguards against censorship.

The closest he gets is describing a “joint virtual command center” of tech companies to coordinate during major events, which would enable tech companies to ensure they “avoid restricting communications that [tech companies unilaterally] decide are in the public interest”. Public interest must be part of the analysis but media orgs & nations have come to different conclusions about how to cover the NZ attacks. It’s naive to suggest that a consensus view of “public interest” could, much less ought, be set by a consortium of US-born tech companies.

There’s also a chilling call to “explore browser-based solutions” to block people’s ability to view or download content, with no recognition of how dangerous it is to push censorship deeper into infrastructure. “Safe-search” is user-controlled; would MSFT’s terror-block be as well?

Smith is calling for discussion about how tech can/should be involved in responding to terrorism, which is reasonable. But any discussion that fails to include transparency and safeguards against censorship, from the very beginning, is irresponsible. I know that many people’s instincts right now are focused on how to take more content down faster, but as Smith notes, “the public rightly expects companies to apply a higher standard.” Takedown policies without safeguards are incomplete and are not “solutions”.

Llanso makes a number of good points here, but a key one to me: while coordination and agreement to act together may sound like a good way to approach global scale issues that can move from platform to platform, it also suggests that there is only one solution to such content (which is to outright ban it across all platforms). That takes out of the equation more creative or alternative approaches. It also takes out context. As we’ve discussed before, in some cases someone’s “terrorist content” is actually evidence of war crimes that it might be useful for someone to have.

Yes, lots of people are rightly concerned that videos and manifestos related to attacks may inspire copycat (or worse) attacks. But trying to stuff the entire thing down the memory hole in a single coordinated plan — where the big internet platforms are the final arbiters of everything — hardly seems like the right solution either. Indeed, taking such a position actually makes it that much harder for different platforms to experiment with different, and possibly more effective, ways of dealing with this kind of content.



Comments on “Be Cautious About Big Internet Platforms Bearing Plans For Global Censorship”

34 Comments
Anonymous Coward says:

Re: Ah good old witch-hunts...

It would seem that ‘violent video games’ are taking a backseat as ‘the source of all corruption’, with ‘open internet platforms’ taking their place as the scapegoat.

That goat died for our sins. We marched it to the edge of town, blamed it for all the sinning we’ve done this year, and banished it to wander the desert until it dies.

We’re sin free!

Stupid evil goat.

Anyone for some infidelity? I’ve got some new derivatives. Wraps up sub-prime medical bills with emerging cryptocurrencies.

That One Guy (profile) says:

Re: 'Nice platform you got there...'

Could be because a lot of the push is due to government pressure, ‘step up, do the impossible, or we’ll break out the regulations’, and when the actions being discussed are motivated less by the companies themselves and more to avoid government regulation in a sense there is government involvement.

If there weren’t politicians insisting that companies ‘Nerd harder or else!’, and they were just doing this stuff on their own then yeah, wouldn’t really be a free speech issue in that way, but as there is it’s not unfair to put it into that category.

Anonymous Coward says:

Re: Re: 'Nice platform you got there...'

Yeah, it is worse as a chilling effect than even really shitty laws, and worse for the rule of law. If there were rules they could appeal or take action. Worse, there is never a line for "enough," and it is essentially weaponizing corporate fear of uncertainty.

They have their own culpability of course for bending instead of calling them on their bullshit publicly and unrelentingly. What they should be doing is hitting the bully back, humiliating them and making them cry.

Press them for details, tear their non-solutions apart along with the ugly implications of the policies, and be downright mean. So how exactly will taking down content about people doing bad things bring the dead back to life?

If it is possible to detect terrorism this easily maybe your intelligence agencies should have developed the ability to do that already with their budget and mass privacy violations?

Anonymous Coward says:

Re: Re:

No, no, no. While perhaps not as closely tied to say, the CCP, these companies are tied to the hip with governments at this point. So, at the very least, we have to treat these corporate entities as proxies for governmental censorship.

It’s like copyright law being outdated for the internet age; so are privacy protections and free speech laws. In many ways, the big internet corporations are worse for free speech, since they have more direct ability to do it THAN EVER BEFORE IN HUMAN HISTORY, AND they totally lack any accountability.

Anonymous Coward says:

The New Zealand shooter live streamed the attack and put out written material stating his intent to create a spectacle so large that Western governments would restrict freedoms. The hope being that restricting freedoms might lead to increased hostility from unreasonably restricted citizens, who would afterward take up the fight for those curtailed freedoms.

New Zealand government picks up that ball and runs forward with the objectives of the shooter.

Tech companies, not to be out-done, jump on board with ways that they can help restrict human rights because there was once a shooter.

I’m thinking of Charlie Manson and his similar helter skelter plan to try and kick off civil wars and wondering if governments and companies tripped all over themselves back then as well to give the murderer their every demand.

The lesson sent by the response from New Zealand and tech giants is that violence is rewarded. Need a law stating all knee-jerk responses are banned for a year after an event like this, so there is opportunity for discussion and avoid acting on emotion. Shameful response from politicians.

That One Guy (profile) says:

Re: 'Never let a good tragedy go to waste'

It rather reminds me of a quote I remember attributed to Bin Laden a number of years back, talking about how all he had to do was attack the US once and it would tear itself apart from the inside, and would you look at that, both then and now you’ve got politicians tripping over themselves to crack down on the public, handing the assholes everything they could have possibly dreamed of.

When a violent attack is guaranteed to result in instant national/global fame for those responsible, and the governments(local and up) losing their minds and cracking down on freedoms of the public, it’s hardly a wonder you’ve got assholes pulling stunts like this, because whether they’re looking for attention and/or panic they know it works.

Along those lines, while it obviously wouldn’t solve everything, I can’t help but wonder how much a global application of the Some Asshole Initiative would help. No fame, no deep dives into motivation/race/age, any shooter or idiot with a bomb is reduced to nothing more than ‘Some asshole’. If nothing else, it would certainly improve the reporting on stuff like this by removing the fame granted to any asshole who shoots/blows up a bunch of people, and that alone would be an improvement.

Scary Devil Monastery (profile) says:

Re: Re: 'Never let a good tragedy go to waste'

"It rather reminds me of a quote I remember attributed to Bin Laden a number of years back, talking about how all he had to do was attack the US once and it would tear itself apart from the inside, and would you look at that, both then and now you’ve got politicians tripping over themselves to crack down on the public, handing the assholes everything they could have possibly dreamed of. "

Similar to how IS recruiters applauded Trump’s victory, the same way Al-Qaeda cheered for GWB. They were counting on turning Muslims in the US into a persecuted minority they could recruit from, with some success, apparently.

If you are an extremist the first thing you want is a bigger pool of dissatisfied potential recruits. And the best way to do that is to turn your 3rd world land grab into a global conflict of ideology.

What bugs me is that back when the terrorists were the PLO, the IRA, the Basques, Baader-Meinhof, etc., every western leader was adamant that we would never allow the terrorists to force us to change our society.

Now look at us. IS committed an atrocity and we fell all over ourselves digging ourselves into a surveillance society. Some asshat murders a bunch of people in NZ and governments suddenly compete in changing society the way he probably intended.

And scared shitless over a spate of ambiguous potential censorship legislation, private corporations and news agencies fall in line out of fear and anxiety.

Anonymous Coward says:

Here’s a question I don’t think I’ve seen raised. Where does video of police shooting someone fall in all this? Those are videos of people being killed too. Might even be live-streamed, by bystanders or victims. Such videos have proven important in holding police accountable in a variety of ways such as by not letting police PR sweep officer mistakes and misbehavior under the rug with false accusations.

So are those considered "violent video content" to be aggressively removed?

Stephen T. Stone (profile) says:

Re: Re:

Then we have violent video game and movie content that could be dinged despite it being digital blood being spilled by digital people. Everything from the Fatalities in Mortal Kombat 11 to the action scenes in John Wick could be declared “violent video content” and removed, despite the fact that such content shows fictional people being “killed”. I have to wonder how much of that content will be labeled as such — and how much companies will pay to “undo” that label because hey, even they need views on YouTube.

norahc (profile) says:

Re: Re: Re:

"It is a very good question indeed, and it is one that the people pushing such initiatives, as well as the governments that are pressuring them to do so, don’t give a single fuck about attempting to answer in a coherent, rational manner."

Oh they do care….it’s just that they care about hiding law enforcement misdeeds from the public. Police unions are going to love this. Video shows cop shooting unarmed suspect in the back? Terrorist content….it must be removed.

Frank Lee Dodgy says:

You're telling me? -- You're telling ME, Google-boy?

About Global Censorship? After I’ve been censored for years right here at Techdirt, in part because of railing at the dangers posed by global mega-corps, which were started for that specific purpose, are implementing it a little more every day, and are visibly merging with gov’t too?

No, Masnick. Not even your fanboys are going to believe that you’re suddenly worried about this.

Not after your long history of advocating absolute arbitrary control by corporations:

"And, I think it’s fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml

The only way to deal with the inevitable control by BIG entities is to piece them up. — AND don’t let globalism grow in first place!

Corporations are NOT persons, contrary to what YOU believe and advocate, and do not even have a "Right" to exist at all: they first ask The Public for permission to exist, and agree to operate under rules in The Public’s marketplace. — NOT politics: to offer products and services. — They’re to serve The Public and operate by Our rules for general societal benefits, not for narrow and solely financial interests.

But as always Masnick’s implicit premise is that these mega-corporations must not be limited — that might stifle "innovation". He totally ignores that the "innovation" nowadays is only of new ways to surveil and control. — And Masnick NEVER calls for any action that would adversely affect their profits.

Igualmente69 (profile) says:

"Yes, lots of people are rightly concerned that videos and manifestos related to attacks may inspire copycat (or worse) attacks."
It is understandable that people in fear react in such ways, but the reality is that there is almost no evidence whatsoever that these things actually happen on any scale.
The correct response on the part of the tech companies, at least in America, is nothing. Freedom of expression means people are allowed to produce media that the majority of the populace finds distasteful.
In addition, suppressing the evidence of crime does not prevent the crime from happening again in the future, despite the continuous stream of lies from politicians in various countries, and in fact might make it harder to catch perpetrators.

Anonymous Coward says:

Censorship is wrong, no doubt about that. If you have been warned, you have the option to tune out. I’ve seen much worse in Hollywood productions. I guess ratings will be the next thing. Christchurch is pornography; if you don’t believe it, check with Daniel Webster. That doesn’t mean viewing it should be unlawful, just not my preference.
