Mike Masnick’s Techdirt Profile


About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick

Posted on Techdirt - 26 April 2018 @ 11:53am

Supreme Court Says Of Course The Patent Office Can Admit It Made A Mistake And Dump Bad Patents

from the phew dept

For the second time in two years, the Supreme Court has needed to weigh in and note that, of course, the US Patent Office can take another look at the crappy patents it already granted, recognize its mistake, and void the patents. A little less than two years ago, in Cuozzo v. Lee, it looked at what standards the Patent Trial and Appeal Board (PTAB) could use in the Inter Partes Review (IPR) system created by the America Invents Act of 2011. The latest case was much broader: challenging whether the IPR/PTAB process itself was Constitutional.

The basic idea behind the IPR process was an admission that the USPTO is historically bad at properly reviewing patents before granting them. It grants a lot of bad patents. The IPR process allows anyone to present evidence to the PTO that it made a mistake and granted a patent that should never have been granted. If the PTAB is convinced, it can invalidate the patent. Seems pretty straightforward. Except that the usual patent lovers (mainly patent trolls and big pharma) insisted that this was some sort of unconstitutional taking of property, without the review of a court. This is wrong for a whole bunch of reasons -- starting with the incorrect view of patents as traditional "property."

The Supreme Court ruled on the issue, in a case called Oil States Energy Services v. Greene's Energy Group, and basically said that of course the PTAB can invalidate patents this way. Justice Thomas wrote the majority opinion in a 7-2 split (Gorsuch and Roberts dissented). The key issue was whether invalidating patents is reserved only for the courts, and most of the Justices don't see any support for that. In short, the majority opinion says what the Patent Office gives, the Patent Office can take away...

This Court has recognized, and the parties do not dispute, that the decision to grant a patent is a matter involving public rights—specifically, the grant of a public franchise. Inter partes review is simply a reconsideration of that grant, and Congress has permissibly reserved the PTO’s authority to conduct that reconsideration. Thus, the PTO can do so without violating Article III.

The majority opinion points out that the Constitution doesn't require the Judicial branch to weigh in on the granting of patents, and thus it need not weigh in on the invalidating of those patents either.

Inter partes review is “a second look at an earlier administrative grant of a patent.” ... The Board considers the same statutory requirements that the PTO considered when granting the patent.... Those statutory requirements prevent the “issuance of patents whose effects are to remove existent knowledge from the public domain.” ... So, like the PTO’s initial review, the Board’s inter partes review protects “the public’s paramount interest in seeing that patent monopolies are kept within their legitimate scope,”... Thus, inter partes review involves the same interests as the determination to grant a patent in the first instance.

The court reasonably compares it to other administrative processes where the government can grant things, such as franchises, and later revoke or amend those franchises.

An important part of the ruling is that patents are not property in the traditional sense, and this isn't a case of removing someone's property without the involvement of the Judicial Branch. Specifically, the court says that patents are really more similar to a franchise right than a traditional property right:

Patents convey only a specific form of property right—a public franchise.... And patents are “entitled to protection as any other property, consisting of a franchise.” Seymour, 11 Wall. at 533 (emphasis added). As a public franchise, a patent can confer only the rights that “the statute prescribes.” Gayler, supra, at 494; Wheaton v. Peters, 8 Pet. 591, 663–664 (1834) (noting that Congress has “the power to prescribe the conditions on which such right shall be enjoyed”). It is noteworthy that one of the precedents cited by Oil States acknowledges that the patentee’s rights are “derived altogether” from statutes, “are to be regulated and measured by these laws, and cannot go beyond them.”

And while Oil States points to some earlier rulings saying that only the courts can invalidate a patent, the Supreme Court correctly notes that the cases it points to were decided under the Patent Act of 1870 -- and under that Act, courts were necessary. But there is no Constitutional prohibition on Congress setting up different rules for the patent system, and with the America Invents Act of 2011, Congress decided to allow for this administrative review.

Amusingly, the court also takes Oil States to school for a history lesson. The company had argued that historical principles established that only a court could invalidate a patent, pointing to cases decided in 18th-century England. But the Supreme Court, citing Mark Lemley, points out that Oil States not only gets its history wrong, but that the actual history shows England had something quite similar to an Inter Partes Review process:

But this history does not establish that patent validity is a matter that, “from its nature,” must be decided by a court.... The aforementioned proceedings were between private parties. But there was another means of canceling a patent in 18th-century England, which more closely resembles inter partes review: a petition to the Privy Council to vacate a patent. See Lemley, supra, at 1681–1682; Hulme, Privy Council Law and Practice of Letters Patent for Invention From the Restoration to 1794, 33 L. Q. Rev. 63 (1917). The Privy Council was composed of the Crown’s advisers.... From the 17th through the 20th centuries, English patents had a standard revocation clause that permitted six or more Privy Counsellors to declare a patent void if they determined the invention was contrary to law, “prejudicial” or “inconvenient,” not new, or not invented by the patent owner. ... Individuals could petition the Council to revoke a patent, and the petition was referred to the Attorney General. The Attorney General examined the petition, considered affidavits from the petitioner and patent owner, and heard from counsel.... Depending on the Attorney General’s conclusion, the Council would either void the patent or dismiss the petition....

The Privy Council was a prominent feature of the English system. It had exclusive authority to revoke patents until 1753, and after that, it had concurrent jurisdiction with the courts.... The Privy Council continued to consider revocation claims and to revoke patents throughout the 18th century.

Just a note: if you're going to cite English legal history at the US Supreme Court to try to establish some sort of general and accepted rule, it's best if you don't skip out on the fact that the actual history supports the other side. Just saying.

The final argument from Oil States that the court rejects is that the PTAB is unconstitutional because it looks too much like an Article III court without being an actual court. But the majority opinion basically says "so what?"

But this Court has never adopted a “looks like” test to determine if an adjudication has improperly occurred outside of an Article III court. The fact that an agency uses court-like procedures does not necessarily mean it is exercising the judicial power. See Freytag, 501 U. S., at 910 (opinion of Scalia, J.). This Court has rejected the notion that a tribunal exercises Article III judicial power simply because it is “called a court and its decisions called judgments.”

The dissent from Gorsuch (with Roberts signing on) is all over the place. They seem taken in by the myth of patent greatness, kicking off the dissent with the following:

After much hard work and no little investment you devise something you think truly novel. Then you endure the further cost and effort of applying for a patent, devoting maybe $30,000 and two years to that process alone. At the end of it all, the Patent Office agrees your invention is novel and issues a patent. The patent affords you exclusive rights to the fruits of your labor for two decades. But what happens if someone later emerges from the woodwork, arguing that it was all a mistake and your patent should be canceled? Can a political appointee and his administrative agents, instead of an independent judge, resolve the dispute? The Court says yes. Respectfully, I disagree.

But... that makes no sense. Congress, which has the power to set up the patent system to "promote the progress of the useful arts" has determined (reasonably) that since the Patent Office grants lots of bad patents, it can also review and invalidate its mistakes. That's not a "dispute that needs to be resolved." It's an administrative function that doesn't end once the patent is granted.

Gorsuch seems to think people will read the majority's narrow ruling as a license to more or less decimate the courts and move lots of disputes into administrative processes. That's silly. The majority opinion is quite clear that it is narrowly focused on the issue before it: Congress's authority to enable the Patent Office to invalidate patents. As for the history lesson mentioned above, Gorsuch claims it no longer applies because the English Privy Council stopped invalidating patents in the 18th century. That's true, but meaningless. The whole reason English patent history came up in the first place was because Oil States tried to argue that it was the natural order of patents that only courts could remove them. What they really meant is that it was the natural order starting in the late 18th century... which is a lot less convincing if you're trying to argue a form of "we've always done it this way." The point is... we haven't.

This is a big and important win, protecting everyone from bad patents. The IPR process has been shown to be a tool that is at least somewhat effective in dumping bad patents. We should all strive for a situation in which the PTO doesn't grant bad patents in the first place, but given that it does, it should be able to acknowledge its mistakes and correct them.

Of course, that won't stop patent trolls and big pharma from flipping out. Even the site FiercePharma seems to be completely off its rocker in calling this ruling a "blow for pharma" and describing the IPR process as "hated." Except that's silly. IPR is only hated by those with junk patents. If pharma hates it, it's because they know their patents are junk.

As that article makes clear, though, the powerful pharma lobby still intends to pressure Congress to kill the IPR process as quickly as possible:

In a statement, PhRMA spokesperson Nicole Longo said the "narrowly tailored decision" found only that IPRs are constitutional, not "efficient or fair." The arguments and a Tuesday ruling in another case—SAS Institute v. Iancu—mean it's "clear there are problems with the IPR process that need to be addressed," she added.

Amusingly, the article also notes that most of the big pharma companies have used the IPR process themselves to invalidate the patents of others, suggesting that maybe they don't actually "hate" it that much after all...


Posted on Techdirt - 26 April 2018 @ 10:42am

Software Legend Ray Ozzie Thinks He Can Safely Backdoor Encryption; He's Very Wrong

from the and-dangerous dept

There have been ongoing debates for a while now about the stupidity of backdooring encryption, with plenty of experts explaining why there's no feasible way to do it without causing all sorts of serious consequences (some more unintended than others). Without getting too deep into the weeds, the basic issue is that cryptography is freaking difficult and if something goes wrong, you're in a lot of trouble very fast. And it's very, very easy for something to go wrong. Adding in a backdoor to encryption is, effectively, making something go wrong... on purpose. In doing so, however, you're introducing a whole host of other opportunities for many, many things to go wrong, blowing up the whole scheme and putting everyone's information at risk. So, if you're going to show up with a "plan" to backdoor encryption, you better have a pretty convincing argument for how you avoid that issue (because the reality is you can't).

For at least a year (probably more) the one name that has kept coming up over and over as one of the few techies who insists that the common wisdom on backdooring encryption is wrong... is Ray Ozzie. Everyone notes that he's Microsoft's former Chief Software Architect and CTO, but some of us remember him from way before that, when he created Lotus Notes and Groove Networks (which was supposed to be the nirvana of collaboration software). In recent months his name has popped up here and there, often invoked by FBI/DOJ folks seeking to backdoor encryption, as someone who might have a way forward.

And, recently, Wired did a big story on his backdoor idea, where he plays right into the FBI's "nerd harder" trope, by saying exactly what the FBI wants to hear, and which nearly every actual security expert says is wrong:

Ozzie, trim and vigorous at 62, acknowledged off the bat that he was dealing with a polarizing issue. The cryptographic and civil liberties community argued that solving the problem was virtually impossible, which “kind of bothers me,” he said. “In engineering if you think hard enough, you can come up with a solution.” He believed he had one.

This, of course, is the same sort of thing that James Comey, Christopher Wray and Rod Rosenstein have all suggested in the past few years: "you techies are smart, if you just nerd harder, you'll solve the problem." Ozzie, tragically, is giving them ammo. But he's not delivering the actual goods.

The Wired story details his plan, which is not particularly unique. It takes concepts that others have proposed (and which have been shown to not be particularly secure) and puts a fresh coat of paint on them. Basically, the vendor of a device has a private key that it needs to keep secret, and under some "very special circumstances" it can send an employee into the dark chamber to do the requisite dance, retrieve the code, and give it to law enforcement. That's been suggested many times, and it's been explained many times why that opens up all sorts of dangerous scenarios that could put everyone at risk. The one piece that does seem different is that Ozzie wants a sort of limit on the damage his system does if it goes wrong (in one particular way): if the backdoor is used, it can only be used on one phone, and it then disables that phone forever:

Ozzie designed other features meant to ­reassure skeptics. Clear works on only one device at a time: Obtaining one phone’s PIN would not give the authorities the means to crack anyone else’s phone. Also, when a phone is unlocked with Clear, a special chip inside the phone blows itself up, freezing the contents of the phone thereafter. This prevents any tampering with the contents of the phone. Clear can’t be used for ongoing surveillance, Ozzie told the Columbia group, because once it is employed, the phone would no longer be able to be used.
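The one-shot flow described in that excerpt can be sketched as a toy state machine. To be clear, this is purely illustrative: the class, the method names, and the dict standing in for "decrypt with the vendor's guarded private key" are all my own assumptions, not Ozzie's actual design.

```python
# Toy illustration of the one-shot "Clear" unlock flow. Real public-key
# escrow is elided; the vendor's vault is faked with a plain dict.

class Phone:
    def __init__(self, serial, pin, vendor_escrow):
        self.serial = serial
        self._pin = pin
        self.bricked = False
        # On setup, the PIN is "encrypted to the vendor's key" -- here,
        # simply deposited in the vendor's escrow table.
        vendor_escrow[serial] = pin

    def law_enforcement_unlock(self, escrowed_pin):
        # Any use of the escrow path permanently freezes the device,
        # so the backdoor cannot be reused for ongoing surveillance.
        if self.bricked:
            raise RuntimeError("device already frozen")
        self.bricked = True
        return escrowed_pin == self._pin

vendor_escrow = {}                 # stands in for the guarded vendor key
phone = Phone("SN-1", "4321", vendor_escrow)

recovered = vendor_escrow["SN-1"]  # the "vault ceremony"
assert phone.law_enforcement_unlock(recovered)  # unlocks once...
assert phone.bricked                            # ...and bricks the phone
```

Note that even in this cartoon version, all of the security rests on one line: how well `vendor_escrow` is guarded, which is exactly where the critics below focus.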

So, let's be clear. That piece isn't what's useful in "reassuring skeptics." That piece is the only thing that really appears to be that unique about Ozzie's plan. And it hasn't done much to reassure skeptics. As the report notes, when Ozzie laid this out at a special meeting of super smart folks in the field, it didn't take long for one to spot a hole:

The most dramatic comment came from computer science professor and cryptographer Eran Tromer. With the flair of Hercule Poirot revealing the murderer, he announced that he’d discovered a weakness. He spun a wild scenario involving a stolen phone, a second hacked phone, and a bank robbery. Ozzie conceded that Tromer found a flaw, but not one that couldn’t be fixed.

"Not one that couldn't be fixed." But it took this guy just hearing about the system to find the flaw. There are more flaws. And they're going to be catastrophic. Because that's how cryptography works. Columbia computer science professor and all-around computer security genius Steve Bellovin (who was also at that meeting) highlights how Tromer's flaw-spotting shows why Ozzie's plan is a fantasy with dangerous consequences:

Ozzie presented his proposal at a meeting at Columbia—I was there—to a diverse group. Levy wrote that Ozzie felt that he had "taken another baby step in what is now a two-years-and-counting quest" and that "he'd started to change the debate about how best to balance privacy and law enforcement access". I don't agree. In fact, I think that one can draw the opposite conclusion.

At the meeting, Eran Tromer found a flaw in Ozzie's scheme: under certain circumstances, an attacker can get an arbitrary phone unlocked. That in itself is interesting, but to me the important thing is that a flaw was found. Ozzie has been presenting his scheme for quite some time. I first heard it last May, at a meeting with several brand-name cryptographers in the audience. No one spotted the flaw. At the January meeting, though, Eran squinted at it and looked at it sideways—and in real-time he found a problem that everyone else had missed. Are there other problems lurking? I wouldn't be even slightly surprised. As I keep saying, cryptographic protocols are hard.

Bellovin also points out -- as others have before -- that there's a wider problem here: how other countries will use whatever stupid example the US sets for much more nefarious purposes:

If the United States adopts this scheme, other countries, including specifically Russia and China, are sure to follow. Would they consent to a scheme that relied on the cooperation of an American company, and with keys stored in the U.S.? Almost certainly not. Now: would the U.S. be content with phones unlockable only with the consent and cooperation of Russian or Chinese companies? I can't see that, either. Maybe there's a solution, maybe not—but the proposal is silent on the issue.

And we're just getting started on how many experts are weighing in on just how wrong Ozzie is. Errata Security's Rob Graham pulls no punches in pointing out that:

He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.

Specifically, Ozzie's plan relies on the idea that companies can keep their master private key safe. To support that this is possible, Ozzie (as the FBI has in the past) points to the fact that companies like Apple already keep their signing keys secret. And that's true. But that incorrectly assumes that signing keys and decryption keys are the same thing and can be treated the same way. They're not, and they can't be. The security protocols around signing keys are intense, but part of that intensity is built around the idea that you almost never have to use a signing key.

A decryption key is a different story altogether, especially with the FBI blathering on about thousands of phones it wants to dig its digital hands into. And, as Graham notes, you quickly run into a scaling issue, and with that scale, you ruin any chance of keeping that key secure.

Yes, Apple has a vault where they've successfully protected important keys. No, it doesn't mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.
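To put rough numbers on Graham's scaling point: every "touch" of a key is an exposure opportunity. The requests-per-day figure below comes from the quote above; the signing-key cadence (roughly one release signing per month) is my own illustrative assumption.

```python
# Back-of-the-envelope comparison of how often each kind of key is
# touched per year. More touches means more chances for mistakes,
# bribery, or deliberate attack.

signing_key_touches_per_year = 12            # assume ~monthly OS releases
escrow_requests_per_day = 2_000              # "thousands of requests per day"
escrow_key_touches_per_year = escrow_requests_per_day * 365

print(signing_key_touches_per_year)          # 12
print(escrow_key_touches_per_year)           # 730000
```

Even with these deliberately conservative numbers, the escrow key gets exercised tens of thousands of times more often than the signing key, which is the gap between a rarely opened vault and a busy service counter.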

And, even worse, when that happened, we wouldn't even know.

If Ozzie's master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works -- but then his scheme includes none of the many protections necessary to make SSL work.

What I'm trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down -- quickly. We have so much experience with failure at scale that we can judge Ozzie's scheme as woefully incomplete. It's not even up to the standard of SSL, and we have a long list of SSL problems.

And so Ozzie's scheme relies on an impossibility. That you could protect a decryption key that has to be used frequently, the same way that a signing key is currently protected. And that doesn't work. And when it fails, everyone is seriously fucked.

Graham's article also notes that Ozzie is -- in true nerd harder fashion -- focusing on this as a technological problem, ignoring all the human reasons why such a system will fail and such a key won't be protected.

It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don't know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the "trusted Apple employee" can't be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren't.

Cryptography expert (and professor at Johns Hopkins) Matt Green did a fairly thorough tweetstorm debunking of Ozzie's plan as well. He also points out, as Graham does, the disaster scenario of what happens when (not if) the key gets out. But an even bigger point that Green makes is that Ozzie's plan relies on a special chip in every device... and assumes that we'll design that chip to work perfectly and never get broken. And that's ridiculous.

Green and Graham also both point to the example of GrayKey, the recently reported on tool that law enforcement has been using to crack into all supposedly encrypted iPhones. Already, someone has hacked into the company behind GrayKey and leaked some of the code.

Put it all together and suddenly the fawning over Ozzie's plan doesn't look so good any more, does it? And, again, these are the problems that everyone who has dug into why backdoors are a bad idea has pointed out before.

Green expanded some of his tweets into a blog post as well, which is also worth reading. In it, he also points out that even if we acknowledge the difference between signing keys and decryption keys, companies aren't even that good at keeping signing keys safe (and those are almost certainly going to be more protected than decryption keys, since they need to be accessed much less frequently):

Moreover, signing keys leak all the time. The phenomenon is so common that journalists have given it a name: it’s called “Stuxnet-style code signing”. The name derives from the fact that the Stuxnet malware — the nation-state malware used to sabotage Iran’s nuclear program — was authenticated with valid code signing keys, many of which were (presumably) stolen from various software vendors. This practice hasn’t remained with nation states, unfortunately, and has now become common in retail malware.

And he also digs deeper into the point he made in his tweetstorm about how on the processor side, not even Apple has been able to keep its secure chip from being broken -- yet Ozzie's plan is based almost entirely on the idea that such an unbreakable chip would be available:

The richest and most sophisticated phone manufacturer in the entire world tried to build a processor that achieved goals similar to those Ozzie requires. And as of April 2018, after five years of trying, they have been unable to achieve this goal — a goal that is critical to the security of the Ozzie proposal as I understand it.

Now obviously the lack of a secure processor today doesn’t mean such a processor will never exist. However, let me propose a general rule: if your proposal fundamentally relies on a secure lock that nobody can ever break, then it’s on you to show me how to build that lock.

So that's a bunch of experts highlighting why Ozzie's plan is silly. But, from the policy side it's awful too. Because having Ozzie going around and spouting this debunked nonsense, but with his pedigree, simply gives the "going dark" and "responsible encryption" pundits something to grasp onto to claim they were right all along, even though they weren't. They've said for years that the techies just need to nerd harder, and they will canonize Ray Ozzie as the proof that they were right... even though they're not and his plan doesn't solve any of the really hard problems.

And, as we noted much earlier in this post, cryptography is one of those areas where the hard problems really fucking matter. And if Ozzie's plan doesn't even touch on most of the big ones, it's no plan at all. It's a Potemkin Village that law enforcement types will parade around for the next couple of years insisting that backdoors can be made safely, even though Ozzie's plan is not safe at all. I am sure that Ray Ozzie means well -- and I've got tremendous respect for him and have for years. But what he's doing here is actively harmful -- even if his plan is never implemented. Giving the James Comeys and Chris Wrays of the world some facade they can cling to to say that this can be done is only going to create many more problems.


Posted on Techdirt - 26 April 2018 @ 3:23am

Hollywood Front Groups Decide To Kick Facebook While It's Down, Advocate For More Internet Regulations

from the this-will-surprise-basically-no-one dept

It's no secret at all (though they tried to hide it) that Hollywood and various MPAA front groups were heavily involved behind the scenes in getting FOSTA/SESTA passed and signed into law. It all goes back to Project Goliath, the plan put together by the MPAA a few years back to use any means necessary to attack the fundamental principles of an open internet. While there have been all sorts of attempts, SESTA (i.e., misrepresenting the problem of sex trafficking as an internet problem, and then pushing legislation that won't stop sex trafficking, but will harm internet companies) was the first to make it through.

But it's unlikely to be the last. Immediately on the heels of everyone now hating on Facebook, various MPAA front groups led by CreativeFuture and the Content Creators Coalition -- both of which consistently parrot complete nonsense about how the internet is evil (amusingly, sometimes using the very platforms they seek to destroy) -- have now sent a letter to lawmakers demanding more regulation of the internet and, in particular, more chipping away at the intermediary liability protections that enable the free and open internet (the letter was first reported by TorrentFreak).

Most of the letter continues to play up the exaggerated moral panic around Facebook's actions. As we've noted many times, there are reasons to complain about Facebook, but so many of the complaints focus on bad solutions, and that's absolutely true of this particular letter. Specifically, this letter presents three demands:

Last week’s hearing was an important first step in ensuring that Facebook, Google, Twitter, and other internet platforms must (1) take meaningful action to protect their users’ data, (2) take appropriate responsibility for the integrity of the news and information on their platforms, and (3) prevent the distribution of unlawful and harmful content through their channels.

On number one: yes, companies should do a better job protecting data, but the real issue is that companies shouldn't be holding onto so much data in the first place. Rather, individual internet users should have a lot more control and power over their own data, which is very different from what these Hollywood groups are demanding. Besides, given Hollywood's history of being hacked and leaking all sorts of data, it certainly seems like a glass-houses sort of situation, doesn't it?

As for number two: "take appropriate responsibility for the integrity of the news and information on their platforms." Really? This is Hollywood and content creators directly calling for censorship, which is truly fucked up if you think about it. After all, for much of Hollywood's history, politicians have complained about the kind of content it puts out and demanded censorship in response. Is Hollywood now really calling for other industries to go through the same sort of nonsense? Should we apply the same rules to the MPAA studios? When they put out movies that are a historical farce, such as the very, very wrong propaganda flick Zero Dark Thirty, should Hollywood be required to "take appropriate responsibility" for spewing pro-torture propaganda? Because if they're insisting that internet platforms have to take responsibility for what users post, it's only reasonable to say that Hollywood studios should take responsibility when they release movies that are similar nonsense.

And, finally, number three: preventing the distribution of unlawful and "harmful" content. Again, one has to wonder what the fuck happened to the legacy entertainment industry that it would now be advocating for some sort of legal ban on "harmful content." Remember, this is the same industry that has regularly been accused of producing "harmful" TV shows, movies and music. And now they're on record speaking out against harmful content? How quickly do you think that's going to boomerang back on Hollywood concerning its own content?

It's almost as if Hollywood is so focused on its hatred of the internet that the geniuses they brought in to run these front groups have no clue how their own arguments will end up shooting content creators right in the foot. I mean, if we're going to stop "harmful" content, doesn't that just give more fodder to religious groups attacking the legacy entertainment industry over "blasphemy," sex and drugs? Won't groups advocating against loosening morals use that to demand that Hollywood stop producing films that support these kinds of activities? Or what about violence, which Hollywood has glorified for decades?

Now, some of us who actually support free speech recognize that Hollywood should be able to produce those kinds of movies and TV shows, and musicians should be able to record whatever music they want. But we also think that internet platforms should be free to decide what content they allow on their platforms. It's a shame that Hollywood seems to think free speech is only important in the special circumstances when it applies to professionally produced content. Because that's exactly what this letter is suggesting.

The letter also includes this nonsense:

The real problem is not Facebook, or Mark Zuckerberg, regardless of how sincerely he seeks to own the “mistakes” that led to the hearing last week. The problem is endemic in a system that applies a different set of rules to the internet and fails to impose ordinary norms of accountability on businesses that are built around monetizing other people’s personal information and content.

This is... wrong. There isn't a "different set of rules." CDA 230 and DMCA 512 are both rules designed to properly apply liability to the party who actually breaks the law. Both of them say that just because someone uses a platform for illegal behavior it doesn't make the platform liable (the individual is still very much liable). That's not a different set of rules. And to argue that internet companies are not "accountable" is similarly ridiculous. We have a decently long history of the internet at this point, and we see, over and over again, when companies get too powerful, they become complacent. And when they do dumb things, competitors spring up, and the older companies fade away.

Hollywood, of course, isn't quite used to that kind of creative destruction. The major studios of the MPAA are 20th Century Fox (founded: 1935), Paramount (founded: 1912), Universal Studios (founded: 1912), Warner Bros. (founded: 1923), Disney Studios (founded: 1923) and Sony Pictures (which traces its lineage back to Columbia Pictures in 1924 or CBC Film Sales Corp in 1918). In other words, these are not entities used to creative upstarts taking their place. They work on the belief that the big guys are always the big guys.

And, really, at a time when many of Hollywood's biggest names are being brought down in "me too" moments, when it's clear that they had institutional support of their abuse going back decades, is it really appropriate for Hollywood, of all places, to be arguing that the tech industry needs to take more responsibility? This seems like little more than a hypocritical attempt by the usual MPAA front groups to kick Facebook while it's down and use the anger over Facebook's mistakes to chip away at the internet they've always disliked.

Read More | 15 Comments | Leave a Comment..

Posted on Techdirt - 25 April 2018 @ 3:30pm

Turns Out Lots Of People Want To Play The CIA's Card Game

from the collect-it-all dept

Well, it appears we can both confirm and acknowledge that lots and lots of people want to play the CIA's in-house training card game. As we announced on Monday, we've taken the available details of the internal CIA game Collection Deck, and are in the process of turning it into a version you can actually play, which we're renaming CIA: Collect It All. To see if anyone else actually wanted it, we put it on Kickstarter and set what we thought was a fairly high bar: $30,000. And yet, we hit that in about 40 hours and we still have more than three and a half weeks to go. We're a bit blown away by how many people are interested, and we're committed to making the game as awesome as we can possibly make it. We recently posted an update to the campaign concerning questions around international shipping, since that's been a big topic of conversation, so if you're interested in that, go check it out.

CIA: Collect It All on Kickstarter

Either way, thanks to all of you who quickly jumped in and backed the campaign (and told others about it). As we've noted in the campaign, the idea here is to do this as a one-shot deal, not to keep making the game. So, while anyone can download the FOIA'd release of the rules and make their own, if you want one of our versions, you'll need to back this campaign.

CIA: Collect It All on Kickstarter

12 Comments | Leave a Comment..

Posted on Techdirt - 25 April 2018 @ 9:33am

Could The DOJ Be Violating SESTA/FOSTA?

from the quite-possible dept

Last week, Gizmodo's Dell Cameron had a great report on how the DOJ's Amber Alert site was configured so stupidly that it could be used to redirect people to any website (this was also true of weather.gov and the National Oceanic and Atmospheric Administration). And it was being used. To redirect people to hardcore porn. Basically, the sites were designed such that just by knowing the right URL and adding a new URL to the end, it would redirect to those sites. Porn sites used this for a couple of reasons: first, since they'd now be getting referrals from high-ranking sites, it could help their Google ranking. Second, because the primary URL would come from a trusted source, it would, again, help their Google ranking. And, finally, the links may look much more legit to people doing searches (though that would be more true of scam sites than porn sites).

Redirect scripts like this used to be fairly common, but they died off long ago. Except in the federal government. From Cameron's article:

“This is like the 1990s called and wants its vulnerable redirect script back,” said Adriel Desautels, founder of the penetration testing firm Netragard.
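The '90s-era pattern Desautels is describing, an endpoint that forwards visitors to whatever URL is tacked onto the query string, is cheap to fix with an allowlist check. Here's a minimal sketch of both the vulnerable and the safer version (the hostnames and function names are hypothetical, for illustration only):

```python
from urllib.parse import urlparse

# Vulnerable pattern: blindly forward to whatever URL the request supplies,
# e.g. https://amber-alert.example/redirect?url=https://some-porn-site.example
def unsafe_redirect_target(requested_url: str) -> str:
    return requested_url  # the attacker fully controls the destination

# Safer pattern: only forward to hosts on an explicit allowlist.
TRUSTED_HOSTS = {"weather.gov", "noaa.gov"}  # hypothetical allowlist

def safe_redirect_target(requested_url: str, fallback: str = "/") -> str:
    host = urlparse(requested_url).hostname or ""
    if host in TRUSTED_HOSTS:
        return requested_url
    return fallback  # refuse to forward visitors off-site
```

The fix is a handful of lines, which is part of what makes the vulnerability's survival on federal sites so striking.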

But, here's the thing: does this mean that the DOJ (and the NOAA) could be violating SESTA/FOSTA? It's possible! And that just goes to show how poorly drafted the law is. Remember, under the law, it is now illegal to "participate in a venture" that "knowingly" is "assisting, supporting, or facilitating" a violation of sex trafficking laws. So, if someone were to create a DOJ Amber Alert redirect to a sex trafficking website (or just an escort site, since people keep insisting those serve little purpose other than sex trafficking), would the DOJ be in violation?

The obvious response is that the DOJ isn't "knowingly" doing this. But... is that true? As Cameron's article notes, every time you hit one of those Amber Alert redirects, the DOJ gives you a nice little parting message:

Is that enough to "knowingly" participate? Maybe. I would bet that if non-governmental websites popped up similar messages, SESTA/FOSTA supporters would argue it's proof of knowledge. After all, Rep. Cathy McMorris Rodgers claimed that merely "turning a blind eye" was enough to prove "knowledge." And here, clearly, the DOJ must be logging those exit pages. Is it ignoring them? Is that turning a blind eye? Does that count as knowledge?

Maybe it's a stretch, but the fact that the language of the bill even makes this a possibility just demonstrates how poorly drafted the bill is, and shame on all the politicians who refused to step up and fix it.

29 Comments | Leave a Comment..

Posted on Techdirt - 24 April 2018 @ 3:46pm

Patent Troll That Sued EFF And Lost... Now Loses Its Bullshit Patent As Well

from the trolling-karma dept

Remember GEMSA (Global Equity Management (SA) Pty. Ltd.)? That's the Australian patent troll who "won" a Stupid Patent of the Month award from EFF for its silly patent (US Patent 6,690,400 on "virtual cabinets representing a discrete operating system"). GEMSA sued a bunch of companies, including Airbnb and Zillow, for supposedly violating the patent. Oh, and then it sued EFF in Australia, getting an order from the court demanding that EFF take down its article and barring EFF from ever publishing anything about any GEMSA patents.

That kinda thing is not going to fly in the US, and so EFF went to court in the US, seeking declaratory judgment that such an Australian court order was totally unenforceable in the US under the SPEECH Act. Late last year, the court gave a thorough and complete victory to EFF, making it clear that GEMSA could not, in any way, hope to enforce its Australian order in the US, as it clearly would violate EFF's First Amendment rights.

And now, the US Patent Office has basically killed GEMSA's patent that EFF called out in the first place, via the all important inter partes review system that is currently being challenged at the Supreme Court (ruling coming soon...).

The ’400 patent described its “invention” as “a Graphic User Interface (GUI) that enables a user to virtualize the system and to define secondary storage physical devices through the graphical depiction of cabinets.” In other words, virtual storage cabinets on a computer. eBay, Alibaba, and Booking.com filed a petition for inter partes review arguing that claims from the ’400 patent were obvious in light of the Partition Magic 3.0 User Guide (1997) from PowerQuest Corporation. Three administrative patent judges from the Patent Trial and Appeal Board (PTAB) agreed.

The PTAB opinion notes that Partition Magic’s user guide teaches each part of the patent’s Claim 1, including the portrayal of a “cabinet selection button bar,” a “secondary storage partitions window,” and a “cabinet visible partition window.”

The opinion demonstrated this graphically as well:

The PTAB laughed off GEMSA's argument that the original owner of the patent, Flash Vos, somehow "moved the computer industry a quantum leap forward in the late 90's" by pointing out that GEMSA "has put forth no evidence that Flash Vos or GEMSA actually had any commercial success." Ouch.

I'm curious if GEMSA will now seek to sue the US Patent Office in Australia as well...

Read More | 14 Comments | Leave a Comment..

Posted on Techdirt - 24 April 2018 @ 9:29am

Facebook Derangement Syndrome: Don't Blame Facebook For Company Scraping Public Info

from the it's-public-info dept

Earlier this month I talked a little bit about "Facebook Derangement Syndrome" in which the company, which has real and serious issues, is getting blamed for other stuff. It's fun to take potshots at Facebook, and we can talk all we want about the actual problems Facebook has (specifically its half-hearted attempts at transparency and user control), but accusing the company of all sorts of things that are not actually a problem doesn't help. It actually makes it that much harder to fix things.

The latest case in point. Zack Whittaker, who is one of the absolute best cybersecurity reporters out there, had a story up recently on ZDNet about a data mining firm called Localblox that was pulling all sorts of info to create profiles on people... leaking 48 million profiles by failing to secure an Amazon S3 instance (like so many such Amazon AWS leaks, this one was spotted by Chris Vickery at UpGuard, who seems to spot leaks from open S3 instances on a weekly basis).

There is a story here and Whittaker's coverage of it is good and thorough. But the story is in Localblox's crap security (though the company has tried to claim that most of those profiles were fake and just for testing). However, many people are using the story... to attack Facebook. Digital Trends claims that this story is "the latest nightmare for Facebook." Twitter users were out in force blaming Facebook.

But, if you look at the details, this is just Facebook Derangement Syndrome all over again. Localblox built up its data via a variety of means, but the Facebook data was apparently scraped. That is, it used its computers to scrape public information from Facebook accounts (and Twitter, LinkedIn, Zillow, elsewhere) and then combined that with other data, including voter rolls (public!) and other data brokers, to build more complete profiles. Now, it's perfectly reasonable to point out that combining all of this data can raise some privacy issues -- but, again, that's a Localblox issue if there's a real issue there, rather than a Facebook one.
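As a rough illustration of what "combining all of this data" means in practice (the names, fields and records below are invented for the example, not Localblox's actual schema), building a fuller profile is little more than merging record sets keyed on some shared identifier:

```python
# Hypothetical example: merge scraped public profile fields with other
# public records (e.g. voter rolls) keyed on a shared identifier.
def merge_profiles(scraped: dict, records: dict) -> dict:
    merged = {}
    for person in set(scraped) | set(records):
        # On field collisions, the scraped data overwrites the records.
        merged[person] = {**records.get(person, {}), **scraped.get(person, {})}
    return merged

scraped = {"jane_doe": {"employer": "Acme", "city": "Springfield"}}
voter_rolls = {"jane_doe": {"party": "Independent"},
               "john_roe": {"party": "Green"}}
profiles = merge_profiles(scraped, voter_rolls)
```

Each individual source here is public on its own; the privacy concern comes from the aggregation, which is why it's a Localblox issue rather than a Facebook one.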

And, this is clearly the kind of thing that Facebook actively tries to prevent. Remember, as we've covered, the company went on a legal crusade against another scraper company, Power.com, using the CFAA to effectively kill that company's useful service.

Here's why this kind of thing matters: if you blame Facebook for this kind of thing, then you actively encourage Facebook to go out of its way to block scraping or other efforts to free up user data. That means, it ends up giving Facebook more control over user data. Allowing scrapers of public info (again, the fact that this is public info is important) could actually limit Facebook's powers, and enable other companies to pop up and make use of the data inside Facebook to build other (competing) services. The ability to scrape Facebook would allow third parties to build tools to give users more control over their Facebook accounts.

But when we look on scraping of public info as somehow a "breach" of Facebook (which, again, is separate from the messed up nature of Localblox leaking data itself), we're pushing everyone towards a world where Facebook has more control, more dominance and less competition. And that should be the last thing that anyone (outside of Facebook) wants.

22 Comments | Leave a Comment..

Posted on Techdirt - 23 April 2018 @ 12:27pm

It's Spreading: Lindsey Graham Now Insisting 'Fairness Doctrine' Applies To The Internet

from the aren't-you-supposed-to-understand-law? dept

Remember when Republicans were against the "Fairness Doctrine"? Apparently, that's now out the window, so long as they can attack Facebook. As we noted recently, Senator Ted Cruz appears to be pushing for the strangest interpretation of Section 230 around (in direct conflict with (a) what the law says and (b) how the courts have interpreted it), saying that in order to make use of CDA 230's "good samaritan" immunity clause, internet service providers need to be "neutral." Again, that's not what the law says. It's also an impossible standard, and one that would lead to results that would piss off lots of people. The similarities to the FCC's concept of the "Fairness Doctrine" are pretty clear, though such a rule on the internet would be an even bigger deal, since the Fairness Doctrine only applied to broadcast TV.

And, it appears that Cruz's incorrect interpretation is spreading like a virus. Senator Lindsey Graham is now spewing the same nonsense.

Sen. Lindsey Graham (R-S.C.), a subcommittee chairman on the Senate Judiciary Committee, told MT that he remains concerned about potential bias when it comes to content moderation on social media platforms. “They enjoy liability protections because they’re neutral platforms. At the end of the day, we’ve got to prove to the American people that these platforms are neutral,” he said. “You’ve got to have a validating system where the government can come in and validate that this whole system is neutral.”

So much about this is wrong. First, Section 230 of the CDA has never required platforms to be neutral. Indeed, it would be silly to do so because what the hell does "neutral" even mean in this context? Second, Graham claims that someone needs to "prove" that the platforms are neutral. What does that even mean? How would you prove that anything is "neutral" anyway? Finally, saying that the government has to validate that your internet service is neutral raises a very large number of very serious First Amendment questions. What kind of Senator is not only so wrong about the law, but thinks that the appropriate role of government is to "validate" internet platforms over what kind of speech they allow?

Separately, this is the same Lindsey Graham who just recently was demanding that social media sites do more to take down content he didn't like. Now, apparently, he's up in arms over the fact that the sites took down content he did like. If Graham truly wants websites to "do everything possible to combat" terrorist groups using the internet, then attacking CDA 230 is the worst possible way to do that. CDA 230 gives websites the power to moderate and filter out such content without fear of facing legal liability. In other words, it's an excellent tool for getting websites to take down extremist content. To then turn around and insist that sites should lose CDA 230 protections because they also took down some content you like... raises all sorts of First Amendment issues. You're basically saying websites should only remove the content I dislike, and if they remove content I like I'm going to put their existence at risk. Guess what happens then? Sites will stop moderating entirely, leaving up more of the "bad" content you dislike.

Oh, and back to that whole "fairness doctrine" thing. Guess who is vehemently against it? You guessed right: Lindsey Graham. Opposing the Fairness Doctrine is one of the few issues on his tech campaign platform and a few years back he put out a press release touting his conservative credentials including "barring the use of federal funds to re-instate the Fairness Doctrine" (which was silly anyway, because there was no appetite to re-instate the Fairness Doctrine).

Except, now, as soon as he can pretend to be up in arms over Facebook, Graham seems to jump on the idea of a Fairness Doctrine for the internet. Not only that, but he seems to think it's already in place (it isn't) and requires the government to go in and validate platforms over what speech they allow.

58 Comments | Leave a Comment..

Posted on Techdirt - 23 April 2018 @ 9:00am

The CIA Made A Card Game... And We're Releasing It

from the come-and-get-it dept

Yes, the CIA made a card game. And... we're releasing it. No, really. If you want to play the top secret card game that the CIA used to train analysts, you can now back our Kickstarter project for CIA: Collect It All.

CIA: Collect It All on Kickstarter

Let me explain how we got here...

We write a lot about the CIA here on Techdirt -- often covering just how secretive the organization is around responding to FOIA requests. After all, this is the same organization that invented the famous "Glomar Response" to a FOIA request: the now ubiquitous "we can neither confirm, nor deny." And that one "invention" is used all the time. Indeed, if you have a few extra hours to spend, feel free to go through just our archives demonstrating CIA obstructionism over FOIA.

But... the organization actually did recently respond to a set of interesting FOIA requests. Back in 2017, at SXSW, the CIA revealed its gaming efforts, and even let some attendees play them. That resulted in a few FOIA requests for the details of the game, including one by MuckRock's Mitchell Kotler and another by entrepreneur Douglas Palmer. In response to the FOIA requests, the CIA released the details of some of the games (though, somewhat redacted, and in typical FOIA response gritty photo-copy style), including a card game called "Collection Deck." My first reaction was... "Hey, that would be fun to play..." And then I had a second thought.

There's another super popular topic here on Techdirt: the public domain and how important it is to build on works in the public domain. Remember, under Section 105 of the US Copyright Act, works of the federal government of the United States are not subject to copyright and are in the public domain.

We've already been working with Randy Lubin of Diegetic Games on a few different projects (including Working Futures and others you'll need to stay tuned for). So, we started talking about making a version of the CIA's game to play for ourselves. And everyone we mentioned it to wanted to play as well. And the more we looked at the details, the more we realized that we could make a much nicer version (while paying homage to the original and its route through FOIAdom) that was playable, and maybe even offer some changes, fixes and alternative rules. We decided to name our version, "CIA: Collect It All." Not only does "Collect It All" spell out CIA and pay homage to the CIA's "Collection Deck" name, "Collect It All" was also General Keith Alexander's surveillance motto that we roundly mocked due to its inherent conflict with the old 4th Amendment. Anyway, this seemed like a way to take back the phrase a bit.

And that led us to Kickstarter. We're using Kickstarter in the real original sense of Kickstarter. We had an idea that we thought was pretty damn cool that we wanted for ourselves. And we want to see if others want it as well so we can produce it at scale. If people want it, awesome. We'll make a bunch. If we're wrong and no one really wants it... well, we'll probably still make a copy for ourselves, but you're on your own, working with redacted photocopies.

CIA: Collect It All on Kickstarter

So... here's a chance to:

  1. Get a cool, fun game that until just recently was a top secret training game created by the CIA -- which, come on, is pretty cool
  2. Help support Techdirt and all the reporting we do (including reporting on the CIA pushing back on FOIA requests)
  3. Demonstrate why building on the public domain is a good thing
  4. Did I mention that you get to play a fun game, with awesome design work (much better than the CIA's), that was originally created by the CIA?
So, check it out and back us on Kickstarter. And tell your friends. Because, look, they wanted to be CIA agents when they were kids too.

37 Comments | Leave a Comment..

Posted on Techdirt - 20 April 2018 @ 7:39pm

Democratic National Committee's Lawsuit Against Russians, Wikileaks And Various Trump Associates Full Of Legally Nutty Arguments

from the slow-down-there-dnc dept

This morning I saw a lot of excitement and happiness from folks who greatly dislike President Trump over the fact that the Democratic National Committee had filed a giant lawsuit against Russia, the GRU, Guccifer 2.0, Wikileaks, Julian Assange, the Trump campaign, Donald Trump Jr., Jared Kushner, Paul Manafort, Roger Stone and a few other names you might recognize if you've followed the whole Trump / Russia soap opera over the past year and a half. My first reaction was that this was unlikely to be the kind of thing we'd cover on Techdirt, because it seemed like a typical political thing. But, then I looked at the actual complaint and it's basically a laundry list of the laws that we regularly talk about (especially about how they're abused in litigation). Seriously, look at the complaint. There's a CFAA claim, an SCA claim, a DMCA claim, a "Trade Secrets Act" claim... and everyone's favorite: a RICO claim.

Most of the time when we see these laws used, they're indications of pretty weak lawsuits, and going through this one, that definitely seems to be the case here. Indeed, some of the claims made by the DNC here are so outrageous that they would effectively make some fairly basic reporting illegal. One would have hoped that the DNC wouldn't seek to set a precedent that reporting on leaked documents is against the law -- especially given how reliant the DNC now is on leaks being reported on in their effort to bring down the existing president. I'm not going to go through the whole lawsuit, but let's touch on a few of the more nutty claims here.

The crux of the complaint is that these groups / individuals worked together in a conspiracy to leak DNC emails and documents. And, there's little doubt at this point that the Russians were behind the hack and leak of the documents, and that Wikileaks published them. Similarly there's little doubt that the Trump campaign was happy about these things, and that a few Trump-connected people had some contacts with some Russians. Does that add up to a conspiracy? My gut reaction is to always rely on Ken "Popehat" White's IT'S NOT RICO, DAMMIT line, but I'll leave that analysis to folks who are more familiar with RICO.

But let's look at parts we are familiar with, starting with the DMCA claim, since that's the one that caught my eye first. A DMCA claim? What the hell does copyright have to do with any of this? Well...

Plaintiff's computer networks and files contained information subject to protection under the copyright laws of the United States, including campaign strategy documents and opposition research that were illegally accessed without authorization by Russia and the GRU.

Access to copyrighted material contained on Plaintiff's computer networks and email was controlled by technological measures, including measures restricting remote access, firewalls, and measures restricting access to users with valid credentials and passwords.

In violation of 17 U.S.C. § 1201(a), Russia, the GRU, and GRU Operative #1 circumvented these technological protection measures by stealing credentials from authorized users, conducting a "password dump" to unlawfully obtain passwords to the system controlling access to the DNC's domain, and installing malware on Plaintiff's computer systems.

Holy shit. This is the DNC trying to use DMCA 1201 as a mini-CFAA. They're not supposed to do that. 1201 is the anti-circumvention part of the DMCA and is supposed to be about stopping people from hacking around DRM to free copyright-covered material. Of course, 1201 has been used in all sorts of other ways -- like trying to stop the sale of printer cartridges and garage door openers -- but this seems like a real stretch. Russia hacking into the DNC had literally nothing to do with copyright or DRM. Squeezing a copyright claim in here is just silly and could set an awful precedent about using 1201 as an alternate CFAA (we'll get to the CFAA claims in a moment). If this holds, nearly any computer break-in to copy content would also lead to DMCA claims. That's just silly.

Onto the CFAA part. As we've noted over the years, the Computer Fraud and Abuse Act is quite frequently abused. Written in response to the movie War Games to target "hacking," the law has been used for basically any "this person did something we dislike on a computer" type issues. It's been dubbed "the law that sticks" because in absence of any other claims that one always sticks because of how broad it is.

At least this case does involve actual hacking. I mean, someone hacked into the DNC's network, so it actually feels (amazingly) that this may be one case where the CFAA claims are legit. Those claims are just targeting the Russians, who were the only ones who actually hacked the DNC. So, I'm actually fine with those claims. Other than the fact that they're useless. It's not like the Russian Federation or the GRU is going to show up in court to defend this. And they're certainly not going to agree to discovery. I doubt they'll acknowledge the lawsuit at all, frankly. So... reasonable claims, impossible target.

Then there's the Stored Communications Act (SCA), which is a part of ECPA, the Electronic Communications Privacy Act, which we've written about a ton and it does have lots of its own problems. These claims are also just against Russia, the GRU and Guccifer 2.0, and like the DMCA claims appear to be highly repetitive with the CFAA claims. Instead of just unauthorized access, it's now unauthorized access... to communications.

It's then when we get into the trade secrets part where things get... much more problematic. These claims are brought against not just the Russians, but also Wikileaks and Julian Assange. Even if you absolutely hate and / or distrust Assange, these claims are incredibly problematic against Wikileaks.

Defendants Russia, the GRU, GRU Operative #1, WikiLeaks, and Assange disclosed Plaintiff's trade secrets without consent, on multiple dates, discussed herein, knowing or having reason to know that trade secrets were acquired by improper means.

If that violates the law, then the law is unconstitutional. The press regularly publishes trade secrets that may have been acquired by improper means by others and handed to the press (as is the case with this content being handed to Wikileaks). Saying that merely disclosing the information is a violation of the law raises serious First Amendment issues for the press.

I mean, what's to stop President Trump from using the very same argument against the press for revealing, say, his tax returns? Or reports about business deals gone bad, or the details of secretive contracts? These could all be considered "trade secrets" and if the press can't publish them that would be a huge, huge problem.

In a later claim (under DC's specific trade secrets laws), the claims are extended to all defendants, which again raises serious First Amendment issues. Donald Trump Jr. may be a jerk, but it's not a violation of trade secrets if someone handed him secret DNC docs and he tweeted them or emailed them around.

There are also claims under Virginia's version of the CFAA. The claims against the Russians may make sense, but the complaint also makes claims against everyone else by claiming they "knowingly aided, abetted, encouraged, induced, instigated, contributed to and assisted Russia." Those seem like fairly extreme claims for many of the defendants, and again feel like the DNC very, very broadly interpreting a law to go way beyond what it should cover.

As noted above, there are some potentially legit claims in here around Russia hacking into the DNC's network (though, again, it's a useless defendant). But some of these other claims seem like incredible stretches, twisting laws like the DMCA for ridiculous purposes. And the trade secret claims against the non-Russians are highly suspect and almost certainly not a reasonable interpretation of the law under the First Amendment.

Read More | 136 Comments | Leave a Comment..

Posted on Free Speech - 20 April 2018 @ 3:33pm

Michael Cohen Drops Ridiculous Lawsuit Against Buzzfeed After Buzzfeed Sought Stormy Daniels' Details

from the fighting-fires dept

Donald Trump's long time lawyer, Michael Cohen has been in a bit of hot water of late. As you no doubt heard, the FBI raided Cohen's office and home seeking a bunch of information, some of which related to the $130,000 he paid to adult performer Stormy Daniels. Already there have been a few court appearances in which Cohen (and Donald Trump) have sought to suppress some of what's been seized, but that doesn't seem to be going too well. At the same time, Cohen is still fighting Daniels in court, which also doesn't seem to be going too well.

Given all of that, it's not too surprising that Cohen has decided to dismiss his ridiculous lawsuit against Buzzfeed for publishing the Christopher Steele dossier. As we pointed out, that lawsuit was going nowhere, because it sought to hold Buzzfeed liable for content created by someone else (oh, and that leaves out that much of what Cohen claimed was defamatory may actually have been true).

And while many are suggesting Cohen dropped that lawsuit because the other lawsuits are a much bigger priority, there may be another important reason as well. As we noted last month, through a somewhat complex set of circumstances, the lawsuit against Buzzfeed may have resulted in Cohen having to reveal the details he's been avoiding concerning Stormy Daniels. That's because Buzzfeed was claiming that Cohen's interactions with Daniels were relevant to its case, and it was likely to seek that information as part of the case moving forward.

In other words, in dropping the Buzzfeed lawsuit (which he was going to lose anyway), Cohen wasn't just ditching a distraction in the face of more important legal issues; he may be hoping to cut off at least one avenue for all the stuff he's been trying to keep secret from becoming public. That doesn't mean it won't become public eventually. After all, the DOJ has a bunch of it. But it does suggest that Cohen had more than one reason to drop the Buzzfeed lawsuit.

4 Comments | Leave a Comment..

Posted on Techdirt - 20 April 2018 @ 1:30pm

How Twitter Suspended The Account Of One Of Our Commenters... For Offending Himself?

from the come-on,-jack dept

If you spend any time at all in Techdirt's comments, you should be familiar with That Anonymous Coward. He's a prolific and regular commenter (with strong opinions). He also spends a lot of time on Twitter. Well, at least until a week or so ago when Twitter suspended his account. It's no secret that Twitter has been getting a lot of pressure from people to be more proactive in shutting down and cutting off certain accounts. There are even a bunch of people who claim that Twitter should suspend the President's account -- though we think that would be a really bad idea.

As we've pointed out in the past, people who demand that sites shut down and suspend accounts often don't realize how difficult it is to do this at scale and not fuck up over and over again. Indeed, we have plenty of stories about sites having trouble figuring out what content is really problematic. Frequently, these stories show that the targets of trolls and abusers are the ones who end up suspended.

You can read TAC's open letter to Jack Dorsey, which also includes an account of what happened. In short, over a year ago, TAC responded to something Ken "Popehat" White had tweeted, and referred to himself -- a gay man -- as "a faggot." Obviously, many people consider this word offensive. But it's quite obvious from how it was used here that this was a situation of someone using the word to refer to himself and to reclaim the slur.

Twitter then demanded that he delete the tweet and "verify" his phone number. TAC refused both requests. First, it was silly to delete the tweet because it's clearly not "hateful content" given the context. Second, as someone whose whole point is being "Anonymous," giving up his phone number doesn't make much sense. And, as he notes in his open letter, people have tried to sue him in the past. There's a reason he stays pseudonymous:

Why do I have to supply a cell phone number to get back on the platform? I've been a user for 5 years and have never used a cell phone to access your service. I am a nym, but I am an established nym. I own the identity & amazingly there are several hundred people following my nym. I interact with the famous & infamous, they tweet back to me sometimes. I survived a few lawsuits trying to get my real name from platforms, because I called Copyright Trolls extortionists... they were offended & tried to silence me with fear of lawsuits. I'm still a nym, they've been indicted by the feds. There are other Copyright Trolls who dislike me, so staying a nym is in my best interest.

TAC also points out the general inconsistencies in Twitter's enforcement, noting that other slurs are not policed, and even the slur that caused his account to be shut down (over a year after he used it) did not lead to other accounts facing the same issues.

Incredibly, TAC points out that he appealed the suspension... and Twitter's trust and safety team rejected the appeal. It was only on the second appeal -- and seven days later -- that Twitter recognized its mistake and restored his account.

Now, some may be quick to blame Twitter for this mess, but it again seems worth pointing out what an impossible situation this is. Platforms like Twitter are under tremendous pressure to moderate out "bad" content. But people have very little understanding of two important things: (1) the scale at which these platforms operate, and (2) how difficult it is to determine what's "bad" -- especially without full context. The only way to handle reports and complaints at scale is to either automate the process, hire a ton of people, or both. And no matter which choice you make, serious mistakes are going to be made. AI is notoriously bad at understanding context. People are under pressure to go through a lot of content very quickly to make quick judgments -- which also doesn't bode well for understanding context.

So, once again, we should be pretty careful what we ask for when we demand that sites be quicker about shutting down and suspending accounts. You might be surprised who actually has their accounts shut down. That's not to say sites should never suspend accounts, but the rush to pressure companies into doing so represents a fundamental misunderstanding of how such demands will be handled. TAC's week-long forced sabbatical is just a small example of those unintended consequences.

Read More | 89 Comments | Leave a Comment..

Posted on Techdirt - 20 April 2018 @ 11:55am

FOSTA/SESTA Passed Thanks To Facebook's Vocal Support; New Article Suggests Facebook Is Violating FOSTA/SESTA

from the self-own dept

One of the main reasons FOSTA/SESTA is now law is because of Facebook's vocal support for the bill. Sheryl Sandberg repeatedly spoke out in favor of the bill, misrepresenting what the bill actually did. In our own post-mortem on what happened with FOSTA/SESTA we noted that a big part of the problem was that many people inside Facebook (incredibly) did not appear to understand how CDA 230 works, and thus misunderstood how FOSTA/SESTA would create all sorts of problems. Last month, we noted that there was some evidence to suggest that Facebook itself was violating the law it supported.

However, a new article from Buzzfeed presents even more evidence of just how much liability Facebook may have put on itself in supporting the law. The article is fairly incredible, talking about how Facebook has allowed a group on its site that helps landlords seek out gay sex in exchange for housing -- and the report is chilling in how far it goes. In some cases, it certainly appears to reach the level of sex trafficking, where those desperate for housing basically become sex slaves to their landlords.

Today, in the first instalment of this series, we uncover some of the damage done to these young men – the sexual violence – by landlords, and reveal how they are being enabled by two major internet companies, one of which is Facebook. The world’s largest social media platform, BuzzFeed News can reveal, is hosting explicit posts from landlords promising housing in return for gay sex.

In multiple interviews with the men exchanging sex for rent and groups trying to deal with the crisis, BuzzFeed News also uncovered a spectrum of experiences that goes far beyond what has so far been documented, with social media, hook-up apps, and chemsex parties facilitating everything.

At best, impoverished young men are seeking refuge in places where they are at risk of sexual exploitation. At worst, teenagers are being kept in domestic prisons where all personal boundaries are breached, where their lives are in danger.

I've seen multiple people point out -- accurately -- that the article's focus on Facebook here is a little silly. The real focus should be on the "landlords" who are seeking out and taking advantage of desperate young men in need of a place to live. But, given that the focus is on Facebook, it certainly appears that Facebook has the knowledge required to be in violation of FOSTA/SESTA:

Despite the explicit nature of the postings on the group’s site, the administrator told BuzzFeed News that Facebook has not intervened. “We have never had an incident from Facebook,” he said. “If they [members] want to post something that will not fly with Facebook I write them, and tell them what needs to be changed.”

This has not stopped explicit notices being posted.

When approached by BuzzFeed News to respond to issues relating to this group, Facebook initially replied promising that a representative would comment. That response, however, did not materialise, despite several attempts by BuzzFeed News, over several days, to invite Facebook to do so. A week after first contacting the social media company, the group remains on its site.

It still seems wrong to blame Facebook for what the horrific landlords are doing here, but, hey, FOSTA/SESTA is now the law, and it's the law thanks in large part to Facebook's strong support for it. So, given all of this, will Facebook now face legal action, either from the victims of this group or from law enforcement?

11 Comments | Leave a Comment..

Posted on Techdirt - 20 April 2018 @ 9:37am

Sex Workers Set Up Their Own Social Network In Response To FOSTA/SESTA; And Now It's Been Shut Down Due To FOSTA/SESTA

from the censorship-at-work dept

Just a few weeks ago we wrote about how a group of sex workers, in response to the passing of FOSTA/SESTA, had set up their own social network, called Switter, which was a Mastodon instance. As we noted in our post, doing so was unlikely to solve any of the problems of FOSTA/SESTA, because it's perhaps even more likely that Switter itself would become a target of FOSTA/SESTA (remember, with FOSTA, the targeting goes beyond "sex trafficking" to all prostitution).

And, indeed, it appears I was not the only one to think so. The organization that created Switter, Assembly Four, put up a note saying that Cloudflare had shut down Switter claiming the site was in violation of its terms of service.

Cloudflare has been made aware that your site is in violation of our published Terms of Service. Pursuant to our published policy, Cloudflare will terminate service to your website.

Cloudflare will terminate your service for switter{.}at by disabling our authoritative DNS.

Assembly Four asked Cloudflare to clarify just what term it had violated, and the company has now come out and noted that it reluctantly pulled the plug on Switter out of a fear that it would create criminal liability for Cloudflare under FOSTA/SESTA. Cloudflare was among the companies that lobbied against the bill, and it notes that it disagrees with the way the bill was drafted -- but given the nature of the law, the company feels compelled to take this action:

“[Terminating service to Switter] is related to our attempts to understand FOSTA, which is a very bad law and a very dangerous precedent,” he told me in a phone conversation. “We have been traditionally very open about what we do and our roles as an internet infrastructure company, and the steps we take to both comply with the law and our legal obligations—but also provide security and protection, let the internet flourish and support our goals of building a better internet.”

Remember, this was a site for sex workers to communicate with each other. It was purely a platform for speech. And it's being shut down because of fears from the vague and poorly drafted FOSTA/SESTA bill. In other words, yet more confirmation that just as free speech experts predicted, FOSTA/SESTA would lead to outright suppression of speech.

I've seen some complaints on Twitter that Cloudflare should have stood up for Switter and not done this. I don't think that's reasonable. The penalties under FOSTA/SESTA are not just fines. It's a criminal statute. It's one thing to take a stand when you're facing monetary damages or something of that nature. It's something altogether different when you're asking a company to stand up to criminal charges based on a law that is incredibly vague and broad, and for which there is no caselaw. Yes, it would be nice to have some companies push back and potentially help to invalidate the law as unconstitutional, but you can't demand that of every company.

I am curious, though, how supporters of FOSTA/SESTA react to this. Do they not care that sex workers want to be able to communicate? Do they not care that social networks are being shut down over this? Do they not care about speech being suppressed?

44 Comments | Leave a Comment..

Posted on Techdirt - 20 April 2018 @ 3:23am

Bad Decisions: Google Screws Over Tools Evading Internet Censorship Regimes

from the who's-fronting-now? dept

Just as places like Russia are getting more aggressive with companies like Google and Amazon in seeking to stop online communications they can't monitor, Google made a move that really fucked over a ton of people who rely on anti-censorship tools. For years, various anti-censorship tools from Tor to GreatFire to Signal have made use of "domain fronting." That's a process by which services could get around censorship by effectively appearing to send traffic via large companies' sites, such as Google's. The link above describes the process as follows:

Domain fronting works at the application layer, using HTTPS, to communicate with a forbidden host while appearing to communicate with some other host, permitted by the censor. The key idea is the use of different domain names at different layers of communication. One domain appears on the “outside” of an HTTPS request—in the DNS request and TLS Server Name Indication—while another domain appears on the “inside”—in the HTTP Host header, invisible to the censor under HTTPS encryption. A censor, unable to distinguish fronted and nonfronted traffic to a domain, must choose between allowing circumvention traffic and blocking the domain entirely, which results in expensive collateral damage. Domain fronting is easy to deploy and use and does not require special cooperation by network intermediaries. We identify a number of hard-to-block web services, such as content delivery networks, that support domain-fronted connections and are useful for censorship circumvention. Domain fronting, in various forms, is now a circumvention workhorse.
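To make those two layers concrete, here's a minimal sketch (in Python, using hypothetical placeholder domains) of what a censor can and cannot observe in a domain-fronted request:

```python
# Build the two layers of a domain-fronted HTTPS request.
# Both domains are hypothetical placeholders, not real fronting endpoints.
FRONT_DOMAIN = "allowed.example.com"    # visible to the censor: DNS + TLS SNI
HIDDEN_DOMAIN = "blocked.example.net"   # hidden inside TLS: HTTP Host header

def outer_layer(front):
    # What the censor can observe: the DNS lookup and the
    # TLS Server Name Indication, both naming only the front domain.
    return {"dns_query": front, "tls_sni": front}

def inner_layer(hidden, path="/"):
    # What travels under TLS encryption: the real destination,
    # carried in the HTTP Host header.
    return f"GET {path} HTTP/1.1\r\nHost: {hidden}\r\n\r\n"

visible = outer_layer(FRONT_DOMAIN)
request = inner_layer(HIDDEN_DOMAIN)

# The censor sees only the front domain in both observable fields...
assert HIDDEN_DOMAIN not in str(visible)
# ...while the true destination rides inside the encrypted request.
assert f"Host: {HIDDEN_DOMAIN}" in request
```

This is why the censor's only options are the ones the paper describes: allow the circumvention traffic, or block the front domain entirely and eat the collateral damage.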

In short, because most countries are reluctant to block all of Google, the ability to use Google for domain fronting was incredibly useful in getting around censorship. And now it's gone. Google claims that it never officially supported it, that this was a result of a planned update, and it has no intention of bringing it back:

“Domain fronting has never been a supported feature at Google,” a company representative said, “but until recently it worked because of a quirk of our software stack. We’re constantly evolving our network, and as part of a planned software update, domain fronting no longer works. We don’t have any plans to offer it as a feature.”

As Ars Technica notes, companies like Google may be concerned that it could lead to larger blocks that could harm customers. But, as Access Now points out, there are larger issues at stake, concerning individuals who are put at risk through such censorship:

“As a repository and organizer of the world’s information, Google sees the power of access to knowledge. Likewise, the company understands the many ingenious ways that people evade censors by piggybacking on its networks and services. There’s no ignorance excuse here: Google knows this block will levy immediate, adverse effects on human rights defenders, journalists, and others struggling to reach the open internet,” said Peter Micek, General Counsel at Access Now. “To issue this decision with a shrug of the shoulders, disclaiming responsibility, damages the company’s reputation and further fragments trust online broadly, for the foreseeable future.”

“Google has long claimed to support internet freedom around the world, and in many ways the company has been true to its beliefs. Allowing domain fronting has meant that potentially millions of people have been able to experience a freer internet and enjoy their human rights. We urge Google to remember its commitment to human rights and internet freedom and allow domain fronting to continue,” added Nathan White, Senior Legislative Manager at Access Now.

Google doesn't need to support domain fronting, and there are reasonable business reasons for not doing so. But... there are also strong human rights reasons why the company should reconsider. In the past, Google has taken principled stands on human rights. This is another time that it should seriously consider doing so.

27 Comments | Leave a Comment..

Posted on Techdirt - 19 April 2018 @ 12:09pm

Of Course The RIAA Would Find A Way To Screw Over The Public In 'Modernizing' Copyright

from the modernization-for-us,-but-not-for-you dept

I haven't had a chance to write much about the latest attempt to update copyright law in the US, under the title of the "Music Modernization Act," but in part that was because Congress did something amazing: it came up with a solution to modernizing some outdated aspects of copyright law that almost everyone agreed was a pretty decent improvement. The crux of the bill was making music licensing easier and much clearer, which is very much needed, given what a complete shit show music licensing is today.

There was a chance to have this actually create a nice solution that would help artists, help online music services and generally make more works available to the public. It was a good thing. But... leave it to the RIAA to fuck up a good thing. You see, with there being pretty much universal support for the Music Modernization Act, the RIAA stepped in and pushed for it to be combined with a different copyright reform, known as the "CLASSICS Act."

What is the CLASSICS Act? Well, it's actually based on a good idea -- fixing the mess that is pre-1972 sound recordings. We've written about this for years, and without getting too deep into the weeds, the basic thing is that prior to February of 1972, sound recordings were not covered by federal copyright. Compositions were still protected, but not the actual recording. To deal with that, various states set up their own state-based copyright laws for those works -- sometimes in statute, sometimes through common law. But, as part of the "transition" of bringing sound recordings into federal copyright, Congress also (ridiculously) said that sound recordings prior to 1972 would remain under whatever ridiculous state copyright laws existed until 2047. And thanks to Sonny Bono, that got pushed back to 2067. As Public Knowledge points out, that's created a ridiculous situation, keeping important works out of the public domain for nearly two centuries:

State copyrights are, for all intents and purposes, indefinite. Back in 1972 -- when Congress first federalized (new) sound recordings -- Congress sought to “fix” this problem by declaring that all state copyrights in pre-’72 sound recordings would expire on February 15, 2047. They picked 2047 so that recordings made immediately prior to the new law’s passage (i.e. the last sound recordings protected only by state copyright) would be kept out of the public domain for a full 75 years, the same as their newer, federally-protected counterparts.

However, because that 2047 date applied indiscriminately to all sound recordings made before 1972, recordings ended up with mind-boggling terms of potential state protection. Thomas Edison’s original sound recordings, made in 1877, wouldn’t be guaranteed to enter the public domain until 170 years after it was first created. Congress doubled down on this decision in 1998, pushing the date back another 20 years, to 2067. That Edison recording now is kept out of the public domain for 190 years -- enough to provide theoretical royalties to eight generations of the original artist’s descendants. As a result, with a few exceptions (mostly when artists have affirmatively committed their works to the public domain) there are no sound recordings in the public domain in the United States, period.
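The term arithmetic in that quote is easy to verify. A quick sketch (the dates come from the quote above; the helper function is just for illustration):

```python
# Check the effective state-copyright terms Public Knowledge describes.
FIRST_SUNSET = 2047   # expiration date Congress set in 1972
BONO_SUNSET = 2067    # after the 1998 extension (Sonny Bono Act)

def effective_term(year_recorded, sunset_year):
    # Every pre-1972 recording shares a single sunset date, so the
    # older the recording, the longer its effective term of protection.
    return sunset_year - year_recorded

# A recording from just before the February 1972 cutoff gets the
# intended 75 years...
assert effective_term(1972, FIRST_SUNSET) == 75
# ...but Edison's 1877 recordings, under the original 2047 date,
# get 170 years...
assert effective_term(1877, FIRST_SUNSET) == 170
# ...and after the 1998 extension to 2067, a full 190 years.
assert effective_term(1877, BONO_SUNSET) == 190
```
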

As we've noted for years, the proper way to fix this is just to put pre-1972 sound recordings under federal copyright law, and give them the same public domain date they would have received if they had been covered by federal copyright law all along. This is not, by any means, a perfect solution. It has some additional drawbacks, but at a high level, it puts all sound recordings on a level playing field and makes sure that there isn't confusion over different treatment between a song recorded in March 1972 and one recorded in March 1971.

But that's not what the CLASSICS Act does. While it claims to put those works on the same footing as federal copyright law, it only does that part way. It sets it up so that streaming service providers will now have to pay performance royalties on the pre-1972 works (after a bunch of court rulings -- though not all -- have suggested they don't need to pay such fees). Again, that's fine if it puts all the works on the same level playing field. But the CLASSICS Act doesn't quite do that. It just adds the "pay the RIAA" part, and leaves out the "oh yeah, and let these works go into the public domain on the same schedule as all other works" part. In other words, under the CLASSICS Act, these royalties will have to be paid way beyond when those works should go into the public domain.

Public Knowledge further points out that the CLASSICS Act also ignores termination rights, which would benefit artists (but hurt the labels) and could also cause serious harm to archives and libraries:

Most of the protections that libraries, archives, and other nonprofits rely upon apply specifically to reproduction and distribution of works, but not to their public performance. CLASSICS/MMA 2 creates a federal right that covers performance, but not reproduction or distribution -- those parts of the copyright regime that serve as the lifeblood of these institutions.

EFF highlights some more issues with the CLASSICS Act, including how it would basically lock in existing providers like Pandora, Spotify and Sirius XM, but cause problems for any upstart:

Creating new barriers to the use of old creative works is not what copyright is for. Copyright is a bargain: authors and artists get limited, exclusive rights over their works as an incentive to create. In return, the public is enriched by new art and authorship and can use works in ways and times that fall outside the rightsholder’s zone of exclusivity. Creating new rights in recordings that have been around for 46 years or more doesn’t create any new incentives. It simply creates a new subsidy for rightsholders, most of whom are not the recording artists. The CLASSICS Act gives nothing back to the public. It doesn’t increase access to pre-1972 recordings, which are already played regularly on Internet radio stations. And it doesn’t let the public use these recordings without permission any sooner: state copyrights will continue until 2067, when federal law takes over fully.

The CLASSICS Act will put today’s digital music giants like Pandora and Sirius XM in a privileged position. Many of them already pay royalties for some pre-72 recordings as part of private agreements with record labels, on terms that simply won’t be available to smaller Web streamers like college and independent radio stations.

Unfortunately, this combined Frankenstein of a bill -- with both the good stuff and the bad -- sailed through the Judiciary Committee this week.

You will, undoubtedly, see stories celebrating this bill moving forward. And many of them will make accurate statements about how parts of this bill are really good. But parts of it are really bad and damaging to the public domain. The proper response would be to fix the CLASSICS Act such that it actually modernizes pre-1972 works by putting them under federal copyright law, rather than this half-assed way that only adds in the licensing requirements, without the rest of what copyright law is supposed to bring us.

23 Comments | Leave a Comment..

Posted on Techdirt - 19 April 2018 @ 9:40am

France Testing Out Special Encrypted Messenger For Gov't Officials As It Still Seeks To Backdoor Everyone Else's Encryption

from the roll-yer-own dept

The French government has been pushing for a stupid "backdoors" policy in encryption for quite some time. A couple years ago, following various terrorist attacks, there was talk of requiring backdoors to encrypted communications, and there was even a bill proposed that would jail execs who refused to decrypt data. Current President Emmanuel Macron has come out in favor of backdoors as well, even as he's a heavy user of Telegram (which isn't considered particularly secure encryption in the first place).

But now, the French government is apparently moving forward with its own, homegrown, encrypted messaging system, out of a fear that other -- non-French -- encrypted messaging apps will be forced into providing backdoors to their own systems:

The French government is building its own encrypted messenger service to ease fears that foreign entities could spy on private conversations between top officials, the digital ministry said on Monday.

None of the world’s major encrypted messaging apps, including Facebook’s WhatsApp and Telegram - a favorite of President Emmanuel Macron - are based in France, raising the risk of data breaches at servers outside the country.

There are a number of silly things here. First off, the fact that they're doing this should make it clear why it's been so stupid to have the government itself calling for backdoors. Clearly, the French government understands the risks involved, or it wouldn't be doing this in the first place. The message it seems to be sending is that keeping messages and communications secure is important... but only for government officials. For the peasants? Let them eat insecure messages, I guess.

Second, there should be questions about how well this will be implemented. The report does note that they're using "free-to-use code found on the Internet," which (hopefully?) means they're basing it on Open Whisper Systems' encrypted messaging code, which is freely available and is generally considered the gold standard (Update: actually it's based on Riot/Matrix and apparently the plan is to open source it -- which is good). However, doing encrypted messaging well is... difficult. It's the kind of thing that lots of people -- even experts -- get wrong. Rolling your own can often get messy, and you have to bet that a government rolling its own encryption for government officials to use is going to be a clear target for nation-state level hackers to try to break in. That's not to say it can't be done, but there are a lot of tradeoffs here, and I'm not sure that the best encryption is going to come from a government employee.

Also, the report suggests that this technology "could be eventually made available to all citizens," which would certainly be interesting, but would seem to contradict all of those reports and statements about demanding backdoored encryption. Given how often the French government (and the President) have asked for backdoors, would any French citizen ever feel particularly secure using an "encrypted" messaging system offered up by that same French government?

16 Comments | Leave a Comment..

Posted on Techdirt - 18 April 2018 @ 12:02pm

Reminder: Fill Out Your Working Futures Survey And Help Define The Future Of Work

from the future-of-work dept

As a reminder, our Working Futures scenario planning game around the future of work question is in full swing. If you haven't yet filled out our survey, please do so soon. There have been some great, thoughtful and insightful ideas provided so far, and it's already shaped some of how we'll be proceeding. We've been hard at work designing the specifics of how the "game" part of this will work, with our first workshop to be held next week. While that event is invite only, we still have a few open seats -- so if you'll be in San Francisco next week and think you have something you can add to this discussion, feel free to request an invite via the website. The event itself will be an interactive, guided game for developing a bunch of scenarios. Once we've had a chance to go through the results, we'll begin sharing some of the details -- but the overall results will only get better if you participate as well -- so go fill out the survey.

10 Comments | Leave a Comment..

Posted on Techdirt - 18 April 2018 @ 9:39am

Stupid Copyright: MLB Shuts Down Twitter Account Of Guy Who Shared Cool MLB Gifs

from the you're-not-helping dept

Another day, another story of copyright gone stupid. This time it involves Major League Baseball, which is no stranger to stupid copyright arguments. Going back fifteen years, we wrote about Major League Baseball claiming that other websites couldn't even describe professional baseball games. There was a legal fight over this and MLB lost. A decade ago, MLB was shutting down fan pages for doing crazy things like "using a logo" of their favorite sports team. And, of course, like all major professional sports leagues, MLB has long engaged in copyfraud by claiming that "any account of this game, without the express written consent of Major League Baseball is prohibited", which is just false. MLB has also made up ridiculous rules about how much reporters can post online at times, restricting things that they have no right to actually restrict.

The latest seems particularly stupid. It follows some sort of silly spat in which a guy named Kevin Clancy at Barstool Sports (the same brainiacs who wanted to sue the NFL for having sorta, not really, similar merchandise) got pissed off at a popular Twitter account called @PitchingNinja, run by a guy named Rob Friedman, who would tweet out GIFs and videos of interesting pitches from MLB games. Apparently, the dudebro Clancy from Barstool Sports pointed out that Friedman was violating the made-up rules that MLB has on how much someone is allowed to share on social media, leading a ton of Clancy's fans to "report" Friedman. Twitter shut down Friedman's account -- leading said dudebro, Clancy, to celebrate.

In a podcast interview with that very same Barstool Sports -- the outfit that got his account shut down -- Friedman notes that "there's such a thing as fair use." Indeed, his use of images and videos appears to be fairly obviously fair use. Since we can't see his account while it's suspended, we'll go off of the Yahoo Sports description of the @PitchingNinja account:

Nearly every Rob Friedman tweet arrives offering four things: a baseball player’s name, a pitch he has thrown, an adjective to describe that pitch and a short video clip to illustrate it. Changeups are “ridiculous,” and fastballs are “absurd,” and sliders are “nasty,” and sometimes they’re “disgusting” and “filthy” and “obscene” and every other sort of visceral descriptor, too. Friedman is best known as @PitchingNinja, and his nearly 50,000 followers relish his ability to curate baseball’s deep cuts – the sort of physics-bending pitches average fans may not notice but ones in which pitching nerds luxuriate.

So, going through a quick four-factor test: Friedman is adding commentary, using a tiny amount of a game, not doing this for any commercial advantage and, if anything, increasing the market for MLB's product. It seems like a pretty clear-cut fair use situation. MLB has told Yahoo that they expect to come to some sort of agreement to let Friedman back on Twitter:

League sources told Yahoo Sports that they expect to “quickly and easily” reach a resolution with Friedman that would allow him to continue posting pitching GIFs. In a letter to the league official who filed the DMCA complaint, Friedman, a lawyer by trade, outlined his argument on how what he does benefits the league.

But, of course, it's bullshit that they should even need to do this in the first place. The whole point of fair use is that you don't need permission, and you don't need to reach an agreement. And yet, according to Yahoo, MLB seems to think it needs to come to an agreement with Friedman over what is fair use:

MLB plans to contact Friedman in the coming days, if not sooner, at which point they are likely to agree on what constitutes fair use.

But, they don't need to agree. The law says what fair use is, and MLB doesn't get to change that to suit their own whims.

Friedman also told Yahoo the following:

“I also understand that MLB has every right to protect its product,” he wrote in the email, which he shared with Yahoo Sports. “I’m most certainly not trying to deprive MLB of any value, instead I’m trying to create value by helping pitchers have a sense of community, learn, and appreciate the game. Rather than debate the legal matter, I am more than happy to give MLB all of my gifs for free or work out some other content deal that just allows me to use MLB content, as permitted, for fair use, to help pitchers, coaches, and fans understand the game. I would be happy to donate any content for free and execute a copyright license ensuring that MLB owns any gifs I create.”

That's... weird. MLB already owns the copyright to the videos. Fair use is what lets Friedman make use of them without needing a license. So I'm not sure what he's talking about licensing them back to MLB. That doesn't really make much sense. But, you still see the underlying point that he's making, which is that he's building more interest in the game, and he's not trying to claim any ownership or make any money from what he's doing, it's just for the love of sharing the game and educating people. Which, you know, is the kind of thing that fair use is explicitly designed to enable.

And, of course, no one should let Twitter off the hook here for suspending Friedman's account. Twitter could have (and should have) rejected the DMCA notices and pointed out that the @PitchingNinja account was engaging in fair use. Instead, it shut down the account, and once again showed how copyright is regularly abused for censorship, rather than any legitimate purpose under the Copyright Act.

18 Comments | Leave a Comment..

Posted on Free Speech - 17 April 2018 @ 12:04pm

How Government Pressure Has Turned Transparency Reports From Free Speech Celebrations To Censorship Celebrations

from the this-is-not-good dept

For many years now, various internet companies have released Transparency Reports. The practice was started by Google years back (oddly, Google itself fails me in finding its original transparency report). Soon many other internet companies followed suit, and, while it took them a while, the telcos eventually joined in as well. Google's own Transparency Report site lists out a bunch of other companies that now issue such reports:

We've celebrated many of these transparency reports over the years, as they often demonstrate the excesses of attempts to stifle and censor speech or violate users' privacy, and often create incentives for these organizations to push back against those demands. Yet, in an interesting article over at Politico, a former Google policy manager warns that the purpose of these reports is being flipped on its head, and that they're now being used to show how much these platforms are willing to censor:

Fast forward a decade and democracies are now agonizing over fake news and terrorist propaganda. Earlier this month, the European Commission published a new recommendation demanding that internet companies remove extremist and other objectionable content flagged to them in less than an hour — or face legislation forcing them to do so. The Commission also endorsed transparency reports as a way to demonstrate how they are complying with the law.

Indeed, Google and other big tech companies still publish transparency reports, but they now seem to serve a different purpose: to convince authorities in Europe and elsewhere that the internet giant is serious about cracking down on illegal content. The more takedowns it can show, the better.

If true, this is a pretty horrific result of something that should be a good thing: more transparency, more information sharing and more incentives to make sure that bogus attempts to stifle speech and invade people's privacy are not enabled.

Part of the issue, of course, is that governments have been increasingly putting pressure on internet platforms to take down speech, and blaming internet platforms for election results or policies they dislike. The companies then feel the need to show governments that they take these "issues" seriously by pointing to the content they do take down. So, rather than alerting the public to all the stuff they don't take down, the platforms are signaling to governments (and some in the public too, frankly) that they frequently take down content. And, unfortunately, that's backfiring, as it's leading politicians (and some individuals) to claim that this just proves the platforms aren't censoring enough.

The pace of private sector censorship is astounding — and it’s growing exponentially.

The article talks about how this is leading to censorship of important and useful content, such as the case where an exploration of the dangers of Holocaust revisionism got taken down because YouTube feared that a look into it might actually violate European laws against Holocaust revisionism. And, of course, such censorship machines are regularly abused by authoritarian governments:

Turkey demands that internet companies hire locals whose main task is to take calls from the government and then take down content. Russia reportedly is threatening to ban YouTube unless it takes down opposition videos. China’s Great Firewall already blocks almost all Western sites, and much domestic content.

Similarly, a recent report on Facebook's censorship of accounts documenting ethnic cleansing in Burma is incredibly disturbing:

Rohingya activists—in Burma and in Western countries—tell The Daily Beast that Facebook has been removing their posts documenting the ethnic cleansing of Rohingya people in Burma (also known as Myanmar). They said their accounts are frequently suspended or taken down.

That article has many examples of the kind of content that Facebook is pulling down and notes that in Burma, people rely on Facebook much more than in some other countries:

Facebook is an essential platform in Burma; since the country’s infrastructure is underdeveloped, people rely on it the way Westerners rely on email. Experts often say that in Burma, Facebook is the internet—so having your account disabled can be devastating.

You can argue that there should be other systems for them to use, but the reality of the situation right now is they use Facebook, and Facebook is deleting reports of ethnic cleansing.

Having democratic governments turn around and enable more and more of this in the name of stopping "bad" speech is acting to support these kinds of crackdowns.

Indeed, as Europe is pushing for more and more use of platforms to censor, it's important that someone gets them to understand how these plans almost inevitably backfire. Daphne Keller at Stanford recently submitted a comment to the EU about its plan, noting just how badly demands for censorship of "illegal content" can turn around and do serious harm.

Errors in platforms’ CVE content removal and police reporting will foreseeably, systematically, and unfairly burden a particular group of Internet users: those speaking Arabic, discussing Middle Eastern politics, or talking about Islam. State-mandated monitoring will, in this way, exacerbate existing inequities in notice and takedown operations. Stories of discriminatory removal impact are already all too common. In 2017, over 70 social justice organizations wrote to Facebook identifying a pattern of disparate enforcement, saying that the platform applies its rules unfairly to remove more posts from minority speakers. This pattern will likely grow worse in the face of pressures such as those proposed in the Recommendation.

There are longer term implications of all of this, and plenty of reasons why we should be thinking about structuring the internet in better ways to protect against this form of censorship. But the short term reality remains, and people should be wary of calling for more platform-based censorship over "bad" content without recognizing the inevitable ways in which such policies are abused or misused to target the most vulnerable.

