Mike Masnick’s Techdirt Profile


About Mike MasnickTechdirt Insider

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick

Posted on Techdirt - 20 April 2018 @ 7:39pm

Democratic National Committee's Lawsuit Against Russians, Wikileaks And Various Trump Associates Full Of Legally Nutty Arguments

from the slow-down-there-dnc dept

This morning I saw a lot of excitement and happiness from folks who greatly dislike President Trump over the fact that the Democratic National Committee had filed a giant lawsuit against Russia, the GRU, Guccifer 2.0, Wikileaks, Julian Assange, the Trump campaign, Donald Trump Jr., Jared Kushner, Paul Manafort, Roger Stone and a few other names you might recognize if you've followed the whole Trump / Russia soap opera over the past year and a half. My first reaction was that this was unlikely to be the kind of thing we'd cover on Techdirt, because it seemed like a typical political thing. But, then I looked at the actual complaint and it's basically a laundry list of the laws that we regularly talk about (especially about how they're abused in litigation). Seriously, look at the complaint. There's a CFAA claim, an SCA claim, a DMCA claim, a "Trade Secrets Act" claim... and everyone's favorite: a RICO claim.

Most of the time when we see these laws used, they're indications of pretty weak lawsuits, and going through this one, that definitely seems to be the case here. Indeed, some of the claims made by the DNC here are so outrageous that they would effectively make some fairly basic reporting illegal. One would have hoped that the DNC wouldn't seek to set a precedent that reporting on leaked documents is against the law -- especially given how reliant the DNC now is on leaks being reported on in its effort to bring down the sitting president. I'm not going to go through the whole lawsuit, but let's touch on a few of the more nutty claims here.

The crux of the complaint is that these groups / individuals worked together in a conspiracy to leak DNC emails and documents. And, there's little doubt at this point that the Russians were behind the hack and leak of the documents, and that Wikileaks published them. Similarly there's little doubt that the Trump campaign was happy about these things, and that a few Trump-connected people had some contacts with some Russians. Does that add up to a conspiracy? My gut reaction is to always rely on Ken "Popehat" White's IT'S NOT RICO, DAMMIT line, but I'll leave that analysis to folks who are more familiar with RICO.

But let's look at parts we are familiar with, starting with the DMCA claim, since that's the one that caught my eye first. A DMCA claim? What the hell does copyright have to do with any of this? Well...

Plaintiff's computer networks and files contained information subject to protection under the copyright laws of the United States, including campaign strategy documents and opposition research that were illegally accessed without authorization by Russia and the GRU.

Access to copyrighted material contained on Plaintiff's computer networks and email was controlled by technological measures, including measures restricting remote access, firewalls, and measures restricting access to users with valid credentials and passwords.

In violation of 17 U.S.C. § 1201(a), Russia, the GRU, and GRU Operative #1 circumvented these technological protection measures by stealing credentials from authorized users, conducting a "password dump" to unlawfully obtain passwords to the system controlling access to the DNC's domain, and installing malware on Plaintiff's computer systems.

Holy shit. This is the DNC trying to use DMCA 1201 as a mini-CFAA. They're not supposed to do that. 1201 is the anti-circumvention part of the DMCA and is supposed to be about stopping people from hacking around DRM to free copyright-covered material. Of course, 1201 has been used in all sorts of other ways -- like trying to stop the sale of printer cartridges and garage door openers -- but this seems like a real stretch. Russia hacking into the DNC had literally nothing to do with copyright or DRM. Squeezing a copyright claim in here is just silly and could set an awful precedent about using 1201 as an alternate CFAA (we'll get to the CFAA claims in a moment). If this holds, nearly any computer break-in to copy content would also lead to DMCA claims. That's just silly.

Onto the CFAA part. As we've noted over the years, the Computer Fraud and Abuse Act is quite frequently abused. Written in response to the movie War Games to target "hacking," the law has been used for basically any "this person did something we dislike on a computer" type of issue. It's been dubbed "the law that sticks" because, in the absence of any other claims, that one always sticks thanks to how broad it is.

At least this case does involve actual hacking. Someone really did break into the DNC's network, so (amazingly) this may be one case where the CFAA claims are legit. Those claims target only the Russians, who were the ones who actually hacked the DNC. So I'm fine with those claims -- other than the fact that they're useless. It's not like the Russian Federation or the GRU is going to show up in court to defend this. And they're certainly not going to agree to discovery. I doubt they'll acknowledge the lawsuit at all, frankly. So... reasonable claims, impossible target.

Then there's the Stored Communications Act (SCA), part of ECPA, the Electronic Communications Privacy Act, which we've written about a ton and which has plenty of problems of its own. These claims are also made just against Russia, the GRU and Guccifer 2.0, and like the DMCA claims they appear to be highly repetitive of the CFAA claims. Instead of just unauthorized access, it's now unauthorized access... to communications.

It's when we get to the trade secrets part that things get... much more problematic. These claims are brought against not just the Russians, but also Wikileaks and Julian Assange. Even if you absolutely hate and / or distrust Assange, these claims are incredibly problematic against Wikileaks.

Defendants Russia, the GRU, GRU Operative #1, WikiLeaks, and Assange disclosed Plaintiff's trade secrets without consent, on multiple dates, discussed herein, knowing or having reason to know that trade secrets were acquired by improper means.

If that violates the law, then the law is unconstitutional. The press regularly publishes trade secrets that may have been acquired by improper means by others and handed to the press (as is the case with this content being handed to Wikileaks). Saying that merely disclosing the information is a violation of the law raises serious First Amendment issues for the press.

I mean, what's to stop President Trump from using the very same argument against the press for revealing, say, his tax returns? Or reports about business deals gone bad, or the details of secretive contracts? These could all be considered "trade secrets" and if the press can't publish them that would be a huge, huge problem.

In a later claim (under DC's specific trade secrets laws), the claims are extended to all defendants, which again raises serious First Amendment issues. Donald Trump Jr. may be a jerk, but it's not a violation of trade secrets if someone handed him secret DNC docs and he tweeted them or emailed them around.

There are also claims under Virginia's version of the CFAA. The claims against the Russians may make sense, but the complaint also makes claims against everyone else by claiming they "knowingly aided, abetted, encouraged, induced, instigated, contributed to and assisted Russia." Those seem like fairly extreme claims for many of the defendants, and again feel like the DNC very, very broadly interpreting a law to go way beyond what it should cover.

As noted above, there are some potentially legit claims in here around Russia hacking into the DNC's network (though, again, it's a useless defendant). But some of these other claims seem like incredible stretches, twisting laws like the DMCA for ridiculous purposes. And the trade secret claims against the non-Russians are highly suspect and almost certainly not a reasonable interpretation of the law under the First Amendment.


Posted on Free Speech - 20 April 2018 @ 3:33pm

Michael Cohen Drops Ridiculous Lawsuit Against Buzzfeed After Buzzfeed Sought Stormy Daniels' Details

from the fighting-fires dept

Donald Trump's longtime lawyer, Michael Cohen, has been in a bit of hot water of late. As you no doubt heard, the FBI raided Cohen's office and home seeking a bunch of information, some of which related to the $130,000 he paid to adult performer Stormy Daniels. Already there have been a few court appearances in which Cohen (and Donald Trump) have sought to suppress some of what's been seized, but that doesn't seem to be going too well. At the same time, Cohen is still fighting Daniels in court, which also doesn't seem to be going too well.

Given all of that, it's not too surprising that Cohen has decided to dismiss his ridiculous lawsuit against Buzzfeed for publishing the Christopher Steele dossier. As we pointed out, that lawsuit was going nowhere, because it sought to hold Buzzfeed liable for content created by someone else (oh, and that leaves out that much of what Cohen claimed was defamatory may actually have been true).

And while many are suggesting Cohen dropped that lawsuit because the other lawsuits are a much bigger priority, there may be another important reason as well. As we noted last month, through a somewhat complex set of circumstances, the lawsuit against Buzzfeed may have resulted in Cohen having to reveal the details he's been avoiding concerning Stormy Daniels. That's because Buzzfeed was claiming that Cohen's interactions with Daniels were relevant to its case, and it was likely to seek that information as part of the case moving forward.

In other words, in dropping the Buzzfeed lawsuit (which he was going to lose anyway), Cohen wasn't just ditching a distraction in the face of more important legal issues; he may be hoping to cut off at least one avenue for all the stuff he's been trying to keep secret from becoming public. That doesn't mean it won't become public eventually. After all, the DOJ has a bunch of it. But it does suggest that Cohen had more than one reason to drop the Buzzfeed lawsuit.


Posted on Techdirt - 20 April 2018 @ 1:30pm

How Twitter Suspended The Account Of One Of Our Commenters... For Offending Himself?

from the come-on,-jack dept

If you spend any time at all in Techdirt's comments, you should be familiar with That Anonymous Coward. He's a prolific and regular commenter (with strong opinions). He also spends a lot of time on Twitter. Well, at least until a week or so ago when Twitter suspended his account. It's no secret that Twitter has been getting a lot of pressure from people to be more proactive in shutting down and cutting off certain accounts. There are even a bunch of people who claim that Twitter should suspend the President's account -- though we think that would be a really bad idea.

As we've pointed out in the past, people who demand that sites shut down and suspend accounts often don't realize how difficult it is to do this at scale and not fuck up over and over again. Indeed, we have plenty of stories about sites having trouble figuring out what content is really problematic -- and frequently those stories show that the targets of trolls and abusers are the ones who end up suspended.

You can read TAC's open letter to Jack Dorsey, which also includes an account of what happened. In short, over a year ago, TAC responded to something Ken "Popehat" White had tweeted, and referred to himself -- a gay man -- as "a faggot." Obviously, many people consider this word offensive. But it's quite obvious from how it was used here that this was a situation of someone using the word to refer to himself and to reclaim the slur.

Twitter then demanded that he delete the tweet and "verify" his phone number. TAC refused both requests. First, deleting the tweet would be silly because it's clearly not "hateful content" given the context. Second, as someone whose whole point is being "Anonymous," giving up his phone number doesn't make much sense. And, as he notes in his open letter, people have tried to sue him in the past. There's a reason he stays pseudonymous:

Why do I have to supply a cell phone number to get back on the platform? I've been a user for 5 years and have never used a cell phone to access your service. I am a nym, but I am an established nym. I own the identity & amazingly there are several hundred people following my nym. I interact with the famous & infamous, they tweet back to me sometimes. I survived a few lawsuits trying to get my real name from platforms, because I called Copyright Trolls extortionists... they were offended & tried to silence me with fear of lawsuits. I'm still a nym, they've been indicted by the feds. There are other Copyright Trolls who dislike me, so staying a nym is in my best interest.

TAC also points out the general inconsistencies in Twitter's enforcement, noting that other slurs are not policed, and even the slur that caused his account to be shut down (over a year after he used it) did not lead to other accounts facing the same issues.

Incredibly, TAC points out that he appealed the suspension... and Twitter trust and safety rejected the appeal. It was only on the second appeal -- and seven days later -- that Twitter recognized its mistake and restored his account.

Now, some may be quick to blame Twitter for this mess, but it again seems worth pointing out what an impossible situation this is. Platforms like Twitter are under tremendous pressure to moderate out "bad" content. But people have very little understanding of two important things: (1) the scale at which these platforms operate, and (2) how difficult it is to determine what's "bad" -- especially without full context. The only way to handle reports and complaints at scale is to either automate the process, hire a ton of people, or both. And no matter which choice you make, serious mistakes are going to be made. AI is notoriously bad at understanding context. Human reviewers, pushed to churn through a lot of content very quickly, have to make snap judgments -- which also doesn't bode well for understanding context.

So, once again, we should be pretty careful what we ask for when we demand that sites be quicker about shutting down and suspending accounts. You might be surprised who actually has their accounts shut down. That's not to say sites should never suspend accounts, but the rush to pressure companies into doing so represents a fundamental misunderstanding of how such demands will be handled. TAC's week-long forced sabbatical is just a small example of those unintended consequences.


Posted on Techdirt - 20 April 2018 @ 11:55am

FOSTA/SESTA Passed Thanks To Facebook's Vocal Support; New Article Suggests Facebook Is Violating FOSTA/SESTA

from the self-own dept

One of the main reasons FOSTA/SESTA is now law is because of Facebook's vocal support for the bill. Sheryl Sandberg repeatedly spoke out in favor of the bill, misrepresenting what the bill actually did. In our own post-mortem on what happened with FOSTA/SESTA we noted that a big part of the problem was that many people inside Facebook (incredibly) did not appear to understand how CDA 230 works, and thus misunderstood how FOSTA/SESTA would create all sorts of problems. Last month, we noted that there was some evidence to suggest that Facebook itself was violating the law it supported.

However, a new article from Buzzfeed presents even more evidence of just how much liability Facebook may have put on itself in supporting the law. The article is fairly incredible, talking about how Facebook has allowed a group on its site that helps landlords seek out gay sex in exchange for housing -- and the report is chilling in how far it goes. In some cases, it certainly appears to reach the level of sex trafficking, where those desperate for housing basically become sex slaves to their landlords.

Today, in the first instalment of this series, we uncover some of the damage done to these young men – the sexual violence – by landlords, and reveal how they are being enabled by two major internet companies, one of which is Facebook. The world’s largest social media platform, BuzzFeed News can reveal, is hosting explicit posts from landlords promising housing in return for gay sex.

In multiple interviews with the men exchanging sex for rent and groups trying to deal with the crisis, BuzzFeed News also uncovered a spectrum of experiences that goes far beyond what has so far been documented, with social media, hook-up apps, and chemsex parties facilitating everything.

At best, impoverished young men are seeking refuge in places where they are at risk of sexual exploitation. At worst, teenagers are being kept in domestic prisons where all personal boundaries are breached, where their lives are in danger.

I've seen multiple people point out -- accurately -- that the article's focus on Facebook here is a little silly. The real focus should be on the "landlords" who are seeking out and taking advantage of desperate young men in need of a place to live. But, given that the focus is on Facebook, it certainly appears that Facebook has the knowledge required to be in violation of FOSTA/SESTA:

Despite the explicit nature of the postings on the group’s site, the administrator told BuzzFeed News that Facebook has not intervened. “We have never had an incident from Facebook,” he said. “If they [members] want to post something that will not fly with Facebook I write them, and tell them what needs to be changed.”

This has not stopped explicit notices being posted.

When approached by BuzzFeed News to respond to issues relating to this group, Facebook initially replied promising that a representative would comment. That response, however, did not materialise, despite several attempts by BuzzFeed News, over several days, to invite Facebook to do so. A week after first contacting the social media company, the group remains on its site.

It still seems wrong to blame Facebook for what the horrific landlords are doing here, but, hey, FOSTA/SESTA is now the law, and it's the law thanks in large part to Facebook's strong support for it. So, given all of this, will Facebook now face legal action, either from the victims of this group or from law enforcement?


Posted on Techdirt - 20 April 2018 @ 9:37am

Sex Workers Set Up Their Own Social Network In Response To FOSTA/SESTA; And Now It's Been Shut Down Due To FOSTA/SESTA

from the censorship-at-work dept

Just a few weeks ago we wrote about how a group of sex workers, in response to the passing of FOSTA/SESTA, had set up their own social network, called Switter, which was a Mastodon instance. As we noted in our post, doing so was unlikely to solve any of the problems of FOSTA/SESTA, because it's perhaps even more likely that Switter itself would become a target of FOSTA/SESTA (remember, with FOSTA, the targeting goes beyond "sex trafficking" to all prostitution).

And, indeed, it appears I was not the only one to think so. The organization that created Switter, Assembly Four, put up a note saying that Cloudflare had shut down Switter claiming the site was in violation of its terms of service.

Cloudflare has been made aware that your site is in violation of our published Terms of Service. Pursuant to our published policy, Cloudflare will terminate service to your website.

Cloudflare will terminate your service for switter{.}at by disabling our authoritative DNS.

Assembly Four asked Cloudflare to clarify just what term it had violated, and the company has now come out and noted that it reluctantly pulled the plug on Switter out of a fear that the site would create criminal liability for Cloudflare under FOSTA/SESTA. Cloudflare was among the companies that lobbied against the bill, and it disagrees with the way the bill was drafted -- but given the nature of the law, the company feels compelled to take this action:

“[Terminating service to Switter] is related to our attempts to understand FOSTA, which is a very bad law and a very dangerous precedent,” he told me in a phone conversation. “We have been traditionally very open about what we do and our roles as an internet infrastructure company, and the steps we take to both comply with the law and our legal obligations—but also provide security and protection, let the internet flourish and support our goals of building a better internet.”

Remember, this was a site for sex workers to communicate with each other. It was purely a platform for speech. And it's being shut down because of fears from the vague and poorly drafted FOSTA/SESTA bill. In other words, yet more confirmation that just as free speech experts predicted, FOSTA/SESTA would lead to outright suppression of speech.

I've seen some complaints on Twitter that Cloudflare should have stood up for Switter and not done this. I don't think that's reasonable. The penalties under FOSTA/SESTA are not just fines. It's a criminal statute. It's one thing to take a stand when you're facing monetary damages or something of that nature. It's something altogether different when you're asking a company to stand up to criminal charges based on a law that is incredibly vague and broad, and for which there is no caselaw. Yes, it would be nice to have some companies push back and potentially help to invalidate the law as unconstitutional, but you can't demand that of every company.

I am curious, though, how supporters of FOSTA/SESTA react to this. Do they not care that sex workers want to be able to communicate? Do they not care that social networks are being shut down over this? Do they not care about speech being suppressed?


Posted on Techdirt - 20 April 2018 @ 3:23am

Bad Decisions: Google Screws Over Tools Evading Internet Censorship Regimes

from the who's-fronting-now? dept

Just as places like Russia are getting more aggressive with companies like Google and Amazon in seeking to stop online communications they can't monitor, Google made a move that really fucked over a ton of people who rely on anti-censorship tools. For years, various anti-censorship tools from Tor to GreatFire to Signal have made use of "domain fronting." That's a process by which services could get around censorship by effectively appearing to send traffic via large companies' sites, such as Google's. The link above describes the process as follows:

Domain fronting works at the application layer, using HTTPS, to communicate with a forbidden host while appearing to communicate with some other host, permitted by the censor. The key idea is the use of different domain names at different layers of communication. One domain appears on the “outside” of an HTTPS request—in the DNS request and TLS Server Name Indication—while another domain appears on the “inside”—in the HTTP Host header, invisible to the censor under HTTPS encryption. A censor, unable to distinguish fronted and nonfronted traffic to a domain, must choose between allowing circumvention traffic and blocking the domain entirely, which results in expensive collateral damage. Domain fronting is easy to deploy and use and does not require special cooperation by network intermediaries. We identify a number of hard-to-block web services, such as content delivery networks, that support domain-fronted connections and are useful for censorship circumvention. Domain fronting, in various forms, is now a circumvention workhorse.
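The key idea in the passage above is that two different hostnames live at two different layers of the same request. As a rough sketch (in Python, using entirely hypothetical placeholder domains -- this builds the request shape, it doesn't do any real networking), here is how the two names split:

```python
# Minimal sketch of the two-layer naming trick behind domain fronting.
# Both domains are hypothetical placeholders:
#   "allowed-cdn.example"     -- the front, visible to the censor
#   "blocked-service.example" -- the real destination, hidden by TLS

FRONT_DOMAIN = "allowed-cdn.example"       # appears in DNS and the TLS SNI field
HIDDEN_DOMAIN = "blocked-service.example"  # appears only in the encrypted Host header

def build_fronted_request(path="/"):
    """Return (sni_hostname, request_bytes) for a domain-fronted request.

    A network observer sees only sni_hostname; the Host header naming
    the real service travels inside the TLS-encrypted payload.
    """
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {HIDDEN_DOMAIN}\r\n"   # inner domain -- invisible to the censor
        f"Connection: close\r\n"
        f"\r\n"
    ).encode("ascii")
    return FRONT_DOMAIN, request

sni, req = build_fronted_request("/api/messages")
print(sni)                            # the only hostname the censor can observe
print(req.decode().splitlines()[1])   # Host: blocked-service.example
```

Note that nothing in the bytes the censor can read mentions the hidden domain; to block it, the censor would have to block the front domain entirely, which is exactly the collateral-damage tradeoff the quoted description lays out.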

In short, because most countries are reluctant to block all of Google, the ability to use Google for domain fronting was incredibly useful in getting around censorship. And now it's gone. Google claims that it never officially supported it, that this was a result of a planned update, and it has no intention of bringing it back:

“Domain fronting has never been a supported feature at Google,” a company representative said, “but until recently it worked because of a quirk of our software stack. We’re constantly evolving our network, and as part of a planned software update, domain fronting no longer works. We don’t have any plans to offer it as a feature.”

As Ars Technica notes, companies like Google may be concerned that it could lead to larger blocks that could harm customers. But, as Access Now points out, there are larger issues at stake, concerning individuals who are put at risk through such censorship:

“As a repository and organizer of the world’s information, Google sees the power of access to knowledge. Likewise, the company understands the many ingenious ways that people evade censors by piggybacking on its networks and services. There’s no ignorance excuse here: Google knows this block will levy immediate, adverse effects on human rights defenders, journalists, and others struggling to reach the open internet,” said Peter Micek, General Counsel at Access Now. “To issue this decision with a shrug of the shoulders, disclaiming responsibility, damages the company’s reputation and further fragments trust online broadly, for the foreseeable future.”

“Google has long claimed to support internet freedom around the world, and in many ways the company has been true to its beliefs. Allowing domain fronting has meant that potentially millions of people have been able to experience a freer internet and enjoy their human rights. We urge Google to remember its commitment to human rights and internet freedom and allow domain fronting to continue,” added Nathan White, Senior Legislative Manager at Access Now.

Google doesn't need to support domain fronting, and there are reasonable business reasons for not doing so. But... there are also strong human rights reasons why the company should reconsider. In the past, Google has taken principled stands on human rights. This is another time that it should seriously consider doing so.


Posted on Techdirt - 19 April 2018 @ 12:09pm

Of Course The RIAA Would Find A Way To Screw Over The Public In 'Modernizing' Copyright

from the modernization-for-us,-but-not-for-you dept

I haven't had a chance to write much about the latest attempt to update copyright law in the US, under the title of the "Music Modernization Act," but in part that was because Congress did something amazing: it came up with a decent solution for modernizing some outdated aspects of copyright law, one that almost everyone agreed contained pretty decent ideas for improvement. The crux of the bill was making music licensing easier and much clearer, which is very much needed, given what a complete shit show music licensing is today.

There was a chance to have this actually create a nice solution that would help artists, help online music services and generally make more works available to the public. It was a good thing. But... leave it to the RIAA to fuck up a good thing. You see, with there being pretty much universal support for the Music Modernization Act, the RIAA stepped in and pushed for it to be combined with a different copyright reform, known as the "CLASSICS Act."

What is the CLASSICS Act? Well, it's actually based on a good idea -- fixing the mess that is pre-1972 sound recordings. We've written about this for years, and without getting too deep into the weeds, the basic thing is that prior to February of 1972, sound recordings were not covered by federal copyright. Compositions were still protected, but not the actual recording. To deal with that, various states set up their own state-based copyright laws for those works -- sometimes in statute, sometimes through common law. But, as part of the "transition" of bringing sound recordings into federal copyright, Congress also (ridiculously) said that sound recordings prior to 1972 would remain under whatever ridiculous state copyright laws existed until 2047. And thanks to Sonny Bono, that got pushed back to 2067. As Public Knowledge points out, that's created a ridiculous situation, keeping important works out of the public domain for nearly two centuries:

State copyrights are, for all intents and purposes, indefinite. Back in 1972 -- when Congress first federalized (new) sound recordings -- Congress sought to “fix” this problem by declaring that all state copyrights in pre-’72 sound recordings would expire on February 15, 2047. They picked 2047 so that recordings made immediately prior to the new law’s passage (i.e. the last sound recordings protected only by state copyright) would be kept out of the public domain for a full 75 years, the same as their newer, federally-protected counterparts.

However, because that 2047 date applied indiscriminately to all sound recordings made before 1972, recordings ended up with mind-boggling terms of potential state protection. Thomas Edison’s original sound recordings, made in 1877, wouldn’t be guaranteed to enter the public domain until 170 years after it was first created. Congress doubled down on this decision in 1998, pushing the date back another 20 years, to 2067. That Edison recording now is kept out of the public domain for 190 years -- enough to provide theoretical royalties to eight generations of the original artist’s descendants. As a result, with a few exceptions (mostly when artists have affirmatively committed their works to the public domain) there are no sound recordings in the public domain in the United States, period.

As we've noted for years, the proper way to fix this is just to put pre-1972 sound recordings under federal copyright law, and give them the same public domain date they would have received if they had been covered by federal copyright law all along. This is not, by any means, a perfect solution. It has some additional drawbacks, but at a high level, it puts all sound recordings on a level playing field and makes sure that there isn't confusion over different treatments for a song recorded in March 1972 from one recorded in March of 1971.

But that's not what the CLASSICS Act does. While it claims to put those works on the same footing as federal copyright law, it only does that part way. It sets things up so that streaming service providers will now have to pay performance royalties on the pre-1972 works (after a bunch of court rulings -- but not all -- have suggested they don't need to pay such fees). Again, that's fine if it puts all the works on the same level playing field. But the CLASSICS Act doesn't quite do that. It just adds the "pay the RIAA" part, and leaves out the "oh yeah and let these works go into the public domain on the same schedule as all other works" part. In other words, under the CLASSICS Act, these royalties will have to be paid way beyond when those works should go into the public domain.

Public Knowledge further points out that the CLASSICS Act also ignores termination rights, which would benefit artists (but hurt the labels) and could also cause serious harm to archives and libraries:

Most of the protections that libraries, archives, and other nonprofits rely upon apply specifically to reproduction and distribution of works, but not to their public performance. CLASSICS/MMA 2 creates a federal right that covers performance, but not reproduction or distribution -- those parts of the copyright regime that serve as the lifeblood of these institutions.

EFF highlights some more issues with the CLASSICS Act, including how it would basically lock in existing providers like Pandora, Spotify and Sirius XM, but cause problems for any upstart:

Creating new barriers to the use of old creative works is not what copyright is for. Copyright is a bargain: authors and artists get limited, exclusive rights over their works as an incentive to create. In return, the public is enriched by new art and authorship and can use works in ways and times that fall outside the rightsholder’s zone of exclusivity. Creating new rights in recordings that have been around for 46 years or more doesn’t create any new incentives. It simply creates a new subsidy for rightsholders, most of whom are not the recording artists. The CLASSICS Act gives nothing back to the public. It doesn’t increase access to pre-1972 recordings, which are already played regularly on Internet radio stations. And it doesn’t let the public use these recordings without permission any sooner: state copyrights will continue until 2067, when federal law takes over fully.

The CLASSICS Act will put today’s digital music giants like Pandora and Sirius XM in a privileged position. Many of them already pay royalties for some pre-72 recordings as part of private agreements with record labels, on terms that simply won’t be available to smaller Web streamers like college and independent radio stations.

Unfortunately, this Frankenstein of a bill, combining both the good stuff and the bad, sailed through the Judiciary Committee this week.

You will, undoubtedly, see stories celebrating this bill moving forward. And many of them will make accurate statements about how parts of this bill are really good. But parts of it are really bad and damaging to the public domain. The proper response would be to fix the CLASSICS Act such that it actually modernizes pre-1972 works by putting them under federal copyright law, rather than this half-assed way that only adds in the licensing requirements, without the rest of what copyright law is supposed to bring us.

21 Comments | Leave a Comment..

Posted on Techdirt - 19 April 2018 @ 9:40am

France Testing Out Special Encrypted Messenger For Gov't Officials As It Still Seeks To Backdoor Everyone Else's Encryption

from the roll-yer-own dept

The French government has been pushing for a stupid "backdoors" policy in encryption for quite some time. A couple years ago, following various terrorist attacks, there was talk of requiring backdoors to encrypted communications, and there was even a bill proposed that would jail execs who refused to decrypt data. Current President Emmanuel Macron has come out in favor of backdoors as well, even as he's a heavy user of Telegram (which isn't considered particularly secure encryption in the first place).

But now, the French government is apparently moving forward with its own, homegrown, encrypted messaging system, out of a fear that other -- non-French -- encrypted messaging apps will be forced into providing backdoors to their own systems:

The French government is building its own encrypted messenger service to ease fears that foreign entities could spy on private conversations between top officials, the digital ministry said on Monday.

None of the world’s major encrypted messaging apps, including Facebook’s WhatsApp and Telegram - a favorite of President Emmanuel Macron - are based in France, raising the risk of data breaches at servers outside the country.

There are a number of silly things here. First off, the fact that they're doing this should make it clear why it's been so stupid to have the government itself calling for backdoors. Clearly, the French government understands the risks involved, or it wouldn't be doing this in the first place. The message it seems to be sending is that keeping messages and communications secure is important... but only for government officials. For the peasants? Let them eat insecure messages, I guess.

Second, there should be questions about how well this will be implemented. The report does note that they're using "free-to-use code found on the Internet," which (hopefully?) means they're basing it on Open Whisper Systems' encrypted messaging code, which is freely available and is generally considered the gold standard (Update: actually it's based on Riot/Matrix and apparently the plan is to open source it -- which is good). However, doing encrypted messaging well is... difficult. It's the kind of thing that lots of people -- even experts -- get wrong. Rolling your own can often get messy, and you have to bet that a government rolling its own encryption for government officials to use is going to be a clear target for nation-state level hackers to try to break in. That's not to say it can't be done, but there are a lot of tradeoffs here, and I'm not sure that the best encryption is going to come from a government employee.
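To give a flavor of what's involved, here's a minimal sketch of the kinds of primitives such a messenger is built from, written in Python with the pyca/cryptography library. This is purely an illustration under my own assumptions, not the French government's design or the actual Signal/Matrix protocols: an X25519 key exchange, HKDF key derivation, and AES-GCM encryption of a single message.

```python
# Hypothetical toy example: the building blocks of an encrypted messenger.
# NOT a real protocol -- it omits authentication of public keys, forward
# secrecy (ratcheting), replay protection, and more. Those omissions are
# exactly the details that even experts get wrong.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair and exchanges public keys.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides derive the same shared secret from the key exchange...
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# ...and stretch it into a 32-byte symmetric key with HKDF.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo-messenger").derive(alice_shared)

# Encrypt one message with AES-GCM; a fresh nonce per message is essential.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"bonjour", None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"bonjour"
```

Even in this stripped-down form, there are multiple ways to get it subtly wrong (reusing a nonce, skipping key authentication, botching key derivation), which is why "free-to-use code found on the Internet" is a better starting point than writing the primitives from scratch.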

Also, the report suggests that this technology "could be eventually made available to all citizens," which would certainly be interesting, but would seem to contradict all of those reports and statements demanding backdoored encryption. Given how often the French government (and its president) have asked for backdoors, would any French citizen ever feel particularly secure using an "encrypted" messaging system offered up by that same French government?

16 Comments | Leave a Comment..

Posted on Techdirt - 18 April 2018 @ 12:02pm

Reminder: Fill Out Your Working Futures Survey And Help Define The Future Of Work

from the future-of-work dept

As a reminder, our Working Futures scenario planning game around the future of work question is in full swing. If you haven't yet filled out our survey, please do so soon. There have been some great, thoughtful and insightful ideas provided so far, and it's already shaped some of how we'll be proceeding. We've been hard at work designing the specifics of how the "game" part of this will work, with our first workshop to be held next week. While that event is invite only, we still have a few open seats -- so if you'll be in San Francisco next week and think you have something you can add to this discussion, feel free to request an invite via the website. The event itself will be an interactive, guided game for developing a bunch of scenarios. Once we've had a chance to go through the results, we'll begin sharing some of the details -- but the overall results will only get better if you participate as well -- so go fill out the survey.

10 Comments | Leave a Comment..

Posted on Techdirt - 18 April 2018 @ 9:39am

Stupid Copyright: MLB Shuts Down Twitter Account Of Guy Who Shared Cool MLB Gifs

from the you're-not-helping dept

Another day, another story of copyright gone stupid. This time it involves Major League Baseball, which is no stranger to stupid copyright arguments. Going back fifteen years, we wrote about Major League Baseball claiming that other websites couldn't even describe professional baseball games. There was a legal fight over this and MLB lost. A decade ago, MLB was shutting down fan pages for doing crazy things like "using a logo" of their favorite sports team. And, of course, like all major professional sports leagues, MLB has long engaged in copyfraud by claiming that "any account of this game, without the express written consent of Major League Baseball is prohibited", which is just false. MLB has also made up ridiculous rules about how much reporters can post online at times, restricting things that they have no right to actually restrict.

The latest seems particularly stupid. It follows some sort of silly spat in which a guy named Kevin Clancy at Barstool Sports (the same brainiacs who wanted to sue the NFL for having sorta, not really, similar merchandise) got pissed off at a popular Twitter account called @PitchingNinja, run by a guy named Rob Friedman, who would tweet out GIFs and videos of interesting pitches from MLB games. Apparently, the dudebro Clancy pointed out that Friedman was violating the made-up rules MLB has on how much someone is allowed to share on social media, leading a ton of Clancy's fans to "report" Friedman. Twitter shut down Friedman's account -- leading said dudebro, Clancy, to celebrate.

In a podcast interview with that very same Barstool Sports -- the outlet whose fans got his account shut down -- Friedman notes that "there's such a thing as fair use." Indeed, his use of images and videos appears to be fairly obviously fair use. Since we can't see his account while it's suspended, we'll go off of the Yahoo Sports description of the @PitchingNinja account:

Nearly every Rob Friedman tweet arrives offering four things: a baseball player’s name, a pitch he has thrown, an adjective to describe that pitch and a short video clip to illustrate it. Changeups are “ridiculous,” and fastballs are “absurd,” and sliders are “nasty,” and sometimes they’re “disgusting” and “filthy” and “obscene” and every other sort of visceral descriptor, too. Friedman is best known as @PitchingNinja, and his nearly 50,000 followers relish his ability to curate baseball’s deep cuts – the sort of physics-bending pitches average fans may not notice but ones in which pitching nerds luxuriate.

So, going through a quick four factor test: Friedman is adding commentary, using a tiny amount of any given game, not doing this for any commercial advantage and, if anything, increasing the market for MLB's product. It seems like a pretty clear cut fair use situation. MLB has told Yahoo that it expects to come to some sort of agreement to let Friedman back on Twitter:

League sources told Yahoo Sports that they expect to “quickly and easily” reach a resolution with Friedman that would allow him to continue posting pitching GIFs. In a letter to the league official who filed the DMCA complaint, Friedman, a lawyer by trade, outlined his argument on how what he does benefits the league.

But, of course, it's bullshit that they should even need to do this in the first place. The whole point of fair use is that you don't need permission, and you don't need to reach an agreement. And yet, according to Yahoo, MLB seems to think it needs to come to an agreement with Friedman over what is fair use:

MLB plans to contact Friedman in the coming days, if not sooner, at which point they are likely to agree on what constitutes fair use.

But, they don't need to agree. The law says what fair use is, and MLB doesn't get to change that to suit their own whims.

Friedman also told Yahoo the following:

“I also understand that MLB has every right to protect its product,” he wrote in the email, which he shared with Yahoo Sports. “I’m most certainly not trying to deprive MLB of any value, instead I’m trying to create value by helping pitchers have a sense of community, learn, and appreciate the game. Rather than debate the legal matter, I am more than happy to give MLB all of my gifs for free or work out some other content deal that just allows me to use MLB content, as permitted, for fair use, to help pitchers, coaches, and fans understand the game. I would be happy to donate any content for free and execute a copyright license ensuring that MLB owns any gifs I create.”

That's... weird. MLB already owns the copyright to the videos. Fair use is what lets Friedman make use of them without needing a license. So I'm not sure what he's talking about in licensing them back to MLB. That doesn't really make much sense. But, you still see the underlying point that he's making: he's building more interest in the game, and he's not trying to claim any ownership or make any money from what he's doing; it's just for the love of sharing the game and educating people. Which, you know, is the kind of thing that fair use is explicitly designed to enable.

And, of course, no one should let Twitter off the hook here for suspending Friedman's account. Twitter could have (and should have) rejected the DMCA notices and pointed out that the @PitchingNinja account was engaging in fair use. Instead, it shut down the account, and once again showed how copyright is regularly abused for censorship, rather than any legitimate purpose under the Copyright Act.

18 Comments | Leave a Comment..

Posted on Free Speech - 17 April 2018 @ 12:04pm

How Government Pressure Has Turned Transparency Reports From Free Speech Celebrations To Censorship Celebrations

from the this-is-not-good dept

For many years now, various internet companies have released Transparency Reports. The practice was started by Google years back (oddly, Google itself fails me in finding its original transparency report). Soon many other internet companies followed suit, and, while it took them a while, the telcos eventually joined in as well. Google's own Transparency Report site lists out a bunch of other companies that now issue such reports:

We've celebrated many of these transparency reports over the years, as they often demonstrate the excesses of attempts to stifle and censor speech or violate users' privacy, and often create incentives for these organizations to push back against those demands. Yet, in an interesting article over at Politico, a former Google policy manager warns that the purpose of these reports is being flipped on its head, and that they're now being used to show how much these platforms are willing to censor:

Fast forward a decade and democracies are now agonizing over fake news and terrorist propaganda. Earlier this month, the European Commission published a new recommendation demanding that internet companies remove extremist and other objectionable content flagged to them in less than an hour — or face legislation forcing them to do so. The Commission also endorsed transparency reports as a way to demonstrate how they are complying with the law.

Indeed, Google and other big tech companies still publish transparency reports, but they now seem to serve a different purpose: to convince authorities in Europe and elsewhere that the internet giant is serious about cracking down on illegal content. The more takedowns it can show, the better.

If true, this is a pretty horrific result of something that should be a good thing: more transparency, more information sharing and more incentives to make sure that bogus attempts to stifle speech and invade people's privacy are not enabled.

Part of the issue, of course, is the fact that governments have been increasingly putting pressure on internet platforms to take down speech, and blaming internet platforms for election results or policies they dislike. And the companies then feel the need to show the governments that they do take these "issues" seriously, by pointing to the content they do take down. So, rather than alerting the public to all the stuff they don't take down, the platforms are signalling to governments (and some in the public too, frankly) that they frequently take down content. And, unfortunately, that's backfiring, as it's making politicians (and some individuals) claim that this just proves the platforms aren't censoring enough.

The pace of private sector censorship is astounding — and it’s growing exponentially.

The article talks about how this is leading to censorship of important and useful content, such as the case where an exploration of the dangers of Holocaust revisionism got taken down because YouTube feared that a look into it might actually violate European laws against Holocaust revisionism. And, of course, such censorship machines are regularly abused by authoritarian governments:

Turkey demands that internet companies hire locals whose main task is to take calls from the government and then take down content. Russia reportedly is threatening to ban YouTube unless it takes down opposition videos. China’s Great Firewall already blocks almost all Western sites, and much domestic content.

Similarly, a recent report on Facebook's censorship of reports of ethnic cleansing in Burma is incredibly disturbing:

Rohingya activists—in Burma and in Western countries—tell The Daily Beast that Facebook has been removing their posts documenting the ethnic cleansing of Rohingya people in Burma (also known as Myanmar). They said their accounts are frequently suspended or taken down.

That article has many examples of the kind of content that Facebook is pulling down and notes that in Burma, people rely on Facebook much more than in some other countries:

Facebook is an essential platform in Burma; since the country’s infrastructure is underdeveloped, people rely on it the way Westerners rely on email. Experts often say that in Burma, Facebook is the internet—so having your account disabled can be devastating.

You can argue that there should be other systems for them to use, but the reality of the situation right now is they use Facebook, and Facebook is deleting reports of ethnic cleansing.

Having democratic governments turn around and enable more and more of this in the name of stopping "bad" speech is acting to support these kinds of crackdowns.

Indeed, as Europe is pushing for more and more use of platforms to censor, it's important that someone gets them to understand how these plans almost inevitably backfire. Daphne Keller at Stanford recently submitted a comment to the EU about its plan, noting just how badly demands for censorship of "illegal content" can turn around and do serious harm.

Errors in platforms’ CVE content removal and police reporting will foreseeably, systematically, and unfairly burden a particular group of Internet users: those speaking Arabic, discussing Middle Eastern politics, or talking about Islam. State-mandated monitoring will, in this way, exacerbate existing inequities in notice and takedown operations. Stories of discriminatory removal impact are already all too common. In 2017, over 70 social justice organizations wrote to Facebook identifying a pattern of disparate enforcement, saying that the platform applies its rules unfairly to remove more posts from minority speakers. This pattern will likely grow worse in the face of pressures such as those proposed in the Recommendation.

There are longer term implications of all of this, and plenty of reasons why we should be thinking about structuring the internet in better ways to protect against this form of censorship. But the short term reality remains, and people should be wary of calling for more platform-based censorship over "bad" content without recognizing the inevitable ways in which such policies are abused or misused to target the most vulnerable.

12 Comments | Leave a Comment..

Posted on Techdirt - 16 April 2018 @ 10:43am

After Removing US From Negotiating Process, Now Trump Suddenly Wants US Back In TPP

from the say-what-now? dept

The Trans Pacific Partnership (TPP) Agreement is deeply unpopular with Americans for a variety of reasons (some of which we'll discuss below). Because of its unpopularity, both Donald Trump and Hillary Clinton denounced the agreement during their campaigns for the presidency. Trump's denunciation seemed a lot more genuine -- he's argued against free trade and in favor of protectionism for quite a long time. Clinton's denunciation was highly suspect, as she had long been a supporter of the TPP, and many people expected that, if elected, she'd flip flop back to supporting the agreement. Of course, she didn't get elected... but now it's apparently Trump who has flip flopped into supporting the TPP.

President Trump, in a sharp reversal, told a gathering of farm-state lawmakers and governors on Thursday morning that the United States was looking into rejoining a multicountry trade agreement known as the Trans-Pacific Partnership, a deal he pulled out of days after assuming the presidency.

Mr. Trump’s reconsideration of an agreement he once denounced as a “rape of our country” caught even his closest advisers by surprise and came as his administration faces stiff pushback from Republican lawmakers, farmers and other businesses concerned that the president’s threat of tariffs and other trade barriers will hurt them economically.

We spent years explaining the many, many problems associated with TPP. While we tend to be supporters of free trade, the problem with the TPP was that it wasn't actually a free trade agreement. Yes, a few parts of it included lowering tariffs and opening borders to trade (and those parts were, for the most part, pretty good), but the bigger part of the agreement was that it was an "investment" agreement, rather than a trade agreement. And thus it included two parts that were really problematic.

First was an intellectual property section which was the exact opposite of "free trade." Rather, it required higher barriers to trade, creating mercantilist barriers to information and ideas by locking up "intellectual property" under ever more draconian terms. The second part was what we've referred to as the "corporate sovereignty" section, officially known as the "Investor State Dispute Settlement" (ISDS) provisions. This is a system by which companies can effectively take governments to a private tribunal, which will determine whether their regulations cut into the expected profits of the company. The original idea behind such corporate sovereignty provisions was to deal with situations in which, say, a big company invested in an economically developing country, and that country's leadership suddenly decided to seize the factory or whatever. But, as we've seen over the years, ISDS/corporate sovereignty has mainly been used as a tool for corruption.

Given all of that, we were happy that one of President Trump's first moves in office was to drop out of the TPP, even as we noted that he was clearly doing so for the wrong reasons (his stated reasons being wishing for more protectionism, when it was the lowering of trade barriers that we found to be the only good parts of the TPP).

With the US out of the TPP, the remaining countries picked up the ball and ran with it -- under the leadership of Canada, which pushed to remove the intellectual property section. An agreement was reached earlier this year without the awful copyright and patent provisions, but with corporate sovereignty still in there. It's ironic that Canada took over the leadership role, since it was actually a late entrant into the TPP after the US bent over backwards to keep Canada out of the agreement, partly in the belief that it would push back on things like the draconian intellectual property section.

So... given all of that it seems doubly ironic that Trump now apparently says he wants back in. His tweet on the subject is, as per usual, somewhat nonsensical.

Claiming he'd only rejoin the TPP if the deal is better than what Obama negotiated is a reasonable enough claim to make, but if that was the case... why did Trump completely drop out of the negotiations and let the other countries conclude all of the negotiations without any US influence at all? Reopening such negotiations at this point seems like a total non-starter, and even if it happened, the US would be at a distinct disadvantage, given that everyone else has already agreed to nearly everything.

And, of course, there's little to suggest that the attempt to rejoin now is to get rid of things like corporate sovereignty, or to do the actual good stuff around lowering trade barriers (this is coming just weeks after Trump announced plans to put in place tariffs on certain Chinese products) and soon after the dubious claim that winning trade wars is "easy."

As far as I can tell, this appears to be Trump trying to make a group of people he was talking to happy, and not really understanding the details:

As he often does, the president started to change gears after hearing complaints from important constituents — in this case, Republican lawmakers who said farmers and other businesses in their states would suffer from his trade approach since they send many of their products abroad.

That, of course, seems like an odd way to lead. Or to negotiate.

Chances are nothing significant comes of this -- certainly not a wholesale renegotiation of the TPP. Instead, we've just got yet another political mess.

32 Comments | Leave a Comment..

Posted on Techdirt - 13 April 2018 @ 10:41am

Ted Cruz Demands A Return Of The Fairness Doctrine, Which He Has Mocked In The Past, Due To Misunderstanding CDA 230

from the grandstanding-idiocy dept

Remember the Fairness Doctrine? It was an incredibly silly policy of the FCC from 1949 to 1987 requiring some form of "equal time" to "the other side" of controversial matters of public interest. It's a dumb idea because most issues have a lot more than two sides, and simply pitting two arguments against one another tends to do little to elucidate actual truth -- but does tend to get people to dig in more. However, despite the fact that the fairness doctrine was killed more than 30 years ago, Republicans* regularly claim that it's about to be brought back.

* Our general policy is not to focus on political parties, unless it's a necessary part of the story, and in this case it is. If you look at people freaking out about the supposed return of the fairness doctrine (which is not returning) it is always coming from Republicans, stirring up their base and claiming that Democrats are trying to bring back the fairness doctrine to silence the Rush Limbaughs and Sean Hannitys of the world.

But that's why it's so bizarre that Ted Cruz has taken to the pages of Fox News... to incorrectly claim that the fairness doctrine applies to the internet based on his own tortured (i.e. dead wrong) reading of Section 230 of the Communications Decency Act. We already discussed how wrong Cruz was about CDA 230 in his questions to Mark Zuckerberg (while simultaneously noting how ridiculous Zuck's responses were).

In his Fox News op-ed, Cruz argues that if a platform is "non-neutral" it somehow loses CDA 230 protections:

Section 230 of the Communications Decency Act (CDA) states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

This is a good provision. It means that, for example, if you run a blogging platform and someone posts a terrorist threat in the comments section, you’re not treated as the person making the threat. Without Section 230, many social media networks could be functionally unable to operate.

In order to be protected by Section 230, companies like Facebook should be “neutral public forums.” On the flip side, they should be considered to be a “publisher or speaker” of user content if they pick and choose what gets published or spoken.

This is Cruz only reading Section (c)(1) of CDA 230, and totally ignoring the part right after it that says:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

There's plenty of case law that has made it clear that moderating content on your platform doesn't make you liable under CDA 230. The very point of this section was to encourage exactly this kind of moderation. Indeed, this part of the CDA was added directly in response to the infamous Stratton Oakmont v. Prodigy case, where Prodigy was found liable for certain posts in part because it moderated other messages.

Now, you could claim that Cruz is not misreading the law -- but rather that he's advocating a return to the days of that Stratton Oakmont ruling being law. After all, he says "Facebook should be 'neutral public forums.'" But that's both an impossible standard (what the hell is "neutral" in this context anyway?) and basically a call for a return to the fairness doctrine.

Republicans, who have spent years freaking out about the fairness doctrine, should be really pissed off at Cruz for basically demanding not just a return of the fairness doctrine, but demanding it for all online platforms and setting it at an impossible standard.

And, of course, this is the same Cruz who has railed against the fairness doctrine itself in the past.

"You know, the Obama FCC has invoked the Fairness Doctrine a number of times with sort of wistful glances to the past. Nostalgia," he said. "You know if I had suggested years ago that the Obama administration would send government observers into the newsrooms of major media organizations, that claim would have been ridiculed. And yet that is exactly what the FCC did."

Amusingly, this was right after he railed against the Obama FCC for pushing for net neutrality.

So... to sum up Ted Cruz's views on the internet: net neutrality is evil and an attack on free speech, but platform "neutrality" is necessary. How does that work? Oh, and the fairness doctrine is censorship, but Facebook needs to engage in a form of the fairness doctrine or face stifling civil liability.

It's almost as if Ted Cruz has no idea what the fuck he's talking about concerning internet regulations, free speech, neutrality and fairness -- but does know that if he hits on certain buzzwords, he's sure to fire up his base.

126 Comments | Leave a Comment..

Posted on Techdirt - 13 April 2018 @ 9:31am

Amended Complaint Filed Against Backpage... Now With SESTA/FOSTA

from the because-of-course dept

What a weird week for everyone promoting FOSTA/SESTA as being necessary to take down Backpage.com. After all, last Friday, before FOSTA/SESTA was signed into law, the FBI seized Backpage and all its servers, and indicted a bunch of execs there (and arrested a few of them). The backers of FOSTA/SESTA even tried to take credit for the shutting down of the site, despite the fact that the law they "wrote" wasn't actually the law yet. Separately, as we pointed out, after the bill was approved by Congress, but before it was signed into law, two separate courts found that Backpage was not protected by CDA 230 in civil suits brought by victims of sex trafficking.

On Wednesday, President Trump finally signed the bill, despite all of the reasons we were told it was necessary having already been proven untrue (and many of the concerns raised by free speech advocates having already been proven true). And, on Thursday, in the civil case in Massachusetts (the first to rule that Backpage wasn't protected by CDA 230 for ads where it helped create illegal content), an amended complaint was filed, this time with FOSTA/SESTA included. Normally, this wouldn't make any sense, but thanks to the unconstitutional retroactive clause in FOSTA/SESTA it could possibly apply (assuming the judge ignores the constitutional problems).

From the amended complaint:

In March 2018, Congress passed the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (“FOSTA”), and the President signed it into law on April 11, 2018. Pub. L. No. 115-___, ___ Stat. ___ (2018) (codified at, inter alia, 47 U.S.C. § 230). FOSTA specifically states, among its legislative findings, that Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, “was never intended to provide legal protection to websites that . . . facilitate traffickers in advertising the sale of unlawful sex with sex trafficking victims,” and that “websites that promote and facilitate prostitution have been reckless in allowing the sale of sex trafficking victims and have done nothing to prevent the trafficking of children and victims of force, fraud, and coercion.” FOSTA § 2(1)-(2). Accordingly, Congress passed FOSTA to “clarify that section 230 of [the CDA] does not prohibit the enforcement against providers and users of interactive computer services of Federal and State criminal and civil law relating to sexual exploitation or sex trafficking.” ... FOSTA amended, inter alia, Section 230(e) of the CDA to provide that “[n]othing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit . . . any claim in a civil action brought under section 1595 of title 18, United States Code, if the conduct underlying the claim constitutes a violation of section 1591 of that title.” Id. § 4(a). FOSTA also provides that its amendment to Section 230(e) “shall apply regardless of whether the conduct alleged occurred, or is alleged to have occurred, before, on, or after [FOSTA’s] date of enactment.” ... The effect of FOSTA is to ensure that website operators like Backpage can be held civilly liable to their victims for their violations of federal criminal law.

And thus, the retroactive clause is already in play. Assuming Backpage continues to fight this, you have to imagine it will note the serious constitutional problems with retroactive clauses like the one in FOSTA/SESTA.

But that, of course, depends on Backpage being around to fight this, and the company is gone thanks to the DOJ action. Oh, and apparently the company and its CEO have accepted plea deals to plead guilty to certain charges (though many other execs have pleaded not guilty).

Still, expect to see other civil lawsuits attempt to use the FOSTA/SESTA retroactive clause in the very near future.


Posted on Techdirt - 12 April 2018 @ 12:02pm

Open Letter On Ending Attacks On Security Research

from the it's-too-important dept

The Center for Democracy and Technology has put together an important letter from experts on the importance of security research. This may sound obvious, but increasingly we're seeing attacks on security researchers, where the messenger is blamed for finding and/or disclosing bad security practices or breaches -- and that makes us all less safe by creating chilling effects.

On April 10, 2018, over fifty experts and expert advocates published a statement in support of security research and against efforts to chill or intimidate security researchers. Computer and network security research, white-hat hacking, and vulnerability disclosure are legal, legitimate, and needed now more than ever to understand flaws in the information systems that increasingly pervade our lives.

Security researchers hesitate to report vulnerabilities and weaknesses to companies for fear of facing legal retribution; these chilling effects invite the release of anonymous, public zero-day research instead of coordinated disclosure. The undersigned urge support for security researchers and reporters in their work, and decry those who oppose research and discussion of privacy and security risks. Harming these efforts harms us all.

I'm proud to have signed onto the letter, which you can read here (or embedded below). In it, we cite two legal cases in which a reporter and security researcher were sued for their work disclosing security vulnerabilities. These kinds of lawsuits are a disgrace and need to stop.

The most recent cases include Keeper v. Goodin and River City Media v. Kromtech; in the first case, a reporter was sued for reporting on the details of a vulnerability, and in the second case a security researcher is being sued for investigating a publicly accessible spam server. These lawsuits not only endanger a free and open press but risk a “chilling effect” towards research designed to improve cybersecurity. Security researchers hesitate to report vulnerabilities and weaknesses to companies for fear of facing legal retribution; these chilling effects invite the release of anonymous, public zero-day research instead of coordinated disclosure.

It's kind of sad that this kind of letter is even needed, but these kinds of things are happening way too often.


Posted on Techdirt - 12 April 2018 @ 9:39am

Despite Repeated Evidence That It's Unnecessary And Damaging, Trump Signs SESTA/FOSTA

from the because-of-course dept

This was no surprise, but as everyone expected, yesterday President Trump signed SESTA/FOSTA into law, leading to the usual excitement from the bill's supporters -- despite the fact that events of the past couple weeks have proved them all wrong. The bill's supporters repeatedly insisted that SESTA/FOSTA was necessary to stop one company, Backpage.com, because (they falsely insisted) CDA 230 made the site completely immune. Except, that's clearly not true. In the two weeks since the bill was approved by Congress, two separate courts declared Backpage not protected by CDA 230 and (more importantly) the DOJ seized the whole damn site and indicted most of the company's execs -- all without SESTA/FOSTA.

And, on top of that, many, many sites have already shut down or modified how they do business because of SESTA/FOSTA, proving that the bill is clearly impacting free expression online -- just as tons of civil liberties experts warned. And that's not even touching on the very real concerns of those involved in sex work about how SESTA/FOSTA literally puts their lives in danger -- and how it makes it that much more difficult to actually rescue victims of sex trafficking.

As usual, Professor Eric Goldman has a pretty thorough summary of the situation, and notes that there are still a bunch of open questions -- including the inevitable constitutional challenges to the bill. The retroactive clause (saying it applies to things that happened prior to the bill being signed) is so obviously unconstitutional that even the Justice Department warned that it would doom the bill if not fixed (which Congress dutifully ignored). But, to me, there's a bigger question: whether a First Amendment challenge could knock out SESTA/FOSTA the same way one knocked out most of the original Communications Decency Act 20 years ago (CDA 230 was all that survived of the original CDA).

I am also curious whether or not we will see any reaction from those who promoted and supported SESTA for the past year or so, when the rates of sex trafficking don't decrease, but the ability to rescue such victims does decline. Somehow, I get the feeling they'll have moved on and forgotten all of this. And that's because, for most of them, "stopping sex trafficking" was a convenient excuse for trying to attack the internet.


Posted on Techdirt - 12 April 2018 @ 6:34am

California Bill Could Introduce A Constitutionally Questionable 'Right To Be Forgotten' In The US

from the well-meaning-but-poorly-thought-out dept

As we've pointed out concerning the General Data Protection Regulation (GDPR) in the EU, the thinking behind the regulation is certainly well-meaning and important. Giving end users more control over their own data and increasing privacy controls is, generally speaking, a good idea. However, the problem is in the drafting of the GDPR, which is done in a manner that will lead to widespread censorship. A key part of the problem is that when you think solely in terms of "privacy" or "data protection" you sometimes forget about speech rights. I have no issue with giving more control over actually private information to the individuals whose information is at stake. But the GDPR and other such efforts take a much more expansive view of what information can be controlled, including public information about a person. That's why we've been troubled by the GDPR codifying a "right to be forgotten." We've already seen how the RTBF is leading to censorship, and doing more of that is not a good idea.

But now the idea is spreading. Right here in California, Assemblymember Mark Levine has introduced a local version of the GDPR, called the California Data Protection Authority, which includes two key components: a form of a right to be forgotten and a plan for regulations "to prohibit edge provider Internet Web sites from conducting potentially harmful experiments on nonconsenting users." From the outside, both of these might sound good at first glance. Giving end users more control over their data? Sounds good. Preventing evil websites from conducting "potentially harmful experiments"? Uh, yeah, sounds good.

But, the reality is that both of these ideas, as written, seem incredibly broad and could create all sorts of new problems. First, on the right to be forgotten aspect, the language is painfully vague:

It is the intent of the Legislature to ensure that personal information can be removed from the database of an edge provider, defined as any individual or entity in California that provides any content, application, or service over the Internet, and any individual or entity in California that provides a device used for accessing any content, application, or service over the Internet, when a user chooses not to continue to be a customer of that edge provider.

Any content? Any application? At least the bill does limit "personal information" to a limited category of topics, so we're not just talking about "embarrassing" information, a la the EU's interpretation of the right to be forgotten. But "personal information" is still somewhat vague. It does include "medical information" which is further defined as "any individually identifiable information, in electronic or physical form, regarding the individual’s medical history or medical treatment or diagnosis by a health care professional." So, would that mean that if we wrote about SF Giants pitcher Madison Bumgarner, and the fact that his broken pinky required pins and he won't be able to pitch for a few weeks... we'd be required to take that information down if he requested it? That seems like a pretty serious First Amendment problem.

This is the problem with writing broad legislation that doesn't take into account the reality that sometimes this kind of information is made public for perfectly good reasons.

Similarly, consider the prohibition on "potentially harmful experiments." How does one define "potentially harmful"? Websites are in a never-ending state of experimentation. That's how they work. Everyone gets a different view on sites like Amazon and Netflix and Facebook and Google, because they're all trying to customize how they look for you. Is that "potentially harmful"? Maybe? It's also potentially very, very helpful. Before just throwing out the ability of websites to try to build better products, it seems like we should have a lot more of an exploration of the issue than just saying nothing "potentially harmful" is allowed. Because almost anything can be "potentially harmful."

Again, I'm quite sure that Levine's intentions here are perfectly good. There are very good reasons (obviously!) why so many people are concerned about the data that companies like Facebook, Google, Amazon and others are collecting on people. And these are big companies with a lot of power. But these rules seem vague and "potentially harmful" themselves. Beyond blocking perfectly natural "experimenting" in terms of how websites are run, these rules won't just impact those giants, but every website, including small ones like, say, this blog. Can we experiment in how we display our information? Or is that "potentially harmful" in that it might upset some of our regulars? That may sound silly, but under this law, it's not at all clear what is meant by "potentially harmful."

There are important discussions to be had about protecting individuals' privacy, and about experiments done by large companies with lots of data. But the approach in this bill seems to just rush into the fray without bothering to consider the actual consequences of these kinds of broad regulations.


Posted on Techdirt - 11 April 2018 @ 3:33am

Latest EU Copyright Plan Would Ban Copyright Holders From Using Creative Commons

from the because-that's-how-stupid-things-have-gotten dept

We recently noted that the latest version of the EU's copyright directive, being pushed by MEP Axel Voss (though the metadata showed that it actually came from the EU Commission), would bring back horrible censorial ideas like mandatory filtering. As we noted, such a plan would likely kill important sites like Github, which would have trouble functioning as a repository for sharing code if it had to block... sharing of code. But the plan keeps getting worse. As MEP Julia Reda recently explained, with each new version that Voss puts out, the end results are more and more ridiculous. Under the latest, it includes:

  1. News sites should not be able to give out free licenses (an “inalienable right to remuneration”)
  2. Press agencies should also be granted this right – effectively giving them control over the spreading of facts
  3. Money publishers make from the law should be shared with journalists in some cases
  4. There should be an exception for individuals who share news content for “legitimate private and non-commercial uses”
  5. A newly added justification of the law is to supposedly fight fake news

Many of these ideas are similar to what Spain implemented back in 2014, as a form of a "link tax" with the goal of forcing Google to pay any publication that it sent traffic to (which, you know, sounds kind of backwards, especially given how much emphasis sites put on search engine optimization). In response to that, Google News pulled out of Spain entirely, and a study a year later found that the law ended up doing quite a lot of harm to Spanish publications -- especially smaller ones.

However, as Creative Commons noted in response to this latest proposal, the most ridiculous part of all of this is that it doesn't allow sites that want to share their content to do so:

This press publisher’s right (also commonly known as the “Link Tax”) already poses a significant threat to an informed and literate society. But Voss wants to amplify its worst features by asserting that press publishers will receive—whether they like it or not—an “inalienable right to obtain an [sic] fair and proportionate remuneration for such uses.” This means that publishers will be required to demand payment from news aggregators.

This inalienable right directly conflicts with publishers who wish to share freely and openly using Creative Commons licenses. As we’ve warned before, an unwaivable right to compensation would interfere with the operation of open licensing by reserving a special and separate economic right above and beyond the intention of some publishers. For example, the Spanish news site eldiario.es releases all of their content online for free under the Creative Commons Attribution-ShareAlike license. By doing so, they are granting to the public a worldwide, royalty-free license to use the work under certain terms. Other news publishers in Europe using CC licenses that could also find themselves swept up under this new provision include La Stampa, 20 Minutos, and openDemocracy.

Forcing publishers who use CC to accept additional inalienable rights to be remunerated violates the letter and spirit of Creative Commons licensing and denies publishers the freedom to conduct business and share content as they wish. The proposal would pose an existential threat to the over 1.3 billion CC-licensed works online, shared freely by hundreds of millions of creators from around the world.

Once again, this appears to be copyright policy driven solely by the interests of a single party: big publishers who are annoyed at Google for aggregating news and are demanding payment. It doesn't take into account (1) whether this is necessary, (2) whether it makes sense, (3) what the impact will be on other aggregators, (4) what the impact will be on tons of other publications, and (5) what is in the best interest of the public.

It's a pretty bad way to make policy, though it's all too common when it comes to copyright.


Posted on Techdirt - 10 April 2018 @ 10:40am

Vimeo Copyright Infringement Case Still Going Nearly A Decade Later, With Another Partial Win For Vimeo

from the this-case-will-never-end dept

I'll admit that I'd forgotten this case was still going on, but after nearly a decade, there it is. The case involves record labels suing video hosting site Vimeo for copyright infringement. The case, which was first filed in 2009, initially focused on Vimeo's promotion of so-called "lipdubs." Vimeo is a much smaller competitor to YouTube for hosting videos, but in the 2007 to 2009 timeframe got some attention for hosting these "lipdubs" of people singing along to famous songs. Perhaps the most famous was one done by the staff of Vimeo itself. The case has taken many, many, many twists and turns.

Back in 2013, the record labels got a big win on two points: first, the court said that Vimeo might be liable for so-called "red flag" infringement (i.e., knowing that something was absolutely infringing and doing nothing about it); second, it held that the DMCA safe harbors did not apply to songs recorded prior to February 15, 1972. If you don't recall, pre-1972 sound recordings did not get federal copyright protection (their compositions did, but the recordings themselves were covered only by state law). So that got appealed, and in 2016 the 2nd Circuit said of course those works are covered by the DMCA's safe harbors. The Supreme Court was petitioned, but declined to hear the case.

And thus, the case goes back down to the district court again, with Vimeo now trying to get other claims (such as "unfair competition") dismissed under the DMCA's safe harbor provisions. And the latest ruling grants... some of them. It says that now that it's been told by the appeals court that the DMCA's safe harbors do apply to pre-1972 works, it believes that the unfair competition claims are really based on the copyright claims, and thus Vimeo is protected.

The question here is whether it can also apply to non-copyright claims that are founded on copyright infringement. The answer is yes. In the same way that the statute does not distinguish between federal and state copyright-infringement claims... it does not distinguish between copyright-infringement claims and other types of claims that result in "liab[ility]... for infringement of copyright." Instead, the safe harbor precludes liability for a particular type of conduct--namely, "infringement of copyright." This reading plainly encompasses copyright-infringement claims, because "[o]ne who has been found liable for infringement of copyright under state [or federal] laws has indisputably been found 'liable for infringement of copyright.'".... But it also covers other claims for which liability requires proof of copyright infringement. Whenever copyright infringement is a necessary element of a claim, liability for that claim amounts to liability "for infringement of copyright" under the DMCA because no liability could be imposed absent the relevant copyright infringement.

The record labels try to get around this by arguing unfair competition is totally different, but the court rightly recognizes that if true, this would allow any copyright holder to completely get around the DMCA's safe harbors by throwing an unfair competition claim in. Furthermore, it points out that if there is no infringement of the works under the DMCA then Vimeo hasn't misappropriated any works and "without misappropriation, Plaintiffs' unfair-competition claims fail."

That's mostly good news for Vimeo, which gets a bunch of claims dismissed. But not all of them. Going way back to earlier in the case, there were still claims on post-1972 sound recordings over the possibility of "red flag knowledge" which had been sitting around, and here the court finds that some of the pre-1972 songs could also have red flag knowledge. So it dismisses some claims and lets the others proceed as the case moves forward.

As to each of those claims, Vimeo apparently concedes that Plaintiffs have alleged red-flag knowledge... The Court at this stage must accept those allegations as true and draw all reasonable inferences in Plaintiffs' favor.... And bad faith generally "may be inferred from the [defendant's] actual or constructive knowledge of the [plaintiff's]" property right..... There are also other allegations in the Amended Complaint from which a jury could reasonably infer that Vimeo had some sort of more general intent to profit at Plaintiff's expense from its users' infringements of the pre-1972 recordings. Thus, based on the Amended Complaint and Vimeo's concession that Plaintiffs have at least alleged red-flag knowledge as to the remaining unfair-competition claims, those 59 claims survive Vimeo's motion to dismiss...

And thus, the case continues. As law professor Eric Goldman notes about this case, the fact that such a case is limping along in its 9th year, and still hasn't even reached the summary judgment stage, suggests some worrisome things for innovation in this market:

Consider this: have you ever pondered why YouTube is the dominant video hosting platform? Here’s one hypothesis to explore. It took YouTube nearly a decade, and well over $100M, to eventually settle its DMCA lawsuit. YouTube’s competitor Veoh won its DMCA safe harbor defense in court but ran out of money and dropped out of the industry along the way. YouTube’s competitor Vimeo has been hemorrhaging cash fighting this litigation since 2009. And what potential video hosting investor wants to shovel the first $100M+ of raised capital into the inevitable DMCA lawfare with the copyright owners–before you even start building a viable or profitable business? I think we can connect the dots between the lack of competition in video hosting and the safe harbor’s (defective) design.


Posted on Techdirt - 9 April 2018 @ 12:02pm

Facebook Derangement Syndrome: The Company Has Problems, But Must We Read The Worst Into Absolutely Everything?

from the tough-to-take-people-seriously dept

Since the whole Facebook/Cambridge Analytica thing broke, we've been pointing out that there are many, many valid concerns about things Facebook has done, but people seem to be freaking out about things it didn't actually do -- and that's bad, because freaking out about the wrong things will make things worse, not better. Indeed, that seems to be the direction things are heading in.

One thing I've noticed in having this discussion a few times now, both online and off, is that there appears to be a bit of Facebook derangement syndrome going on. It seems to go something like this: Facebook did some bad things concerning our privacy, and therefore every single possible thing that Facebook does or Mark Zuckerberg says must have some evil intent. This is silly. Not only is it obviously wrong, but (more importantly) it makes it that much more difficult to have a serious discussion on the actual mistakes of Facebook and Zuckerberg, and to find ways to move forward productively.

I'll give one example of this in practice, because it's been bugging me. Back in January, in the podcast we had with Nabiha Syed about free speech and the internet, where the question of platform moderation came up, I brought up an idea I've discussed a few times before. Noting that one of the real problems with platform moderation is the complete lack of transparency and/or due process, I wondered whether or not there could be an independent judicial-type system that could be set up to determine whether or not an account truly violated a site's policies. As I noted in the podcast, there could clearly be some problems with this (our own judicial system is costly and inefficient), but I still think there may be something worth exploring there. After all, one reason why so many people get upset about internet companies making these kinds of decisions is that they don't know why they're being made, and there's no real way to appeal. An open judicial system of sorts could solve at least some of those problems, bringing both transparency and due process to the issue.

And while I've talked about this idea a few times before, I've never seen anyone else appear to take it seriously... until I was surprised to see Zuckerberg suggest something similar in his interview with Ezra Klein at Vox. That interview has been criticized for being full of softball questions, which is pretty fair criticism. But I still found this part interesting:

Here are a few of the principles. One is transparency. Right now, I don’t think we are transparent enough around the prevalence of different issues on the platform. We haven’t done a good job of publishing and being transparent about the prevalence of those kinds of issues, and the work that we’re doing and the trends of how we’re driving those things down over time.

A second is some sort of independent appeal process. Right now, if you post something on Facebook and someone reports it and our community operations and review team looks at it and decides that it needs to get taken down, there’s not really a way to appeal that. I think in any kind of good-functioning democratic system, there needs to be a way to appeal. And I think we can build that internally as a first step.

But over the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

Huh. That's almost exactly what I suggested. Again, I also see some potential problems with this kind of setup and am not 100% convinced it's the best idea -- but it does solve some of the very real existing problems. But the knee-jerk "everything Zuckerberg says must be bad" crowd kinda took this statement and ran with it... straight into a wall. Here's what Laura Rosenberger, a former high-level government staffer, tweeted in response to that part of Zuck's interview:

If you can't read it, she says:

This is terrifying. Facebook essentially sees itself becoming a system of world governance, complete with its own Supreme Court.

So, first of all, this gets what Zuckerberg said exactly backwards. Indeed, it takes a special kind of "must-hate-on-everything-he-says" attitude to misread a statement about being more transparent and more accountable to an outside set of arbitrators, and turn it into Facebook wants to build its own Supreme Court. I mean, he literally says it should be an outside panel reviewing Facebook's decisions, and she turns it into "Facebook's own Supreme Court."

But, of course, her tweet got tons of retweets, and lots of people agreeing and chipping in comments about how Zuckerberg is a sociopath and dangerous and whatnot. And, hey, he may very well be those things, but not for what he said here. He actually seemed to be recognizing the very real problem of Facebook having too much power to make decisions that have a huge impact, and actually seemed to open up the idea of giving up some of that power to outside arbitrators, and doing so in a much more transparent way. Which is the kind of thing we should be encouraging.

And, instead, he gets attacked for it.

If that's what happens when he actually makes a potentially good suggestion that results in more transparency and due process, then why should he bother to keep trying? Instead, he can do what people keep demanding he do, and become an even more powerful middleman, with even less transparency and more control over everyone's data -- which he could now do in the name of "protecting your data."

So can we please get past this Facebook derangement syndrome where people are so quick to read the worst into everything Facebook does or Zuckerberg says that we actively discourage the good ideas and push him towards even worse ideas?

