Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick
Techdirt Insider

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick



Posted on Techdirt - 18 December 2018 @ 11:58am

As A Final Fuck You To Free Speech On Tumblr, Verizon Blocked Archivists

from the seriously-guys? dept

By now, of course, you're aware that the Verizon-owned Tumblr (which was bought by Yahoo, which was bought by Verizon and merged into "Oath" with AOL and other no longer relevant properties) has suddenly decided that nothing sexy is allowed on its servers. This took many by surprise because apparently a huge percentage of Tumblr was used by people to post somewhat racy content. Knowing that a bunch of content was about to disappear, the famed Archive Team sprang into action -- as it has done many times in the past. The team set out to archive as much as possible of the Tumblr content that was about to be disappeared down the memory hole... and it turns out that Verizon decided, as a final "fuck you," to cut them off. Jason Scott, the mastermind behind the Archive Team, announced over the weekend that Verizon appeared to be blocking their IPs:

On Sunday, Scott announced that the Archive Team has figured out a way to get past the blocks:

Still, this is a pretty fucked up thing for Verizon to do. It's one thing to decide to completely change the kind of content you host. That's their call. But, at the very least, allow the people who focus on archiving the internet for historical purposes the chance to actually do what they do best. Blocking the Archive Team is a truly obnoxious move, cementing Verizon's reputation as really not caring one bit about the damage the company does.


Posted on Techdirt - 18 December 2018 @ 9:35am

YouTube's $100 Million Upload Filter Failures Demonstrate What A Disaster Article 13 Will Be For The Internet

from the your-future-internet dept

The entire Article 13 debate is a weird one. It appears that both the recording industry and the film industry are going for broke on this one. The lobbying on this started a few years back, with the rather clever but completely bogus idea of the "value gap."

In case you haven't followed it, the idea of the "value gap" is that (1) YouTube pays musicians and record labels less per stream than Spotify and Apple Music do. (2) YouTube's general purpose video hosting platform is protected by intermediary protection laws (DMCA 512 in the US, Article 14 of the E-Commerce Directive in the EU), allowing users to upload whatever they want, and YouTube only has to take down infringing works upon notice. (3) Services like Spotify and Apple Music license all their works. (4) The "lower rates" that YouTube pays must be the result of the safe harbor, and the difference in payments is the "value gap." Article 13, then, is supposed to "fix" the value gap by completely removing any notice-and-takedown safe harbor for copyright-covered works.

Of course, almost all of this is bullshit. YouTube is used in very, very different ways from Spotify and Apple Music. While YouTube does have a competing music streaming service that is similar to Spotify/Apple Music, its payment rates there are equivalent. But on the general open platform, the rates are different. This is not because of the safe harbors, but because people use the platforms very, very differently. People use Spotify/Apple Music almost like radio -- to put on music that is constantly streaming playlists of songs. YouTube has all sorts of content, most of it not music, and while some may use it as a radio-style experience, that is fairly rare. And the recording industry has always received different rates based on different platforms and different kinds of usage.

Meanwhile, Article 13 will do nothing to solve the "problem" that all the "value gap" people keep insisting is a problem. That's because Article 13 will basically require an upload filter that will spot infringing works and block them before they get on the site (there's more to it than that, but that's a basic approximation of what the law will require in practice). Basically the only company that has actually done this already... is YouTube! YouTube has its ContentID system, which it has spent over $100 million developing, and which can block uploads and pull down content.

And... let's take a look at just how much damage such a system causes. Remember, YouTube has spent more on its filter than anyone else (by far) and it is considered easily the most sophisticated and advanced such filter.

And it sucks.

Last week, I saw musician Dan Bull (who wrote/performed the Techdirt podcast theme song) complaining that he had received a copyright claim on a video that was entirely his own work -- from someone whose work was not in the video at all:

The issue, apparently, was that both Dan Bull and the claimant -- another independent rap artist -- had used the same instrumental, which was available on a "non-exclusive" basis, meaning lots of artists could use it. That indie artist then tried to monetize their own work, but ContentID flagged anyone using the same properly licensed instrumental and gave them a strike.
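To see why this failure mode is baked into fingerprint-matching systems, here's a deliberately simplified sketch. This is not YouTube's actual ContentID code or API -- all the names and the hash-overlap logic are hypothetical illustrations -- but it captures the core problem: if a claimant's reference file contains a non-exclusively licensed instrumental, anyone else lawfully using that same instrumental will overlap with the reference and get flagged.

```python
# Hypothetical, highly simplified sketch of fingerprint-style matching.
# Real systems hash short audio windows; here each "segment" is just a
# string standing in for an acoustic hash. None of these names are
# YouTube's actual API.

def fingerprint(track_segments):
    # Reduce a track to its set of acoustic hashes.
    return set(track_segments)

# Two independent artists legally license the same instrumental.
instrumental = ["beat_hash_1", "beat_hash_2", "beat_hash_3"]

claimant_track = fingerprint(instrumental + ["claimant_vocals"])
dan_bull_track = fingerprint(instrumental + ["dan_bull_vocals"])

# The claimant registers their whole track as a reference file...
reference_db = {"claimant_track": claimant_track}

def find_claims(upload, db, threshold=0.5):
    # ...and any upload sharing enough hashes with a reference gets
    # flagged -- including other lawful uses of the same instrumental.
    claims = []
    for ref_name, ref_fp in db.items():
        overlap = len(upload & ref_fp) / len(upload)
        if overlap >= threshold:
            claims.append(ref_name)
    return claims

# Dan Bull's original track shares 3 of its 4 hashes with the
# claimant's reference, so it gets a (bogus) claim.
print(find_claims(dan_bull_track, reference_db))  # ['claimant_track']
```

The fingerprint database has no concept of "non-exclusive license" -- it only sees overlapping audio, so whoever registers the shared instrumental first effectively claims everyone else's lawful use of it.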

And, even the way in which YouTube communicated with Dan Bull was ridiculous:

If you can't see that, YouTube told him that "your copyright dispute is currently being reviewed by the copyright owner," to which Dan Bull rightly pointed out that's bullshit, because it's not "the copyright owner" who is reviewing it.

A day later, someone else pointed me to a nearly identical situation with another super popular YouTube musician, The Fat Rat. A third party claimed copyright on his song and YouTube rejected The Fat Rat's dispute.

And then, soon after that, he posted a frustration similar to Dan Bull's, when YouTube incorrectly told him that this was in the hands of "the content owner," when that's not whose hands it's in at all:

That's just two examples of YouTube's $100 million upload filter system incorrectly claiming copyright on music -- examples I randomly came across in just the past few days. But it's not hard to find tons of other examples of this happening. Here's someone pointing out that each and every one of his videos is being copyright claimed, one after another:

And that seems to happen quite often too. Here's an incredible video from a popular YouTuber explaining in great detail how every single one of his videos had been copyright claimed:

In case you can't watch that, basically the guy legally licensed some music to be used as his outro music (paying the musician $50 for it, as per their request, even though the musician later claimed anyone is free to use the music, so long as they include a Spotify link to his stuff). But then a bunch of companies got involved, including Distrokid and Audiam, which claimed basically every single one of his videos, and his email conversations with both companies are maddening, as they don't seem to understand or care about what's going on -- and basically tell him that every time this happens he would need to email them again and hope they remove the claims, but they refuse to whitelist his account.

And here's another person highlighting that all 3,000+ of his videos just got a ContentID copyright claim:

And here's someone who got a copyright claim on a private, unlisted video that the uploader needed for a class assignment:

Those are just the first few results that pop up for a quick search on Twitter of "Youtube" and "copyright." There are lots more. Indeed, there's an interesting one from the super popular YouTuber @Jack_Septic_Eye (Sean William McLoughlin) who explains that he accidentally caused a ton of bogus copyright claims by switching YouTube networks:

In short, the $100 million ContentID system is a total fucking mess.

Bring that back around to Article 13, which will create massive fines should YouTube fail to catch any copyright-covered works with (per the industry's wishes) no safe harbors for platforms, and this problem will only get significantly worse. Under Article 13, YouTube would be crazy to give anyone even the benefit of the doubt if they receive a copyright claim. Not only that, it would be effectively required to not just block all of these works, but to prevent many of them from ever being uploaded in the first place.

What the various people pushing for Article 13 don't seem to realize is that bogus copyright claims happen all the time. Or there are mixups around content that multiple people license. Or there are multiple middlemen making claims on behalf of copyright holders with no knowledge of who has licensed and who has not. Article 13 doesn't take any of that into account. It just says "license everything." Of course, the big labels/studios don't give a fuck if this leads to censorship of you piddly little independents and amateurs. They just want YouTube to hand them a giant check to "license" their entire catalog. That's the real endgame here. That it would block out independents and amateurs (i.e., competitors) is just icing on the cake. And, of course, it ignores that if YouTube spent $100 million on this system and it already has so many problems, just imagine how poorly everyone else's mandatory filters will work.

Article 13 is a horrible solution to a "problem" that doesn't even exist. The fact that YouTube's $100 million ContentID system is so full of bogus claims shows just one of the many problems with filters -- and making them mandatory won't suddenly make YouTube pay. The whole thing is just designed to be leverage. The labels and studios want to use Article 13 as a weapon against YouTube, basically saying "give us all your money in a giant license or we'll sue you over and over and over again." And, of course, should they actually get that, the money is unlikely to make its way back to any of the actual artists.

Europe is doing a big thing badly, and Article 13 is an unmitigated disaster.


Posted on Free Speech - 17 December 2018 @ 3:41pm

Why Is Congress Trying To Pass An Obviously Unconstitutional Bill That Would Criminalize Boycotts Of Israel?

from the don't-be-ridiculous dept

As we've noted in the past on articles discussing this topic, I recognize that people have very, very, very strong views on both Israel and the whole "BDS" movement, and (trust me) you're not going to convince anyone about the rightness or wrongness of those views in our comments. However, even if you support the Israeli government fully, and think the BDS movement is a sham, hopefully you can still agree that an American law criminalizing supporting the BDS movement is blatantly unconstitutional.

It is true, if horrifying, that a bunch of states have passed such laws, all of which are quite clearly unconstitutional as well. Challenges to the state laws in Kansas and Arizona have already been (easily) successful. There are other legal challenges against the other laws, and they will almost certainly be tossed out as well.

The impact of these laws is absolutely ridiculous as well, with one even barring Houston residents from receiving hurricane relief unless they signed a pledge promising not to boycott Israel. That's so plainly a First Amendment violation, it's amazing that so many states have followed suit. And it's depressing that Congress is looking to do the same:

Earlier versions of the Israel Anti-Boycott Act would have made it a crime — possibly even subject to jail time — for American companies to participate in political boycotts aimed at Israel and its settlements in the occupied Palestinian territories when those boycotts were called for by international governmental organizations like the United Nations. The same went for boycotts targeting any country that is “friendly to the United States” if the boycott was not sanctioned by the United States.

Last week, the ACLU saw an updated version being considered for inclusion in the spending bill (though this text is not publicly available). While Hill offices claim the First Amendment concerns have been resolved, and potential jail time has indeed been eliminated as a possible punishment, the bill actually does nothing to cure its free speech problems. Furthermore, knowingly violating the bill could result in criminal financial penalties of up to $1 million. Were this legislation to pass, federal officials would have a new weapon at their disposal to chill and suppress speech that they found objectionable or politically unpopular.

Boycotts are clearly a freedom of expression issue. The entire point of these kinds of boycotts is to express your views on something happening in the world. To say that it's illegal to support a boycott is crazy. And it's even crazier that the US would pass such a law banning the boycott of a foreign country. This is made even crazier by the fact that it's quite obviously legal to call for a boycott of a state within the US. The Intercept's recent article highlights the insanity of this situation using NY Governor Andrew Cuomo:

One of the first states to impose such repressive restrictions on free expression was New York. In 2016, Democratic Gov. Andrew Cuomo issued an executive order directing all agencies under his control to terminate any and all business with companies or organizations that support a boycott of Israel. “If you boycott Israel, New York State will boycott you,” Cuomo proudly tweeted, referring to a Washington Post op-ed he wrote that touted that threat in its headline.

As The Intercept reported at the time, Cuomo’s order “requires that one of his commissioners compile ‘a list of institutions and companies’ that — ‘either directly or through a parent or subsidiary’ — support a boycott. That government list is then posted publicly, and the burden falls on [the accused boycotters] to prove to the state that they do not, in fact, support such a boycott.”

[....]

What made Cuomo’s censorship directive particularly stunning was that, just two months prior to issuing this decree, he ordered New York state agencies to boycott North Carolina in protest of that state’s anti-LGBT law. Two years earlier, Cuomo banned New York state employees from all non-essential travel to Indiana to boycott that state’s enactment of an anti-LGBT law.

So, according to Cuomo, you must boycott North Carolina and Indiana, but it's a crime to boycott Israel. That's... messed up.

Again, even if you think that the BDS movement is really anti-Semitic, you should at least be able to understand the serious First Amendment problems with any such law. And the idea that Congress might try to slip something through during the lameduck session before the new Congress starts suggests even they know how ridiculous such a law would be.


Posted on Techdirt - 17 December 2018 @ 12:03pm

NY Times Columnist Nick Kristof Led The Charge To Get Facebook To Censor Content, Now Whining That Facebook Censors His Content

from the karma-nick,-karma dept

We've talked in the past about NY Times columnist Nick Kristof, who is a bit infamous for having something of a savior complex in his views. He is especially big on moral panics around sex trafficking, and was one of the most vocal proponents of FOSTA, despite not understanding what the law would do at all (spoiler alert: just as we predicted, and as Kristof insisted would not happen -- FOSTA has put more women at risk). When pushing for FOSTA, Kristof wrote the following:

Even if Google were right that ending the immunity for Backpage might lead to an occasional frivolous lawsuit, life requires some balancing.

For example, websites must try to remove copyrighted material if it’s posted on their sites. That’s a constraint on internet freedom that makes sense, and it hasn’t proved a slippery slope. If we’re willing to protect copyrights, shouldn’t we do as much to protect children sold for sex?

As we noted at the time, this was an astoundingly ignorant thing to say, but of course now that Kristof helped get the law passed and put many more lives at risk, the "meh, no big deal if there are some more lawsuits or more censorship" attitude seems to be coming back to bite him.

You see, last week, Kristof weighed in on US policy in Yemen. The core of his argument was to discuss the horrific situation of Abrar Ibrahim, a 12-year-old girl who is starving in Yemen, and weighs just 28 pounds. There's a giant photo of the emaciated Ibrahim atop the article, wearing just a diaper. It packs an emotional punch, just as intended.

But, it turns out that Facebook is blocking that photo of Ibrahim, claiming it is "nudity and sexual content." And, boy, is Kristof mad about it:

Hey, Nick, you were the one who insisted that Facebook and others in Silicon Valley needed to ban "sexual content" or face criminal liability. You were the one who insisted that any collateral damage would be minor. You were the one who said there was no slippery slope.

Yet, here is a perfect example of why clueless saviors like Kristof always make things worse, freaking out about something they don't understand, prescribing the exact wrong solution. Moderating billions of pieces of content leads to lots of mistakes. The only way you can do it is to set rules. Thanks to laws like FOSTA -- again, passed at Kristof's direct urging -- Facebook has rules about nudity that include no female nudity/nipples. This rule made a lot of news two years ago when Facebook banned an iconic photo from the Vietnam War showing a young, naked girl fleeing a napalm attack. Facebook eventually created a "newsworthy" exception to the rule, but that depends on the thousands of "content moderators" viewing this content knowing that this particular photo is newsworthy.

And, thanks to FOSTA, the cost of making a mistake is ridiculously high (possible criminal penalties), and thus, the only sane thing for a company like Facebook to do is to take that content down and block it. That's exactly what Nick Kristof wanted. But now he's whining because the collateral damage he shrugged off a year ago is himself. Yeah, maybe next time Nick should think about that before shrugging off what every single internet expert tried to explain to him at the time.

But hey, Nick, as someone once said, maybe the law you pushed for leads to an occasional frivolous takedown of important content about the impact of US policy on an entire population, but "life requires some balancing." Oh well.


Posted on Techdirt - 17 December 2018 @ 9:37am

Want A Box At The Grammies With Two Bigshot Congressmen? That'll Be $5,000 (Entertainment Lobbyists Only)

from the soft-corruption dept

We've talked a lot in the past about the concept of soft corruption. These are the kinds of practices that are most likely legal, and possibly even common among the political class, but which absolutely stink of corruption to the average American. And that's a huge problem, not just because of the general ethical questions raised by such soft corruption, but because it creates a cynical American public that does not trust politicians to adequately represent their interests.

Here's just one example. It appears that a bunch of industry lobbyists have been receiving the following email:

If you can't read it, it says the following:

Subject: Chairman Jeffries Grammy Weekend Feb 8-10

Good afternoon!

Chairman Hakeem Jeffries and Ranking Member Nadler will be splitting a box for the upcoming Grammy Awards Feb 8-10. The room block is at the newly renovated Sheraton Grand Los Angeles, which is a short walk from the Staples Center. To access the invite and registration form, please CLICK HERE

Tickets are $5,000 each. If you need a second ticket then please let me know and I will put you in touch with Yuichi Miyamoto from Ranking Member Nadler's team.

Please let me know as soon as possible if you want to attend since we have a limited number of spots. We also request that you use the room block for the stay.

Thanks again!

The link then takes people to an official invite to hang out with Reps. Jeffries and Nadler at the Grammies. Just $5 grand a pop. They'll even book your hotel for you! What a deal! What a steal!

Jeffries and Nadler are both bigshots in Congress. Jeffries (who originally ran for Congress stating: "Washington is broken. Congress is dysfunctional.... We deserve more") was just elected to be the chair of the Democratic Caucus, which makes him an incredibly powerful Congressman. Nadler is the current "Ranking Member" on the powerful Judiciary Committee, and once the new Congress begins, will become the Chair of the Judiciary Committee -- the very committee in Congress that is in charge of copyright law. Nadler has a long history of pushing horrific anti-public copyright bills. Back in 2012, he proposed what I jokingly referred to as the RIAA Bailout Act of 2012, as the entire point of the bill was to drastically increase the rates internet radio would have to pay the record labels. He's also mocked digital rights activists, calling the idea that "you bought it, you own it" "an extreme digital view."

So, it certainly does seem notable that both of these Congressional Reps (1) have "a box" at the Grammies and (2) are actively asking industry lobbyists to pay $5,000 per ticket to hang out with them at the Grammies.

Again, some may suggest that this is "how fundraising is done" in Congress (though, frankly, it's usually a bit less blatant). But, even so, is this how it should be done? Doesn't anyone in Jeffries' or Nadler's office think that going to a key recording industry event and asking the industry's biggest companies to pay them $5k to spend some time with them... looks really, really bad? And, relatedly, how the hell can we trust that the various copyright bills that are certain to be under Nadler's control over the next two years (at least) are actually written for the benefit of the public, as per the Constitutional requirement, rather than the benefit of his $5,000 paying "friends" who hang out with him at the biggest celebration of the industry over which Nadler gets to set key rules?

In the past, there have been ethical questions raised by Congressional Representatives hosting fundraisers targeting industries they have jurisdiction over regulating, but these things tend to get swept under the rug in Congress as part of "the way things are done."

But, it certainly stinks of the kind of "soft corruption" that makes the public distrust the government. Indeed, it's the kind of thing that makes people say: "Washington is broken. Congress is dysfunctional.... We deserve more."


Posted on Free Speech - 14 December 2018 @ 9:40am

Super Injunction Silences News About Vatican Official's Child Molestation Conviction, And That's Bullshit

from the blatant-censorship dept

We've written in the past about things like "super injunctions" in the UK and elsewhere, which often put a huge and near absolute gag order on writing about a famous person enmeshed in some sort of scandal. Apparently Australia has such a thing as well -- and it's now scaring off tons of publications from writing about the fact that George Pell, the Vatican's CFO, often called the "3rd most powerful person in the Vatican," was convicted on all charges that he sexually molested choir boys in Australia in the 1990s. However, the press is barred from reporting on it by one of those gag orders. The Herald Sun in Australia did post a brilliant, Streisand Effect-inducing front page display about how it was being censored from publishing an important story:

Though, if you click on the link in that tweet it now shows an error message reading Error 400 and "Content is deleted, expired or legal killed." Legal killed.

And here's the thing. Very few publications -- even those outside of Australia -- seem to be willing to pick up on the story. To their credit, the NY Post, owned by Australian Rupert Murdoch, has posted about it, as has Margaret Sullivan at the Washington Post, who included an impassioned plea for this kind of censorship not to be allowed to continue.

The secrecy surrounding the court case — and now the verdict — is offensive. That’s especially so because it echoes the secrecy that has always been so appalling a part of widespread sexual abuse by priests.

That has changed a great deal in recent years — in part because of the Boston Globe’s Pulitzer Prize-winning investigation in 2002 that broke open a global scandal and was the subject of the Oscar-winning film “Spotlight.” (Current Washington Post Executive Editor Martin Baron was executive editor at the Globe at that time.)

But clearly, it hasn’t changed entirely. And the news media shouldn’t be forced to be a part of keeping these destructive secrets.

Steven Spaner, Australia coordinator from the Survivors Network of Those Abused by Priests, told the Daily Beast he felt frustrated and left “in the dark” because of the suppression of news about Pell.

“It’s hard to know if there are any shenanigans going on — things the church did that are illegal themselves,” he said. “There is always suspicion when you don’t know what is going on.”

The story itself was actually broken by The Daily Beast (first link up top), but as that site's editor told Sullivan at the Washington Post, they were a bit worried about doing so:

Editor in chief Noah Shachtman told me that he waded carefully into the dangerous legal waters.

“We understood there could be legal, and even criminal, consequences if we ran this story,” he said. “But ultimately, this was an easy call. You’ve got a top Vatican official convicted of a horrific crime. That’s major, major news. The public deserves to know about it.”

Shachtman said the Daily Beast did its best to honor the suppression order, consulting with attorneys here and in Australia, and even “geo-blocking” the article so that it would be harder to access in Australia, and keeping headlines “relatively neutral.”
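The geo-blocking Shachtman describes can be sketched in a few lines. This is a hypothetical illustration, not the Daily Beast's actual implementation -- the country lookup is stubbed out with fake data, where a real site would query a GeoIP database behind its CDN -- but it shows the basic mechanism: map the requester's IP to a country and withhold the article there.

```python
# Hypothetical sketch of IP-based geo-blocking. The GeoIP lookup is a
# stub with made-up addresses; a real deployment would consult an
# actual IP-to-country database.

BLOCKED_COUNTRIES = {"AU"}  # honor the Australian suppression order

def country_for_ip(ip):
    # Stub: pretend these documentation-range IPs resolve to countries.
    fake_geoip = {"203.0.113.7": "AU", "198.51.100.9": "US"}
    return fake_geoip.get(ip, "UNKNOWN")

def serve_article(ip):
    # HTTP 451 ("Unavailable For Legal Reasons") exists for exactly
    # this situation: content withheld due to a legal demand.
    if country_for_ip(ip) in BLOCKED_COUNTRIES:
        return (451, "Unavailable For Legal Reasons")
    return (200, "article body")

print(serve_article("203.0.113.7"))   # Australian reader: blocked
print(serve_article("198.51.100.9"))  # US reader: gets the story
```

As the article notes, this only makes the story "harder to access" in Australia -- IP-based country lookups are approximate, and a VPN trivially defeats them, which is why it's best-effort compliance rather than true suppression.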

If you do look around, there are a bunch of news articles, including some in Australia, all published after the verdict, talking about how the Pope has "removed" Pell from his "inner circle" and hinting at "historical sexual offences" but not saying that he's been convicted. And even the news of the removal is made to sound rather benign:

A Vatican spokesman said Francis had written to the prelates “thanking them for the work they have done over these past five years”.

Or here's an article from The Australian, again published after the conviction, but not mentioning a word of it, and making it sound like Pell's removal was merely his term being up:

The Vatican said it had written to Cardinal Pell and his two colleagues in late October, telling them their roles on the C9 council had expired at the end of their five-year tenure.

[....]

“In October, the Pope had written to three of the more ­elderly cardinals — Cardinal Pell from Australia, Cardinal ­Errazuriz from Chile and Cardinal Monsengwo of Congo — thanking them for their work,” he said.

“After a five-year term, these three have passed out for the ­moment.”

And the Washington Post's Editor, Marty Baron, has now had to defend publishing Sullivan's piece:

If you can't read that, it says:

This story is a matter of major news significance involving an individual of global prominence. A fundamental principle of The Washington Post is to report the news truthfully, which we did. While we always consider guidelines given by courts and governments, we must ultimately use our judgment and exercise our right to publish such consequential news. Freedom of the press in the world will cease to exist if a judge in one country is allowed to bar publication of information anywhere in the world.

It seems heavily implied by this statement that the Washington Post has been contacted about its story.

Some may argue that there is, in fact, a good reason for the suppression orders. Specifically, the idea is to have trials of prominent figures be "impartial" and not influenced by media coverage. And you can understand the basic reasoning for that -- though, in this case, there is already a conviction, and that seems obviously newsworthy. The response to that argument is that Pell is still facing more such charges in another trial. I'm sympathetic to these arguments, but only to the point that I understand the emotional position from which those arguments are made. I cannot, however, agree that they are good reasons. Yes, media sensationalism around a trial can be an issue, but in the US we've been able to deal with that fairly successfully over the years with the way courts treat jurors and order them not to read the press coverage. Is it a perfect system? Nope. Not at all. But it does mostly function. On the flip side, the ability to do damage through these gag orders is immense.

Among other things, it hides the details of what's happening at the trial, and those details can really matter, as Sullivan's article quote above makes clear. In addition, only being able to reveal details way after the fact very much dilutes or even totally destroys the impact of such stories. It is much harder to make people care about this news much later, after it has been suppressed, than when it first comes out.

On top of that, all of this relies on the idea that those issuing these gag orders always do so with the best of intentions, and that's a huge leap of faith. The opportunity for mischief here is great, as we've seen in the UK with some of its super injunctions.

This kind of thing is one of the reasons why we're so concerned here about encroachments on free speech by governments. The ability to order platforms to censor material is a massively slippery slope. Indeed, in searching for the news coverage about this, I couldn't find any of the actual coverage of the convictions on Google News. I could only find the stories about the much more tame "removed from the inner circle." It may be that Google News algorithms picked up on that story more prominently (in part because there are many more such stories) or it could be because Australia has told Google News not to post such stories. At the very least, it's ambiguous and concerning.

Having a free and open press is a pretty key aspect of democracy. Australia is making it clear that it doesn't buy into that, and tragically, it's leading to news publications around the world choosing not to report on a huge story with immense public impact.


Posted on Techdirt - 13 December 2018 @ 2:39pm

No Agreement Made On EU Copyright Directive, As Recording Industry Freaks Out About Safe Harbors Too

from the maybe-dump-article-13 dept

Today was the latest set of "Trilogue" negotiations for the EU Copyright Directive, between the EU Council, the EU Commission and the EU Parliament. When the trilogues were first scheduled, this was the final negotiation and the plan was to hammer out a final agreement by today. As we've been reporting lately, however, it still appeared that there was massive disagreement about what should be in Article 13 (in particular). And so, today's meetings ended with no deal in place, and a new trilogue negotiation set for January 14th. As MEP Julia Reda reports, most negotiators are still pushing for mandatory upload filters, so there's still a huge uphill battle ahead -- but the more regulators realize how disastrous such a provision would be for the public, the better.

Also worrisome, Reda notes that after the Parliament rejected Article 13 back in July, MEP Axel Voss agreed to add an exception for small businesses that helped get the proposal approved in September. Yet, in today's negotiations, he agreed to drop that small business exception, meaning that if you run a small platform that accepts user generated content, you might need to cross the EU off your list of markets should Article 13 pass.

One other important thing. Earlier this week, we noted that the TV, film and sports legacy companies were complaining that if Article 13 included a basic safe harbor (i.e., rules that say if you do certain things to remove infringing content, you won't be liable), then they no longer wanted it at all -- or wanted it to just be limited to music content. That suggested there might be some separation between the film/TV/sports industries and the music industries. But, no. Right before the trilogues, the legacy recording industries released a similar letter:

The fundamental elements of a solution to the Value Gap/Transfer of Value remain, as acknowledged by all three institutions in their adopted texts, to clarify that UUC services now defined as Online Content Sharing Service Providers (“OCSSP”) are liable for communication to the public and/or making available to the public when protected works are made available and that they are not eligible for the liability privilege in Article 14 of the E-Commerce Directive as far as copyright is concerned. We continue to believe that only a solution that stays within these principles meaningfully addresses the Value Gap/Transfer of Value. Moreover, licensing needs to be encouraged where the rightsholders are willing to do so but at the same time not be forced upon rightsholders.

Therefore, proposals that deviate from the adopted positions of the three institutions should be dismissed.

Unfortunately, for a number of reasons, the text now put forward by the European Commission would need fundamental changes to achieve the Directive’s aim to correct the Value Gap/ Transfer of Value.

For example, solutions that seek to qualify or mitigate the liability of Online Content Sharing Service Providers should be considered with an abundance of caution to avoid the final proposal leaving rightsholders in a worse position than they are in now. Any “mitigation measures”, should they be offered to OCSSPs, must therefore be clearly formulated and conditional on OCSSPs taking robust action to ensure the unavailability of works or other subject matter on their services.

This is pretty incredible when you get past the diplomatic legalese. These music companies are flat out admitting that the entire goal of this bill is to hit internet companies with crippling liability that makes it literally impossible for them to host any user generated content. This isn't -- as they claim -- about a "value gap" (a made up meaningless term). Rather this is the legacy entertainment industry going all in on an attempt to change the internet from a platform for the public, to a locked up platform for gatekeepers. In short, they want to take the internet and turn it into TV. Europe should not let this happen.

43 Comments | Leave a Comment..

Posted on Free Speech - 13 December 2018 @ 9:33am

If You're Worried About Bad EU Internet Regulation, Just Wait Until You See The New Terrorist Regulation

from the bye-bye-free-speech-online dept

This seems to be the year for awful internet regulation by the EU. At least there were some redeeming qualities in the GDPR, but they were few and far between, and much of the GDPR is terrible and is creating real problems for free speech online, while simultaneously undermining privacy and giving repressive governments a new tool to go after critics. Oh, and in the process, it has only made Google that much more dominant in Europe, harming competition.

And, then, of course, there's the still ongoing debate about the EU Copyright Directive, which will also be hellish on free speech. The entire point of Article 13 in that Directive is to wipe away the intermediary liability protections that enable websites to host your content. Without such protections, it is not difficult to see how it will lead to a widespread stifling of ideas, not to mention many smaller platforms for hosting content exiting the market entirely.

But here's the thing: both of those EU regulations are absolutely nothing compared to the upcoming EU Terrorist Regulation. We mentioned this a bit back in August, with the EU Commission pushing for the rule that all terrorist content must be taken down in an hour or face massive fines and possible criminal liability. Earlier this year, Joan Barata at Stanford wrote a compelling paper detailing just how far parts of the proposed regulation go.

Among the many questionable bits of the Terrorist Regulation are that it will apply no matter how small a platform is and even if they're not in the EU, so long as the EU claims they have a "significant number" of EU users. Also, if a platform isn't even based in the EU, part of the proposal would require the companies to hire a "representative" in the EU to respond to these takedown demands. If the government orders a platform to take down "terrorist" content, a platform has to take it down within an hour and then set up "proactive measures" to stop the same content from ever being uploaded (i.e., mandatory filters).
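The only practical way to comply with that kind of "proactive measure" is automated matching, and the simplest version is a fingerprint blocklist. Here's a deliberately naive sketch (purely illustrative, not any platform's actual system) that shows the core problem with the approach: exact matching blocks content regardless of who posts it or why, while a trivial alteration slips right past it.

```python
import hashlib

class ReuploadFilter:
    """Naive 'proactive measure': block exact re-uploads of removed content."""

    def __init__(self):
        self.blocked_hashes = set()

    def takedown(self, content: bytes) -> None:
        # Takedown order received: remember the content's fingerprint.
        self.blocked_hashes.add(hashlib.sha256(content).hexdigest())

    def allow_upload(self, content: bytes) -> bool:
        # Only the bytes matter -- the filter knows nothing about context,
        # so a journalist or researcher re-posting is blocked identically.
        return hashlib.sha256(content).hexdigest() not in self.blocked_hashes

f = ReuploadFilter()
video = b"clip ordered removed"
f.takedown(video)

print(f.allow_upload(video))            # False: exact re-upload is blocked
print(f.allow_upload(video + b"\x00"))  # True: one extra byte evades the filter
```

Real filters use fuzzier matching than a raw hash, but the tradeoff is the same: loosen the match and more lawful content gets blocked; tighten it and evasion gets easier. Neither failure mode goes away.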

Oh, and of course, this mechanism for rapid and permanent censorship, based solely on the government's say-so, has... a ridiculously vague "definition" of what counts as "terrorist content."

'terrorist content' means one or more of the following information:

(a) inciting or advocating, including by glorifying, the commission of terrorist offences, thereby causing a danger that such acts be committed;
(b) encouraging the contribution to terrorist offences;
(c) promoting the activities of a terrorist group, in particular by encouraging the participation in or support to a terrorist group within the meaning of Article 2(3) of Directive (EU) 2017/541;
(d) instructing on methods or techniques for the purpose of committing terrorist offences.

There are all sorts of problems with this, and as the IP-Watch site notes, this appears to be a recipe for private censorship on the internet.

Recently, a large group of public interest groups sent a letter to EU regulators laying out in great detail all of the problems of the regulation. I'm going to quote a huge chunk of the letter, because it's so thorough:

Several aspects of the proposed Regulation would significantly endanger freedom of expression and information in Europe:

  • Vague and broad definitions: The Regulation uses vague and broad definitions to describe ‘terrorist content’ which are not in line with the Directive on Combating Terrorism. This increases the risk of arbitrary removal of online content shared or published by human rights defenders, civil society organisations, journalists or individuals based on, among others, their perceived political affiliation, activism, religious practice or national origin. In addition, judges and prosecutors in Member States will be left to define the substance and boundaries of the scope of the Regulation. This would lead to uncertainty for users, hosting service providers, and law enforcement, and the Regulation would fail to meet its objectives.
  • ‘Proactive measures’: The Regulation imposes ‘duties of care’ and a requirement to take ‘proactive measures’ on hosting service providers to prevent the re-upload of content. These requirements for ‘proactive measures’ can only be met using automated means, which have the potential to threaten the right to free expression as they would lack safeguards to prevent abuse or provide redress where content is removed in error. The Regulation lacks the proper transparency, accountability and redress mechanisms to mitigate this threat. The obligation applies to all hosting services providers, regardless of their size, reach, purpose, or revenue models, and does not allow flexibility for collaborative platforms.
  • Instant removals: The Regulation empowers undefined ‘competent authorities’ to order the removal of particular pieces of content within one hour, with no authorisation or oversight by courts. Removal requests must be honoured within this short time period regardless of any legitimate objections platforms or their users may have to removal of the content specified, and the damage to free expression and access to information may already be irreversible by the time any future appeal process is complete.
  • Terms of service over rule of law: The Regulation allows these same competent authorities to notify hosting service providers of potential terrorist content that companies must check against their terms of service and hence not against the law. This will likely lead to the removal of legal content as company terms of service often restrict expression that may be distasteful or unpopular, but not unlawful. It will also undermine law enforcement agencies for whom terrorist posts can be useful sources in investigations.

The European Commission has not presented sufficient evidence to support the necessity of the proposed measures. The Impact Assessment accompanying the European Commission’s proposal states that only 6% of respondents to a recent public consultation have encountered terrorist content online. In Austria, which publishes data on unlawful content reports to its national hotline, approximately 75% of content reported as unlawful were in fact legal. It is thus likely that the actual number of respondents who have encountered terrorist content is much lower than the reported 6%. In fact, 75% of the respondents to the public consultation considered the internet to be safe.

And that's not all. The UN's Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (yup, that's the title), David Kaye, has also sent a letter warning of the problems of such a regulation on free speech. It's 14 pages long, but the key point:

...we wish to express our views regarding the overly broad definition of terrorist content in the Proposal that may encompass legitimate expression protected under international human rights law. We note with serious concern what we believe to be insufficient consideration given to human rights protections in the context of to the proposed rules governing content moderation policies. We recall in this respect that the mechanisms set up in Articles 4-6 may lead to infringements to the right to access to information, freedom of opinion, expression, and association, and impact interlinked political and public interest processes. We are further troubled by the lack of attention to human rights responsibilities incumbent on business enterprises in line with the United Nations Guiding Principles on Business and Human Rights.

In other words, yet another European regulation targeting internet companies (many of whom are not based in Europe) that will ultimately lead to (1) greater censorship (2) more consolidation by internet giants, as smaller platforms won't be able to compete, and (3) massive "unintended" consequences for the internet as a whole.

Maybe it's time we just kick the EU off the internet. Let them build their own.

Read More | 36 Comments | Leave a Comment..

Posted on Techdirt - 12 December 2018 @ 12:03pm

TV, Sports & Movie Companies Still Freaking Out That EU Copyright Directive Might Include A Safe Harbor For Internet Platforms

from the fine,-just-drop-Article-13 dept

Last week, as the last round of "trilogue" negotiations was getting underway in the EU on the EU Copyright Directive, we noted a strange thing. While tech companies and public interest groups have been speaking out loudly against Article 13, a strange "ally" also started complaining about it: a bunch of TV, movie and sports organizations started complaining that Article 13 was a bad idea. But... for very different reasons. Their concerns were that regulators had actually finally begun to understand the ridiculousness of Article 13 and had been trying to add some "safe harbors" into the law. Specifically, the safe harbors would make it clear that if platforms followed certain specific steps to try to keep infringing works off their platforms, they would avoid liability. But, according to these organizations, safe harbors of any kind are a non-starter.

Those same groups are back with a new letter that's even more unhinged and more explicit about this. The real issue is that they recently got a ruling out of a German court that basically said platforms are already liable for any infringement, and they're now afraid that Article 13 will "soften" that ruling by enabling safe harbors.

In a letter of 1 December we alerted the three EU institutions that the texts under discussion would undermine current case law of the Court of Justice of the European Union (CJEU) which already makes it clear that online content sharing service providers (OCSSPs) communicate to the public and are not eligible for the liability privilege of Article 14 E-Commerce Directive (ECD). The proposal would further muddy the waters of jurisprudence in this area in light of the pending German Federal Court of Justice (Bundesgerichtshof) referral to the CJEU in a case involving YouTube/Google and certain rightholders, addressing this very issue. The initial goal of Article 13 was to codify the existing case-law in a way that would enable right holders to better control the exploitation of their content vis a vis certain OCSSPs which currently wrongfully claim they benefit from the liability privilege of Article 14 ECD. Unfortunately, the Value Gap provision has mutated in such a way that it now creates a new liability privilege for big platforms and therefore even further strengthens the role of OCSSPs to the direct detriment of rightholders.

First of all, it is complete and utter bullshit to claim that Article 13 was "to codify existing case law." Article 13 was designed to create an entirely new liability regime that deliberately sought to avoid Article 14 of the E-Commerce Directive (ECD). The ECD functions somewhat akin to the DMCA's safe harbors in the US, in that it includes intermediary liability protections for sites that comply with takedown notices in a reasonable manner. The entire point of Article 13 in the EU Copyright Directive was to take copyright out of the E-Commerce Directive and to remove those safe harbors. To claim otherwise is laughable.

It is, of course, hilarious that, having just won a favorable ruling on this very point, these companies are now freaking out that any safe harbor might exist for internet platforms. And here they're explicit about just how opposed to a safe harbor they are:

Last week, we proposed a balanced and sound compromise solution consisting in guidance on the issue of OCSSP liability with reference to the existing jurisprudence of the CJEU. This solution would ensure rightholder collaboration in furtherance of the deployment of appropriate and proportionate measures as well as addressing the potential liability of uploaders where the platform has concluded a license, without the creation of any new safe harbours for big platforms. We continue to believe that this reasonable approach would have broad support, including in the rightholders community and could at the same time conciliate different views of Member States and different political groups in the European Parliament, without the need to give powerful active platforms the gift of a new liability privilege which goes beyond the stated intent of the proposed copyright reform. We also indicated that if, on the contrary, any new safe harbour/”mitigation of liability” would be part of a final trilogue agreement, we want to be excluded from the entire value gap provision.

It's also hilarious that they refer to this as "the value gap provision." The "value gap" is a made up concept by some legacy copyright companies to complain that their business models aren't as all powerful as they used to be, and therefore the government must step in to force other companies to give them money.

Also note the messaging here: they don't talk about what would be best for the public. Just for "the rightsholder community."

Anyway, if they want to be "excluded" from Article 13 entirely, I think that's fine. The best solution here is the obvious one: the EU can drop Article 13 entirely.

Read More | 7 Comments | Leave a Comment..

Posted on Techdirt - 12 December 2018 @ 9:32am

Legacy Copyright Industries Lobbying Hard For EU Copyright Directive... While Pretending That Only Google Is Lobbying

from the because-of-course dept

Have you heard that all of the opposition to the EU Copyright Directive and its hugely problematic Articles 11 and 13 is really being driven by Google lobbying? Most of you probably realized this was nonsense, but it now turns out that not only was the lobbying almost entirely dominated by the legacy copyright players, but a key plank of their lobbying campaign was to falsely allege that all opposition was just Google.

If you've been paying attention at all to the crazy fights over the EU Copyright Directive, you may have heard some claims being passed around that it's somehow "Google" lobbying heavily against the bill. Indeed, all over Twitter, that's the talking point from tons of EU Copyright Directive supporters. After the EU Parliament put the brakes on the bill back in July, I even saw a former RIAA exec (who has since blocked me on Twitter, so I can't show it to you) tweet that this was a clear perversion of the "will of the people" by Google's corporate lobbying. Of course, it's hilarious for that to come from an ex-RIAA exec, who was heavily involved over the past 3 decades in pushing through all sorts of protectionist, anti-public, anti-musician legislation and trade agreements.

But... it's a talking point. And it's one that lots of people have jumped on. Digital Music News, which is always quick to restate the recording industry's talking points, claimed that Google spent more than $36 million lobbying over Article 13. Billboard Magazine published a similar claim. Various music industry groups, in what appeared to be closely coordinated messaging, all started blaming Google and "the tech giants" for any opposition to the EU Copyright Directive -- which, mind you, would change the fundamental ways in which the internet works. Yet, in their minds, all of the opposition came from the internet giants.

Here's Geoff Taylor from BPI:

The US tech lobby has been using its enormous reach and resources to try to whip up an alarmist campaign...

And here's Richard Ashcroft from PRS for Music:

the Internet giants... have whipped up a social media storm of misinformation about the proposed changes in order to preserve their current advantage.

And how about UK Music's Michael Dugher, who really wants to blame Google for everything:

Some absolute rubbish has been written about the EU’s proposed changes on copyright rules.

Amongst the ludicrous suggestions from the likes of Google is the claim that the shake-up will mean the end of memes, remixes and other user-generated content. Some have said that it will mean ‘censorship’ and even wildly predicted it will result in the ‘death of the internet’.

This is desperate and dishonest. Whilst some of the myths are repeated by people who remain blissfully untroubled by the technical but crucially important details of the proposed EU changes, in the worst cases this propaganda is being cynically pedalled by big tech like Google’s YouTube with a huge vested and multi-million-pound interest in this battle.

MEP Axel Voss, the EU Parliament Member who led the charge on the Copyright Directive, and who has long been seen as being in the pocket of the copyright interests, even put out a press release recently blaming the tech giants and their lobbying:

After the vote, rapporteur Axel Voss (EPP, DE) said, “I am very glad that despite the very strong lobbying campaign by the internet giants, there is now a majority in the full house backing the need to protect the principle of fair pay for European creatives.”

There's been a lot more like that. On Twitter, whenever I talk about Article 13, the same crew that has been falsely attacking my views for years immediately start attacking me, claiming that it's all Google lobbying against Article 13.

So... about that. The wonderful site Corporate Europe Observatory has a very thorough and very in-depth write-up about just who is lobbying on the EU Copyright Directive. And, quite incredibly, it's almost entirely dominated by all of those legacy copyright industries:

Since November 2014 there were 765 declared encounters between lobbyists and the Commission with “copyright” as a subject. Over 93% of these were with corporate interests, but the list of main actors might be quite surprising: the lobbyists with the highest access were in fact not big tech, but the collecting societies, creative industries (including big film and music studios) and press publishers.

The most frequently listed names are: IFPI - Representing recording industry worldwide (37 meetings) whose members include Sony Music and Warner Music, followed by the Federation of European Publishers (27) which represents national associations of book publishers, and GESAC - the European lobby for collecting societies (25), whose members include big EU collecting societies such as PRS for Music, the US giant recording label Universal Music Group International (22), and the Society of Audiovisual Authors (22), which represents national collecting societies.

Of the top 20 lobbyists by meetings, only two represented tech interests – Google, ranking number seven, and one of the trade associations it belongs to, DIGITALEUROPE, ranking 18th – while one sole NGO, the independent consumer organisation BEUC, ranked 12th.

The real story is even worse than that. Because I have abundant free time, I went through the file that Corporate Europe Observatory linked to detailing these lobbying meetings, and I went through all 205 entities and added up how many lobbying meetings were held by legacy entertainment interests, tech interests or public interests. Here's the breakdown:

While Corporate Europe Observatory lists 765 meetings, their spreadsheet actually shows 784. Also, there were a few organizations/lobbyists/lawyers where it was not exactly clear who they were lobbying for or on what side of the debate. To be as fair as possible, I simply included all of the ones I was unsure of in the "lobbying for tech" list. So, uh, for all the talk that Google was the one lobbying here, over 80% of the lobbying efforts came from the legacy copyright industries. That's pretty stunning. Of course, it's also disappointing to see that only 6% of the lobbying came from civil society groups. As CEO notes:

The voices of civil society organisations, small platforms, libraries, academics, citizens and even the UN Special Rapporteur on Freedom of Opinion and Expression were the collateral damage of the dispute between competing big business lobbies. Lobbyists and groups with a vested interest dominated the debate, while citizens’ opinions and interests were crowded out of the discussion.

Very depressing. For what it's worth, I included the "small platform" lobbying efforts in the "tech" list above (again, to be as careful as possible), but many of them feel even more strongly about this issue than Google does, and are more focused on the public interest arguments.
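The tallying exercise described above is simple enough to reproduce: classify each entity into a camp, then sum its meetings. A minimal sketch, assuming a spreadsheet exported to CSV with one row per meeting and an organisation column (the column name, the category mapping, and the sample rows here are hypothetical stand-ins, not the actual CEO data, which classified all 205 entities by hand):

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical entity-to-camp mapping; the real exercise covered 205 entities.
CAMP = {
    "IFPI": "legacy copyright",
    "GESAC": "legacy copyright",
    "Google": "tech",
    "BEUC": "civil society",
}

# Stand-in for the exported lobbying-meetings spreadsheet: one row per meeting.
data = StringIO("organisation\nIFPI\nIFPI\nGESAC\nGoogle\nBEUC\n")

counts = Counter()
for row in csv.DictReader(data):
    # Unclassified entities default to "tech", mirroring the
    # benefit-of-the-doubt rule described in the post.
    counts[CAMP.get(row["organisation"], "tech")] += 1

total = sum(counts.values())
for camp, n in counts.most_common():
    print(f"{camp}: {n} meetings ({100 * n / total:.0f}%)")
```

On this toy sample the output is "legacy copyright: 3 meetings (60%)" and so on; with the real spreadsheet the same loop produces the 80%-plus legacy-copyright share described above.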

Anyway, that chart is just for lobbying the EU Commission. CEO looks at some other available data for lobbying the EU Parliament as well (tragically, not that much info is public), but based on what is public it comes to the same conclusion:

Overall, the limited information which is available about lobby meetings shows the intense level of lobbying taking place on the Copyright Directive, but it also interestingly exposes that the biggest lobbies were not in fact big tech companies and their associates, as many headlines claimed, but the publishers, creative industries and collecting societies.

Incredibly, the narrative that all the lobbying is coming from Google has caused EU regulators to flat out discount complaints from people warning about problems with the proposal, while treating petitions supporting the Directive very differently. For example, the CEO report notes that organizations supporting both sides of the Copyright Directive set up "email your MEP campaigns," but the reaction to them was... let's just say, uneven.

Email your MEP campaigns are a common tool used by civil society organisations who mobilise their communities and supporters to exercise their democratic rights to engage with their elected representatives. Identity verification standards vary according to the tool used, but in this case, it seems that the campaign did not require email verification. That means that technically speaking, people could lie and change their identity in those emails.

De Cock explained that when they started the campaign they did not expect that so many people would participate, and so they themselves were surprised by the strength and impact of the campaign. But even then, can it really be labelled a denial of service attack? Such attacks are generally quite hostile, and aim to shut a website down completely. This is a remarkably different objective from that of an Email Your MEP campaign, which aims to ensure that MEPs are aware that there is popular support for certain causes and issues.

Astoundingly, ALDE MEP Jean Marie Cavada even said in an interview:

“After having analyzed the platform from which all emails come, I realized that it does not require a valid email address from the “users” to send emails. Thus, as sometimes we receive dozens of emails per minute, we can conceive that it is actually robots that send all these emails, which luckily makes this movement lose credibility.”

Many MEPs simply wrote back to the senders to confirm their identity. De Cock, for instance, was then forwarded several of these exchanges between MEPs and their constituents. It seems Mr Cavada did not write back to the people who had contacted him.

It is worth noting that PRS for Music also created a tool to email MEPs in advance of the vote, with the call to action: “PRS Member – Take 90 seconds to influence the vote”. This tool did not even include a return address, and yet there were no claims that these emails were sent by bots. It is interesting that C4C seem to have been criticised simply because of the volume of emails, which arguably simply indicates the level of concern from constituents on the issue. However, both the C4C and PRS can be criticised for the loose use of internet tools to email MEPs.

In other words, because so many more people used the tools to say that they were against Articles 11 and 13, at least some MEPs derided them as fake, while saying nothing about the many fewer people emailing in favor of those articles. Hmm.

Oh, and what about that claim of $36 million spent lobbying by Google, all against Article 13? Turns out that number is also bullshit (and Billboard should really issue a retraction).

That same week, immediately before the JURI committee vote, the UK Music Industry body published a press release stating that “figures show Google’s €31m EU lobbying bid” on copyright. UK Music simply took the entire lobby budget declared by Google in 2017, €6 million, and added to that the budgets of all the organisations and think tanks it is a member of, declaring that the “The combined value of Google’s indirect lobbying of the EU amounts to €25.5m”.

This is a highly problematic and flawed interpretation of the Transparency Register. Google’s entire self-declared lobby budget does make it one of the EU’s highest spending lobby groups. However, only a portion of the declared €6 million would likely be spent on copyright, especially as Google is also fighting several other significant lobby battles in the EU (for example on the anti-trust law cases being brought against Android, digital tax, terrorist content, fake news etc). According to available meeting data, it looks like most of Google's lobby meetings were in fact on issues other than the Copyright Directive, so it appears that this is not their priority at the moment.

The €31 million figure also assumes that all the associations and think tanks of which Google is a member focused their entire declared EU lobby spending on copyright in 2017. That would include, for example, BusinessEurope, the EU employers’ lobby, which are not necessarily active on the Copyright Directive, and if they are would only spend a marginal part of their budget on this issue. In most cases these groups (such as Friends of Europe, Konrad Adenauer-Stiftung, Bruegel etc) did little or nothing at all on the Copyright Directive, so these amounts should clearly not be included in the calculations.

Incredibly, CEO reports that much of the lobbying effort... was focused on blaming Google for too much lobbying. No, really.

In the copyright discussion, claims that GAFA, and particularly Google, were behind all opposition to Article 11 and 13 were again a strong message from publishers, and harder to counter. For instance, in the lobby newspaper produced by the news agencies, both the text from AFP’s editor Katz, and the ‘editorial’, mention what they call deceptive lobbying. Katz wrote that the “reform has been fiercely opposed by Facebook and Google, who have campaigned on a complete fabrication: a supposed threat to people’s free access to the internet”. He even declared that “I am convinced that the members of parliament who have been misled by deceptive lobbying now understand that non-paying access to the internet is not at risk.”

So, let's be clear: there are lots of corporate interests at play here, and tragically, the voice of the public is getting drowned out. But if anyone claims that any of this process has been driven by the big tech firms, they are flat out lying. Don't let them get away with it.

26 Comments | Leave a Comment..

Posted on Techdirt - 11 December 2018 @ 9:48am

While Everyone's Busy, Hollywood & Record Labels Suggest Congress Bring Back SOPA

from the guys,-give-it-a-fucking-rest dept

There are a million different things going on these days when it comes to preventing the powers that be from destroying the internet that we know and love. There are dozens of mostly bad ideas for regulating the internet here in the US, and of course, over in Europe, they're doing their best to destroy everything with the poorly thought out GDPR, the new Copyright Directive and the upcoming Terrorist Regulation (more on that soon). With all of that keeping everyone trying to protect the internet busy, it appears that the MPAA and the RIAA have decided that now would be a good time to re-introduce SOPA. No joke.

Every year, the US government's "IP Enforcement Coordinator" -- or IP Czar -- takes comments for its "Joint Strategic Plan for Intellectual Property," which is supposed to lay out the federal government's yearly plan for protecting Hollywood's profits. As questionable as that is already, this year, the comment submissions seemed to go a bit further than usual. The RIAA's submission, the MPAA's submission and the (almost so extreme as to be a parody) Copyright Alliance's submission all seemed to push a pretty consistent theme. Despite the incredible abundance of content creation happening these days, despite the myriad new ways to distribute, to build a fan base, to create new works and to make money from those works... these legacy gatekeepers all insist that the internet is truly a horrible attack on creativity and must be stopped.

And how to stop it? Well, how about widespread censorship in the form of outright site blocking. In short, these legacy gatekeepers want to bring back SOPA, the law that they tried to ram through seven years ago, only to be embarrassed when the internet stood up and said "no fucking way."

Let's start with the RIAA submission, which admits that, hey, the music business is pretty good these days, and almost all of that is because of innovations in technology that the RIAA fought at every freaking step (well, they don't admit that last part), but, my god, there are still some people out there who don't pay every single time they hear a song, and that must be stopped. And thus, they request changes to the law, including this:

With respect to website blocking, as one 2018 article states, “[s]tudies show that blocking regimes that target these large scale piracy sites (not sites that accidentally host pirated material) are an effective tool in reducing piracy and increasing the consumption of legal content and services.” Given the increasing ease for rogue infringing actors to access U.S. audiences while keeping all of their infrastructure off-shore, such as through the use of non-U.S. cctlds for their domain and bullet proof ISPs to host their services, there is a pressing need for additional tools to deter and stop this type of piracy harming U.S. consumers and businesses. As website blocking has had a positive impact in other countries without significant unintended consequences, the U.S. should reconsider adding this to its anti-piracy tool box.

They also suggest making the DMCA even worse, killing off its safe harbors and ramping up the penalties, but let's focus on the above paragraph. Because that is specifically a suggestion to bring back SOPA, whose main provisions were about site blocking. First, the idea that this is an "effective tool in reducing piracy" is laughable. Multiple studies (including our own) have shown no evidence that greater enforcement reduces piracy over the long term (there is a short-term impact, but it's fleeting). Instead, focusing on innovation and providing good services is what decreases piracy. Second, the idea that site blocking is "without significant unintended consequences" is fucking laughable.

Let's just remember that this is coming from the very same RIAA who supplied false claims of infringement concerning multiple websites, leading them to be seized and held for many years. In those cases, the RIAA told ICE that these blogs had been posting infringing music, and yet could never provide ICE with any evidence to support it. Of course, ICE still held onto those sites for nearly five years, just because. But the RIAA says there have been no unintended consequences?

Okay, how about the time that Australia, which has site blocking under law, tried to take down one site, but actually took down 250,000. Oops. Or when Homeland Security took down 84,000 sites. Do those "unintended consequences" not exist?

At this point, the RIAA cannot be seen as a credible voice on this particular issue.

How about the MPAA's filing? Well, it's more of the same. It notes, correctly, that we're in the "golden age" of movies and TV -- much of that being driven by technological developments that the MPAA (who, again, once called the VCR "the Boston Strangler") fought at every turn. Then it says something that is laughable: "Respect for Copyright Drives Innovation and Competition." Uh, what? That's... not even close to true, in part because almost no one -- least of all the MPAA -- actually "respects" copyright.

But then we get to the MPAA's suggestions. The MPAA is not as blatant as the RIAA in directly calling for bringing back site blocking, but it does request that DHS and the DOJ get more aggressive with criminal proceedings against foreign sites (a la Megaupload) to try to prevent piracy. Why Hollywood expects the federal government to use its law enforcement abilities to stop civil violations is left unsaid. And then, the MPAA demands that basically the entire internet ecosystem be shifted to stop any piracy from ever happening.

The IPEC could do much to promote greater collaboration aimed at reducing these harms by endorsing voluntary initiatives, as it has in the past. For example, more online intermediaries should adopt “trusted notifier” programs, under which they accept referrals from the content community about entities using the intermediaries’ services in the aid of piracy and, after doing their own due diligence, take remedial action. In particular:

Domain name registrars and registry operators should agree to keep WHOIS data public, to the extent permitted by law; to suspend the domain names of referred sites; to freeze the domain name so it becomes unavailable to others; and to disclose the true name and address of pirate site operators, prevent that operator from re-registering, and agree not to challenge third-party application of court orders regarding domain name suspension in cases by rightsholders against pirate sites.

Hosting providers should filter using automated content recognition technology; forward DMCA notices to users, terminate repeat infringers after receipt of a reasonable number of notices, and prevent re-registration by terminated users; implement download bandwidth or frequency limitations to prevent high volume traffic for particular files; agree not to challenge third party application of court orders regarding suspension of hosting services in cases by rightsholders against pirate sites; remove files expeditiously; and block referral traffic from known piracy sites.

Reverse proxy servers should disclose the true hosting location of pirate sites upon referral; terminate identified pirate sites, and prevent these sites from re-registering; and agree not to challenge third party application of court orders regarding suspension of reverse proxy services in cases by rightsholders against pirate sites.

ISPs should forward Digital Millennium Copyright Act notices to users; terminate repeat infringers after receipt of a reasonable number of notices and prevent re-registration by infringers; expeditiously comply with document subpoenas for user information; and block sites subject to court order in the applicable jurisdiction.

Social media should remove ads, links, and pages dedicated to the promotion of piracy devices and terminate repeat infringers.

Got that? Basically every other company in the world should be required to police the internet for the MPAA so that a few stray infringements don't get through. Hilariously, the MPAA admits that some may be worried about the impact of such demands on free speech, but then proceeds to brush away such concerns as if combating infringement caused by the MPAA's own unwillingness to adapt its business model... is the same as stopping cybersecurity attacks.

Some argue there is tension between curbing illegal activity online and free expression. The argument is made far too broadly. Combating unlawful conduct like identity theft, unauthorized distribution of entire copyrighted works, cyberattacks, and illicit sale of opioids is no more a threat to free expression on the internet than it is in the physical world. In fact, curbing illegal activity promotes free expression by creating a safer environment where individuals feel comfortable to communicate and engage in commerce, and to create and lawfully access content

It is truly awe-inspiring how the MPAA turns its own industry's failures to adapt to innovation into a public crisis in which everyone else must change... while simultaneously raving about how it's a "golden age" for its own industry.

And, finally, we come to the Copyright Alliance's submission. This one focuses on supporting copyright trolling via a small claims copyright board, which is of questionable constitutionality, and which would clearly enable a massive increase in copyright shakedowns. But, then, of course, among the other suggestions there are a bunch focused on having intermediaries take down sites, including full site blocking:

“Over the last decade, at least 42 countries have either adopted and implemented, or are legally obligated to adopt and implement, measures to ensure that ISPs take steps to disable access to copyright infringing websites, including throughout the European Union, the United Kingdom, Australia, and South Korea.” Research shows such measures can have significant effect on shifting users toward legitimate services, with one study finding that “blocking 52 sites in 2014 caused treated users to increase their usage of legal subscription sites by 10% and legal ad-supported streaming sites by 11.5%.” In addition to learning what remedies are effective, much can be learned from other countries in ensuring such remedies are proportionate and do not result in overblocking or other unwanted consequences.

Again, we have already shown how site blocking DOES lead to widespread overblocking and unwanted consequences. Furthermore, the evidence that site blocking increases the use of legitimate services is laughable, and not supported by the data at all.

In short, here are the major copyright industry representatives, knowing that everyone's busy off fighting other fires, making quiet inroads towards bringing back SOPA, despite the total clusterfuck it proved to be seven years ago. These guys will never stop in their quest to destroy the internet as we know it, and their push to turn the internet into a broadcast medium controlled by gatekeepers, rather than a communications medium for all of us.


Posted on Techdirt - 10 December 2018 @ 10:44am

Latest EU Copyright Proposal: Block Everything, Never Make Mistakes, But Don't Use Upload Filters

from the and-a-pony dept

As we've been discussing, the "Trilogue" negotiations between the EU Commission, EU Council and EU Parliament over the EU's Copyright Directive have continued, and a summary has been released on the latest plans for Article 13, which is the provision that will make upload filters mandatory, while (and this is the fun part) insisting that it doesn't make upload filters mandatory. Then, to make things even more fun, another document on the actual text suggests that the way to deal with this is to create a better euphemism for filters.

When we last checked in on this, we noted that the legacy film and television industry associations were freaking out that Article 13 might include some safe harbors for internet platforms, and were asking the negotiators to either drop those protections for platforms, or to leave them out of Article 13 altogether and only have it apply to music.

The latest brief description of the recommendations for Article 13 appears to be an attempt by bureaucrats who have no understanding of the nuances of this issue to appease both the legacy copyright industries and the tech companies. Notably absent: any concern for the public or independent creators. We'll dig in in a moment, but frankly, given the state of Article 13 demonstrated in this two-page document, it is horrific that these discussions are considered almost concluded. It is obvious that the vast majority of people working on this have no idea what they're talking about, and are pushing incredibly vague rules without any understanding of their impact. And rather than taking in the criticism and warnings from knowledgeable experts, they're just adding a duct-taped "but this won't do X" for every complaint where people warn what the actual impact of the rules will be for the internet.

That's why, throughout this document, they keep insisting that there will be no mandate for filters. But, there's no way you can actually avoid liability without filters. Indeed, in order to appease the film and TV folks, the proposal now includes a notice-and-staydown provision. We've spent years explaining why a notice-and-staydown provision is not only unworkable, but would lead to tremendous amounts of non-infringing content being removed. Copyright is extremely context specific. The exact same content may be infringing in one instance, but protected in another. Yet a notice-and-staydown does not allow the protected versions. It requires they be blocked. That is outright censorship.

On to the document. It starts with seven "guidelines."

The Commission was requested to follow these guidelines indicated by the Rapporteur:

  • Platforms should follow high standards of duty of care;
  • Cooperation should not be unidirectional;
  • Non infringing content should remain on the platform online;
  • Automatic blocking, albeit non forbidden, should be avoided as much as possible
  • Existing measures should not be excluded
  • Platforms should not always be released from liability by merely applying content identification measures
  • Rightholders should not be in a worse position than they are currently. In this context the audiovisual sector was singled out.

On this basis, the following ideas, which are based on a logical grouping of the above guidelines, are outlined for the consideration of the co-legislators:

This can be summed up as... all infringing content must disappear, but you don't have to use filters and you must make sure that non-infringing content remains online. This is the "nerd harder" approach to regulating. It is magic wand regulating: make the bad stuff go away, and magically don't have any collateral damage.

This is not sound policy making. This is technically illiterate bureaucrats trying to cover their asses. Because the liability requirements in the document will certainly lead to massive overblocking and widespread unintended consequences, these spineless technocrats are trying to avoid that by just tacking on "but don't let those consequences happen." They don't explain how this is possible. They just are going to put these rules out into the world, and tell the tech industry to wave a magic wand and make one type of content disappear without impacting other content (even if they're impossible to distinguish, and if the penalties for getting it wrong are dire).

From there, the document provides "details" never apparently recognizing just how contradictory the plans are:

High standard of duty of care and bilateral cooperation (1 + 2) Online content sharing service providers, as defined in the directive, are considered to communicate to the public and as such need to obtain licences from the relevant rightholders. Where no licences are granted, online content sharing service providers and rightholders should cooperate in good faith to prevent the availability of protected content online.

Cooperation should take place according to appropriate standards of professional diligence, which ought to take into account the size of the service, the number and type of works or other subject matter uploaded by users, the potential economic harm caused to rightholders, the availability of suitable and effective technologies and their cost for service providers. In practice, this means that the standards of cooperation should be particularly high for high value content. Cooperation should not lead to a general monitoring obligation as defined under the e-Commerce Directive.

Magic wand thinking: Either your entire platform needs to be licensed (i.e., no user-generated content) or you need to "prevent the availability" of any copyright-covered content with "good faith." But how? Well, the bureaucrats insist that this shouldn't require "general monitoring" (i.e., an upload filter). But... um... how do you prevent availability of copyright covered content if you're not monitoring? This is an impossible situation and either the bureaucrats know this and are just ignoring that they're demanding the impossible, or they don't understand this and shouldn't be allowed within 10 miles of any regulation over the internet.

Rightholders should provide content sharing service providers with specific information (e.g. metadata) allowing identification of their content.

The cooperation could include content identification measures (e.g. for high value content) but should not prevent other forms of cooperation if agreed by the parties (e.g. ex post content moderation for low value content, see also letter B).

When unauthorised content becomes available on their websites, content sharing service providers would in general not be liable if they have cooperated in good faith according to the relevant standards of professional diligence. However, within an adequate framework to ensure legal certainty, when despite such cooperation the availability of content online has caused significant economic harm to rightholders the Directive could consider the provider liable in any event, but at a reduced level taking into account the good faith of the provider. Alternatively, the Directive could allow rightholders to claim restitution of the benefits appropriated by the providers (e.g. using unjust enrichment claims under national law) (see point C below).

So, again, we see the general incomprehensibility of what is being pushed here. The first paragraph is an attempt to appease the platforms, basically saying "if copyright holders are going to demand takedowns, they should at least be required to supply the details of what content they actually hold a copyright over." That's reasonable given a plan to demand mandatory filters, because the only thing such metadata is actually useful for is... a filter.

The second paragraph is basically saying "okay, yes, we mean filters for loosely defined 'high value' content, but maybe loosely defined 'low value content' doesn't require filters." Again, this appears to be an attempt to split the baby. Who the hell is going to self-describe their own content as "low value content?" The whole concept of "high value" and "low value" is elitist claptrap from the legacy content industries who basically believe that anything that comes from the legacy recording, TV and film studios is "high value" and all that independent, amateur, and user-generated content is "low value." The paragraph here is supposed to be an attempt to say "well, okay, if your platform is just publishing garbage memes and stuff maybe it doesn't need a filter, but if you happen to include any of Hollywood's precious brilliance, you must put in place a filter."

The third paragraph is, yet again, an attempt to give special extra rights to the legacy recording, TV, and film companies. It basically says that if platforms try to "cooperate in good faith" (i.e., censor at the drop of a hat) then maybe they would be considered not liable... but only if it's that riff-raff low value content that slips through the filters (though we're not demanding filters!). If any content slips through the filters that "caused significant economic harm" (i.e., comes from the big copyright industries), well then, it doesn't fucking matter how much you tried to stop it, you're still liable.

In other words, if any internet platform makes a single mistake with Hollywood's content, no matter how hard they tried to stop it, too bad, you're liable.

And this is where there's such a massive disconnect between the framers (and supporters) of Article 13 and reality. When you're told that any mistake will lead to liability, you are put in a position of trying to prevent any mistakes. And the only ways to do that are to (1) stop accepting any user-uploaded content or (2) filter the hell out of all of it, and take down anything that even might possibly be considered infringing, meaning tons of perfectly legitimate content will get shut down.

No matter how many times these technocrats say "don't take down non-infringing works", it's totally meaningless if the only way to avoid liability is to take down tons of non-infringing works. Which brings us to the next part:

Non infringing content should remain online and automatic blocking to be avoided as much as possible (3+4)

Content that does not infringe copyright, for example because it is covered by exceptions, should stay on the services’ websites. In addition, the co-legislators could provide that minor uses of content by amateur uploaders should not be automatically blocked (in the context of the cooperation and professional diligence referred to under A) nor trigger the liability of the uploader. This should be without prejudice to the remedies under point C and the rules on liability of the providers and cooperation under A.

The need to allow legitimate content to remain available, should be strengthened through a robust redress mechanism which should ensure that users can contest measures taken against their legitimate uploads. The Commission already provided possible suggestions to the co-legislators which are currently under discussions in the trilogue process.

Again, this is setting up a laughable impossibility. First they say you're liable if you let anything through, and then they say "but don't accidentally take down stuff you shouldn't." How the hell do you do that? The rules don't say. Hollywood and Article 13's supporters don't care. It's great if they add a "redress mechanism" for bogus takedowns, but that only will apply to content that first gets up and then is taken down. It says nothing for content that is blocked from being uploaded in the first place due to overaggressive filters, which are only overaggressive due to the earlier parts of Article 13 that say you're liable if you let anything "high value" through.

This is the ultimate in cowardice from the EU regulators. Rather than address the actual problems that their own regulations will create, these regulators have decided to just append a bit to their regulation that says "and don't let this create the problems it will obviously create." That's fucking useless.

Rightholders should keep benefiting from existing measures; and platforms not released from liability by merely applying content identification technologies. Rightholders, notably audiovisual sector, not worse off (5+6+7)

Rightholders should in any event retain the ability to request removal of infringing content from the websites of the content sharing services. Building on and complementing the current ecommerce rules, rightholders should be allowed to request that unauthorised content is expeditiously removed and that best efforts are made to ensure that it stays down. As indicated in A, the co-legislators may provide for an additional safeguard for rightholders when despite the good faith cooperation the availability of content online causes significant economic harm to them.

There's something really big hidden in here. A "notice and stay down" requirement. That was not what was being pushed before. Notice and staydown creates all sorts of problems, in that by its very nature it obliterates the points in the previous paragraph. If you have a notice and staydown regime, you cannot allow content that is "covered by exceptions" because you've already designated all such content must stay down. And unless these bureaucrats in Brussels have magically invented a filter that can understand context and correctly judge whether or not something is covered by an exception (something that normally takes a years-long adversarial judicial process) it is difficult to see how this is possible.

Then we get to the other document, leaked earlier today by Politico, that attempts to wordsmith the actual language of Article 13. It's basically the same stuff we discussed above, but with an attempt to put it into actual legalese. Two things stand out in the document. First, they try to rebrand mandatory upload filters, now discussing "suitable and effective technologies" to "ensure the non-availability on the websites of the service providers of unauthorised works or other subject matter..." How is that not a filter?

This document also includes some language "as an option" that would require "best effort to prevent their future availability." That's putting the notice-and-staydown into the law. I will note that there is no real language being discussed that explains how to prevent the blocking of non-infringing works. Just more hand waving and magical thinking about how it shouldn't block non-infringing works... even though it absolutely will.

This leaves me with two key takeaways:

  1. The bureaucrats putting this together are doing the worst kind of regulating. They appear to be utterly ignorant of what it is that they are regulating, how it works, and the inevitable impact of their new rules. And, rather than trying to take the time to actually understand the concerns, they are simply writing "but don't do that" into the law every time someone explains the impact. But you can't regulate internet platforms not to overblock when everything else in your law requires them to overblock or face crippling liability. This is like a law that says "you must immediately dump out the bathwater without looking to see what's in the bath... but don't throw out the baby with the bathwater." How do you do that? The law doesn't say because the regulators don't have the slightest clue. And they don't have the slightest clue because it's impossible. And, they don't seem to care about that because once they pass the law they can celebrate and the mess they create is left for the internet platforms (and the public) to deal with.
  2. Given the massive changes and broad and unclear mandates being tossed around, Article 13 is nowhere near a state in which it should be put into a binding regulation. What's being debated now is so unclear, so vague and such a mess that it would be practically criminal to put such nonsense into law. They are rushing to get this done (perhaps before the next EU Parliamentary elections next spring), and the fact that they're about to make massive changes to a fundamental part of society (the internet) without clearly comprehending what they're doing is incredibly frightening. This is like a bad first draft of a bad proposal. This is not just "this is a bad bill that went through a comprehensive process and I disagree with it." This is an utter mess. It keeps shifting, it has vague and contradictory definitions, it tells companies to wave magic wands, and tells companies not to let the very thing the law compels actually happen. This is not regulating. This is why the public hates regulators.
I'm still hopeful that common sense eventually shows up in the EU, but at this point the only way for common sense to survive is to simply dump Article 13 entirely.


Posted on Techdirt - 7 December 2018 @ 7:39pm

It's Been 50 Years: Take Some Time This Weekend To Watch Doug Engelbart's Mother Of All Demos

from the history-in-the-making dept

Normally, on the weekend, we look back at what we wrote about on Techdirt five, ten and fifteen years ago, but I'm going to pre-empt at least a bit of that with this post. Ten years ago, we wrote about the 40th anniversary of the famous and iconic "Mother of All Demos" by Doug Engelbart on December 9th, 1968. A little over five years ago, we wrote about it again, unfortunately on the occasion of Engelbart's passing.

But, Sunday will now mark the 50th anniversary of the demo, and there's a very impressive looking Symposium about it happening at the Computer History Museum in Mountain View, California.

It's interesting, in Silicon Valley, how much disdain some have for the past. After all, it's here that we're always talking about inventing the future. Engelbart's demo, 50 years ago, was exactly that. Before even the idea of a graphical user interface for a computer, or the concept of a wider internet, was conceived of, Engelbart was literally demoing a ton of ideas, products, concepts and services that we all use regularly today. Even the demo itself (let alone what he was demoing) was somewhat historic, as the demo showed what was happening on his computer on-screen, but part of it was done via teleconferencing and video sharing (again, before most people even had the foggiest idea what that could mean). It demonstrated, for the first time, ideas like the computer mouse, a word processor, windows, a graphical user interface, computer graphics, hypertext linking, collaborative editing, version control, dynamic linking and more.

I watch the entire 90 minutes every few years, and it's amazing how inspiring it is. How miraculous it is. Every time we link to it, it ends up moving around or appearing in different chunks online, but the Doug Engelbart Institute now has it in three separate parts (each about 30 minutes) on YouTube, so I'll post that version here:

Or, if you really don't want to watch the entire thing, there's a nicely done "interactive version" that breaks it down into sections and sub-sections, so you can just watch the clips that are of most interest to you (though, I still recommend watching the entire thing for context).

Part of what's so inspiring about the demo, of course, is that we're watching it in retrospect. We now know what transpired over the next 50 years. If none of what Engelbart had presented became common, the demo would probably just be seen as quirky nonsense, a la predictions of flying cars and moon bases. But, that's not what happened at all. Instead, we know that watching Engelbart's demo is watching real history in action.

It's watching the impossible, the magical, become reality. It's the very thing that has made Silicon Valley so much fun for the past 50 years. Making the impossible not just possible, but everyday. Enabling people to do amazing things.

Of course, we're living now in an age where the narrative on technology has shifted. People are recognizing that innovation and advancement isn't always all good for everyone. People are recognizing that it has consequences and creates problems -- sometimes serious ones. And those conversations are vital.

But as that narrative has shifted, I worry tremendously about throwing out all of the good things that have come with innovation in our rush to prevent any possible downsides. I'm glad that there's some level of reckoning happening, and people are proactively trying to think through the impact (both good and bad) of what they're creating these days. But, I worry that the narrative has shifted so far that in order to prevent "bad" we're going to end up tossing out much of the good that is set to come as well.

I'm not quite 50 years old yet, but the amount of technological change and innovation in my lifetime has been amazing -- and I'd argue that the vast majority of it has been good. It has opened up new worlds. It has enabled new ways to communicate. It has brought knowledge and information to far flung corners of the globe. It has enabled people all over the world to have an impact. And it continues to change as well.

Watching the Mother of All Demos once again lets us wonder about what will happen in the next 50 years. And it gives us a chance to appreciate all that has happened (and has been allowed to happen) over the past 50 years. Engelbart didn't lock up his ideas. He didn't block others from using them. There aren't stories of nasty patent fights (even if he had a bunch of patents). He shared these ideas for the world to see, and the world took these ideas and ran with them, built on them, improved on them and created the amazing world we now live in. This should not be the end of the history of innovation, but a sign of what happens when people do allow for great innovation, and seek to make the impossible, possible.


Posted on Techdirt - 7 December 2018 @ 12:05pm

After Getting FOSTA Turned Into Law, Facebook Tells Its Users To Stop Using Naughty Words

from the the-morality-police dept

Well, well. As we've covered for a while now, FOSTA became law almost entirely because Facebook did an about-face on its position on the law -- which only recently was revealed to have happened because COO Sheryl Sandberg decided it was important to appease Congress on something, even against the arguments of Facebook's own policy team. As we pointed out at the time, this was Facebook basically selling out the internet, and we wondered whether Facebook would then help clean up the collateral damage it caused.

The early indications are that, not only will it not help clean up the mess it caused, it's leaning in on this new puritanical internet that it wants to create. We've already noted that Facebook has been sued under FOSTA by someone arguing that it has helped facilitate sex trafficking. And now, just days after Tumblr's weird pivot away from sex, Facebook has put up a bunch of new guidelines in its "community standards" document, under the heading of "sexual solicitation," banning a wide variety of things, from naughty words to expressing a sexual preference.

Among the banned:

  • Vague suggestive statements, such as “looking for a good time tonight”
  • Sexualized slang
  • Using sexual hints such as mentioning sexual roles, sex positions, fetish scenarios, sexual preference/sexual partner preference, state of arousal, act of sexual intercourse or activity (sexual penetration or self-pleasuring), commonly sexualized areas of the body such as the breasts, groin, or buttocks, state of hygiene of genitalia or buttocks
  • Content (hand drawn, digital, or real-world art) that may depict explicit sexual activity or suggestively posed person(s).
Got that? Expressing a sexual partner preference may now be deemed "sexual solicitation" and thus not allowed on Facebook, as, you know, it might violate FOSTA -- a law that Facebook actively fought for under Sheryl Sandberg's direction.

Obviously, this is Facebook's platform and it can make whatever stupid rules it wants, but it's not difficult to see how this is likely to impact all kinds of perfectly acceptable content on its site. It also seems quite hypocritical, given that the early versions of Facebook were... very much about helping college students hook up with one another.

We warned that FOSTA would lead to widespread censorship online, and that seems to be exactly what's happening. And this should be especially troubling for sex-positive people, or people who have, historically, used the internet and Facebook to discover like-minded groups, or to better understand themselves and their own preferences. We did warn, very early on, that one of the groups most vocal in lobbying for FOSTA was going off script and admitting -- contrary to the public arguments made by politicians supporting the bill, that it was about stopping child sex trafficking -- that the bill was really designed to end online pornography. Moves like this one are real steps towards that goal.

Of course, as we noted earlier this week, the bill also failed to decrease sex trafficking or even sexual ads online. It's just made it harder for police to actually track down and find traffickers.

Now, some might argue that this is fine. That "mainstream properties" like Facebook and Tumblr should get rid of all this stuff, and let it live in the dark corners of the internet. But, considering how broad these rules are, and the kind of content we're already seeing banned from Tumblr, the "mainstream" internet is losing what has always made the internet special -- that you could explore all kinds of topics, meet all kinds of people and learn about all different ideas. In the two and a half decades that the internet has been "mainstream," there has always been an effort by some to falsely describe it as a "wild west" that needed "taming." This has always been ridiculous. What they wanted was an internet controlled by gatekeepers -- turning a communications medium, where you could find anything, into a broadcast medium, where anything can be sold.

Unfortunately, it looks like those forces are finally winning.

81 Comments | Leave a Comment..

Posted on Techdirt - 7 December 2018 @ 9:48am

When A 'Trade War' Involves Seizing And Imprisoning Foreign Execs, It's No Longer Just About Trade

from the it-kinda-leads-to-actual-wars dept

For years we've been writing about the weird US government infatuation with the Chinese telco equipment firm Huawei. The company has built a wildly successful business, but going back many years there's been a loud whisper campaign that the company's equipment would send information back to the Chinese government. Of course, when our own government investigated this, it could find no evidence at all that this was true. It also seems notable that Huawei itself asked for this investigation, claiming that it would clear the company's name, since it wasn't doing anything that people were accusing it of doing. This doesn't mean that the company isn't doing something nefarious, but such claims should have some sort of evidence to back them up, and so far they've been lacking.

Of course, this may have been one of those situations where people assumed that whatever we would do to others, others must be doing to us, because what we do know, is that the NSA broke into Huawei's computers and grabbed a bunch of emails and source code. That bit seems to get left out of all the fear mongering reporting about Huawei. Oh, and it later came out that much of the whisper campaign about Huawei spying for the Chinese government... originated from the US firm Cisco, which was seeing its market share eroded by Huawei.

So we've long taken the claims about Huawei with a large grain of salt, even as most in the media have been willing to repeat the allegations about Huawei without mentioning the lack of evidence, Cisco's involvement, or the fact that the US government swiped a bunch of stuff from Huawei, even though all of those things seem kinda relevant.

By now, of course, you've probably heard that Canadian officials arrested Huawei's CFO, Meng Wanzhou, who also happens to be the daughter of the founder, and there are plans to try to extradite her to the US. While no charges have been revealed, most people claim it has to do with violating US sanctions on Iran by shipping US made equipment to Iran. The details here will matter, but it's still incredibly unusual to have a friendly country arrest a top exec and then try to extradite them.

Even if the official charges have nothing to do with the ongoing trade war with China, as nearly everyone is pointing out, there's no way this doesn't create massive blowback on any new trade agreement. Remember it was just a few days ago (was it really just a few days?) that President Trump announced that he'd agreed to end the senseless trade war he'd started (which has created a massive import tax on American businesses and consumers). Of course, when the Chinese gave their version of the story, it sounded remarkably different than Trump's version.

But, at least it sounded like progress was being made, and maybe we could end the insanity. But, of course, by having an ally arrest a top exec, it's thrown everything up in the air. Imagine, for example, if Sheryl Sandberg was on a trip to Pakistan, and was arrested by authorities there and extradited to China to face criminal charges. That's kind of the equivalent of what the US has just done via Canada.

Then, take it a step further. White House officials have told the press that they believe Meng "could be used as leverage with China in trade talks," and you realize this has fuck all to do with Iranian sanctions. No, that's the White House more or less admitting that they've taken a hostage in a trade war. That's hellishly dangerous. Because China will not hesitate to retaliate. If I were an American business exec, I'd stay far away from China or any of its allies right about now.

Arresting an executive over such a thing, and then admitting you want to use her as "leverage," just as you're negotiating a complex trade deal is... the kind of thing that turns a trade war into an actual war. It's an incredibly dangerous move that should concern everyone.

44 Comments | Leave a Comment..

Posted on Techdirt - 6 December 2018 @ 12:04pm

What Do Pot And Software Have In Common? Stupid Patent Thickets Based On A Lack Of Patented Prior Art

from the stupid-patent-rules dept

Recently Reuters had a fascinating article all about the new patent thicket in pot that is appearing, thanks to legalization efforts in the US and around the globe.

With marijuana now fully legal in Canada and at least partially legalized in the majority of U.S. states, companies are rushing to patent new formulations of the age-old botanical. This year, the U.S. Patent and Trademark Office has issued 39 patents containing the words cannabis or marijuana in their summaries, up from 29 in 2017 and 14 in 2016.

And, of course, with patents come the inevitable lawsuits:

The first U.S. case is now winding its way through the courts. In a July lawsuit, Colorado-based United Cannabis Corp accused Pure Hemp Collective Inc of infringing its patent covering a liquid formulation with a high concentration of CBD, a non-psychoactive cannabis ingredient touted for its health benefits.

One of the key issues in this case and others, experts say, is whether the patent is overly broad or obvious in light of “prior art,” the existing level of science or technology against which an invention’s novelty can be judged.

Basically, there hasn't been that much official prior art because pot was considered illegal for so many years, and no one was rushing to patent anything. And, of course, patent examiners are somewhat limited in what they're set up to research regarding prior art, and they often rely on earlier patents and scientific articles as the basis for prior art searches. And, with pot, there aren't so many of those.

Of course, this is actually quite reminiscent of the mess that came with software patents. For a long time, most people didn't consider most software to be patentable (this is not entirely accurate, as there are software patents going back many decades, but many people considered it limited to a few special cases of software). However, in 1998, we got the State St. Bank case, in which the Court of Appeals for the Federal Circuit basically threw open the doors on patenting almost any software. And those doors remained completely wide open until the Alice v. CLS Bank decision in 2014 (which hasn't totally cleaned up the mess of the State Street ruling, but has certainly helped dial back the insanity).

But, for nearly two decades after the State Street ruling, the US Patent Office was patenting software willy nilly -- often despite much of it having tons of prior art or being completely obvious. A big part of the problem was that examiners, again, focused on mainly looking at earlier patents and scientific journals for evidence of prior art. But because so many people didn't think that most software was patentable, there were very few patents to look at, and it's pretty rare for anyone to write up the details of software in scientific journals (they just make the damn software).

That resulted in tons of broad software patents that covered things that had been done for decades or that were entirely obvious. And thus, we had huge patent thickets and massive patent fights that cost billions of dollars, caused innovative companies to go out of business, and generally were a massive tax on innovation, where almost all of the proceeds went into a few patent lawyers' pockets. To this day it is a huge black mark on how the patent system works, and how it actually did significantly more to harm innovation than to help it.

I'm reminded of this mess in reading about the situation with patents around pot. While the situations are not entirely the same -- the reasons for a lack of earlier patents are quite different -- the overall impact is similar. The lack of earlier patents is creating an open field where things that have been done for years, or that are considered obvious, are still getting through the patent office with a stamp of approval. And it's only going to create a pretty big mess with lawsuits. You would have hoped that the USPTO would have caught on by now, but apparently not.

39 Comments | Leave a Comment..

Posted on Techdirt - 5 December 2018 @ 1:27pm

Good For The World, But Not Good For Us: The Really Damning Bits Of The Facebook Revelations

from the anti-competitive,-anti-consumer-issues dept

As expected, UK Parliament Member Damian Collins released a bunch of documents that he had previously seized under questionable circumstances. While he had revealed some details in a blatantly misleading way during the public hearing he held, he's now released a bunch more. Collins tees up the 250 page release with a few of his own notes, which also tend to exaggerate and misrepresent what's in the docs, and many people are running with a few of those misrepresentations.

However, that doesn't mean that all of these documents have been misrepresented. Indeed, there are multiple things in here that look pretty bad for Facebook, and could be very damaging for it on questions around the privacy protections it had promised the FTC it would put in place, as well as in any potential antitrust fight. It's not that difficult to understand how Facebook arrived at the various decisions it made, but the "move fast and break things" attitude also seems to involve the potential of breaking both the law and the company's own promises to its users. And that's bad.

First, the things that really aren't that big a deal: a lot of the reporting has focused on the idea that Facebook would give greater access to data to partners who signed up to give Facebook money via its advertising or other platforms. There doesn't seem to be much of a bombshell there. Lots of companies that have APIs charge for access. This is kind of a standard business model question, and some of the emails in the data dump show what actually appears to be a pretty thoughtful discussion of various business model options and their tradeoffs. This was a company that recognized it had valuable information and was trying to figure out the best way to monetize it. There isn't much of a scandal there, though some people seem to think there is. Perhaps you could argue that trading access to that data for money shows Facebook had a cavalier attitude towards it, but there's no evidence presented that this data was used in an abusive way (indeed, by putting a "price" on the access, Facebook likely limited the access to companies who had every reason not to abuse the data).

Similarly, there is a lot of discussion about the API change, which Facebook implemented to actually start to limit how much data app developers had access to. And the documentation here shows that part of the motivation was to (rightfully) improve user trust in Facebook. It's difficult to see how that's a scandal. In addition, some of the discussions involve moving specific whitelisted partners to a special version of the API that gives them access to more data... but in a way that the data is hashed, providing better privacy and security for that data while still making it useful. Again, this approach seems to actually be beneficial to end users, rather than harmful, so the attempts to attack it seem misplaced -- and yet they take up the vast majority of the 250 pages.
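Sharing hashed identifiers rather than raw ones is a common industry pattern for this kind of whitelisted matching. As a minimal sketch of the general technique -- my illustration, not Facebook's actual scheme, with hypothetical names throughout -- each side normalizes and hashes its records with SHA-256, so records can be matched across parties without either side exposing raw email addresses:

```python
import hashlib

def hash_identifier(email: str) -> str:
    """Normalize an email and return its SHA-256 hex digest, so two
    parties can match records without sharing raw addresses."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Each side hashes its own list, then only digests are compared.
platform_users = {hash_identifier(e) for e in
                  ["Alice@example.com", "bob@example.com"]}
partner_list = [hash_identifier(e) for e in
                ["alice@example.com ", "carol@example.com"]]
matches = [h for h in partner_list if h in platform_users]  # one match
```

Note that normalization (trimming whitespace, lowercasing) has to happen before hashing, or the same address hashed by two parties would produce different digests and never match.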

The bigger issues involve specific actions that certainly appear to at least raise antitrust questions. That includes cutting off apps that recreate Facebook's own features, or that are suddenly getting a lot of traction (and using the access they had to users' phones to figure out which apps were getting lots of traction). While not definitively violating antitrust laws, that's certainly the kind of evidence that any antitrust investigator would likely explore -- looking to see if Facebook held a dominant position at the time of those actions, and if those actions were designed to deliberately harm competitors, rather than for any useful purpose for end-users. At least from the partial details released in the documents, the focus on competitors does seem to be a driving force. That could create a pretty big antitrust headache for Facebook.

Of course, the details on this... are still a bit vague from the released documents. There are a number of charts from Onavo included, showing the popularity of various apps, such as this:

Onavo was a data analytics company that Facebook bought in 2013 for over $100 million. Last year, the Wall Street Journal broke the story that Facebook was using Onavo to understand how well competing apps were doing, and potentially using that data to target acquisitions... or potentially to try to diminish those competing apps' access. The potential "smoking gun" evidence is buried in these files, but there's a short email on the day that Twitter launched Vine, its app for 6-second videos, where Facebook decides to cut off Twitter's access to its friend API in response to this move, and Zuckerberg himself says "Yup, go for it."

Now... it's entirely possible that there's more to this than is shown in the documents. But at least on its face, it seems like the kind of thing that deserves more scrutiny. If Facebook truly shut down access to the API because it feared competition from Vine... that is certainly the kind of thing that will raise eyebrows from antitrust folks. If there were more reasons for cutting off Vine, that should come out. But if the only reason was "ooh, that's a potential competitor to our own service," and if Facebook was seen as the dominant way of distribution or access at the time, it could be a real issue.

Separately, if the name Onavo sounds familiar to you, that might be because earlier this year, Facebook launched what it called a VPN under the brand name Onavo... and there was reasonable anger over it because people realized (as per the above discussion) that Onavo was really a form of analytics spyware that charted what applications you were using and for what. It was so bad that Apple pulled it from its App Store.

The other big thing that comes out in the released documents is all the way at the end, when Facebook is getting ready to roll out a Facebook app update on Android that will snoop on your SMS and call logs and use that information for trying to get you to add more friends and for determining what kinds of content it promotes to you. Facebook clearly recognized that this could be a PR nightmare if it got out, and they were worried that Android would seek permission from users, which would alert them to this kind of snooping:

That is bad. That's Facebook knowing that its latest snooping move will look bad and trying to figure out a way to sneak it through. Later on, the team is relieved when they realize, after testing, that they can roll this out without alerting users with a permission dialog screen:

As reporter Kashmir Hill points out, it's notable that this "phew, we don't really have to alert users to our sketchy plan to get access to their logs" came from Yul Kwon, who was designated as Facebook's "privacy sherpa" and put in charge of making sure that Facebook didn't do anything creepy with user data. From an article that Hill wrote back in 2015:

The face of the new, privacy-conscious Facebook is Yul Kwon, a Yale Law grad who heads the team responsible for ensuring that every new product, feature, proposed study and code change gets scrutinized for privacy problems. His job is to try to make sure that Facebook’s 9,199 employees and the people they partner with don’t set off any privacy dynamite. Facebook employees refer to his group as the XFN team, which stands for “cross-functional,” because its job is to ensure that anyone at Facebook who might spot a problem with a new app — from the PR team to the lawyers to the security guys — has a chance to raise their concerns before that app gets on your phone. “We refer to ourselves as the privacy sherpas,” says Kwon. Instead of helping Facebook employees scale Everest safely, Kwon’s team tries to guide them safely past the potential peril of pissing off users.

And yet, here, he seems to be guiding them past those perils by helping the team hide what's really going on.

This is also doubly notable for Kashmir Hill who has been perhaps the most dogged reporter on the creepy levels to which Facebook's "People You May Know" feature works. Facebook has a history of giving Hill totally conflicting information about how that feature worked, and these documents reveal, at least, the desire to secretly slurp up your call and SMS records in order to find more "people you might know" (shown as PYMK in the documents).

One final note on all of this. I recently pointed out that Silicon Valley really should stop treating fundamental structural issues as political issues, in which they just focus on what's best for the short-term bottom line, and should focus on the larger goals of doing what's right overall. In a long email included in the documents from Mark Zuckerberg, musing thoughtfully on various business model ideas for the platform, one line stands out. Honestly, the entire email (starting on page 49 of the document) is worth reading, because it really does carefully weigh the various options in front of them. But there's also this line:

If you can't read that, it's a discussion of how it's important to enable people to share what they want, and how enabling other apps to help users do that is a good thing, but then he says:

The answer I came to is that we’re trying to enable people to share everything they want, and to do it on Facebook. Sometimes the best way to enable people to share something is to have a developer build a special purpose app or network for that type of content and to make that app social by having Facebook plug into it. However, that may be good for the world but it’s not good for us unless people also share back to Facebook and that content increases the value of our network. So ultimately, I think the purpose of platform – even the read side – is to increase sharing back into Facebook.

I should note that in Damian Collins' summary of this, he carefully cuts out some of the text of that email to frame it in a manner that makes it look worse, but the "that may be good for the world, but it's not good for us" line really stands out to me. That's exactly the kind of political decision I was talking about in that earlier post. Taking the short term view of "do what's good for us, rather than what's good for the world" may be typical, and even understandable, in business, but it's the root of many, many long term and structural problems for not just Facebook, but tons of other companies as well.

I wish that we could move to a world where companies finally understood that "doing good for the world" leads to a situation in which the long term result is also "good for us," rather than focusing on the "good for us" at the expense of "good for the world."

Read More | 16 Comments | Leave a Comment..

Posted on Techdirt - 5 December 2018 @ 10:44am

Rudy Giuliani's Paranoid Nonsense Tweet Is A Good Reminder That We Need Actual Cybersecurity Experts In Government

from the what-the-actual-fuck dept

Rudy Giuliani may have built up a reputation for himself as "America's Mayor" but the latest chapters in his life seem to be a mad dash to undo whatever shred of goodwill or credibility he might have left. Politics watchers will know that he's been acting as the President's lawyer, in which (as far as I can tell) his main job is to go on TV news programs and reveal stuff no lawyer should reveal. But, we shouldn't forget Giuliani's previous jobs. His earlier firm, Giuliani Partners, had a subsidiary called Giuliani Security that at least at one time claimed to do "cybersecurity." Of course, when the press explored what that actually meant, it was fairly limited.

"If you hired them on a cyber engagement, they are going to tell you what your legal obligations are and how to manage the legal risk related to cyber," a cybersecurity executive in New York who has experience with Giuliani Security and Safety and requested to remain anonymous told Motherboard. "Basically, not to prevent a Target [breach], but how to prevent a Target CEO being fired."

Still, a lot of heads spun around when Giuliani himself was named as Trump's cybersecurity advisor, because, as basically everyone recognized, he does not appear to know anything about cybersecurity.

Yesterday, Giuliani made clear just how incredibly ignorant he is of the basic functioning of the internet. As I type this these tweets are still up, but I'll post a screenshot on the assumption that someday, someone with actual knowledge will get to Giuliani and convince him to take these tweets down:

There's a lot going on here, so if you haven't been following all of this, it may take a bit to unpack. The first tweet references Mueller's recent filings against Paul Manafort, Trump's former campaign boss, for lying (again) to the Special Counsel's Office. Giuliani is making a weird unfounded claim that Mueller is specifically timing his indictments to times when the President is about to leave town for international gatherings. Considering the number of indictments that Mueller drops -- most of which don't happen when Trump is about to travel to meet foreign world leaders -- this already feels like ridiculous conspiracy mongering.

Within that tweet, Giuliani appears to make a few typos -- specifically forgetting to put a space after the period at the end of a couple of sentences. The first time this happened, the sentence ended with "G-20." The next sentence begins "In". However, because (1) the missing space mushes these together as "G-20.In" and (2) ".in" is the top-level domain for India, Twitter interpreted that as a link to the website g-20.in. Some bright, enterprising person then registered such a website and posted an anti-Trump message to it, specifically this:

Whoever set up that site has since added a news update concerning Mueller's recent sentencing recommendations for Trump's former National Security Advisor Michael Flynn, who was among the first brought down by Mueller.

Lots of people were mocking supposed "cybersecurity expert" Giuliani for accidentally posting such a link and opening himself up to such a thing. But last night Giuliani decided to take the nonsense to extreme levels, accusing "cardcarrying anti-Trumpers" at Twitter of allowing "someone to invade" his tweet to insert that link. His "evidence" for this was the fact that the second time in that same tweet he made the same "no space after a period" typo -- creating "Helsinki.Either" -- it did not turn into a link. And... as basically anyone with even the most rudimentary understanding of the internet knows (clearly not including cybersecurity expert Rudy Giuliani), the reason there is no link there is that ".either" is not (yet) a top-level domain, and thus Twitter's systems don't see it as a link and don't automatically link it.
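This is exactly how autolinkers generally work: they scan for domain-shaped tokens and only linkify ones whose suffix appears on the list of registered top-level domains. A minimal sketch of that logic -- a toy illustration with a tiny hand-picked TLD list, not Twitter's actual implementation (which checks the full IANA registry) -- shows why "G-20.In" becomes a link while "Helsinki.Either" does not:

```python
import re

# Tiny subset of registered TLDs for illustration; a real
# linkifier would consult the full IANA top-level domain list.
KNOWN_TLDS = {"com", "org", "net", "in", "io"}

def find_links(text: str) -> list[str]:
    """Return domain-shaped tokens whose suffix is a registered TLD."""
    links = []
    for match in re.finditer(r"\b([A-Za-z0-9-]+)\.([A-Za-z]+)\b", text):
        # "G-20.In" -> suffix "in" is a real TLD, so it linkifies;
        # "Helsinki.Either" -> "either" is not a TLD, so it doesn't.
        if match.group(2).lower() in KNOWN_TLDS:
            links.append(match.group(0))
    return links
```

Run on text mimicking the tweet, `find_links("before the G-20.In July, Helsinki.Either way")` returns only `["G-20.In"]` -- no "artificial injection" required, just a missing space and a TLD lookup.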

The rest of the internet has been having lots of fun with this, mocking Giuliani, and I'm amazed that the tweet has stayed up for as long as it has. Twitter was even forced to issue a statement denying any foul play:

A spokesperson told Fortune that the company’s “service worked as designed.” The spokesperson added that whenever someone tweets a Web address, a clickable link is automatically created.

“Any suggestion that we artificially injected something into the user’s account is false,” the spokesperson said.

And while it may be fun to mock such utter incompetence put on display for the world, this really does highlight a serious problem. The lack of people in government knowledgeable about real online security issues -- especially when computer security issues are so vital to almost everything these days -- is a real problem. We can laugh about "cybersecurity advisor" and "expert" Rudy Giuliani not understanding how top-level domains and links work, but then we should be terrified to think... who the hell is actually advising the administration on very serious issues regarding internet security, at a time when tons of entities, from lowly criminals to aggressive nation-states, are using the network to mount various attacks?

And, yes, there are actually a number of other people in the government who do truly understand this stuff. But over and over again it appears that the people appointed to the highest levels concerning these things have no clue. And that's a big deal, because computer security issues aren't something you just pick up with a crash course. They're complex and challenging and require a pretty deep level of knowledge to actually understand both the threats and the possible remedies. And, when the administration's top cybersecurity adviser freaks out because he doesn't know what a top level domain is... that should worry us all.

62 Comments | Leave a Comment..

Posted on Techdirt - 5 December 2018 @ 9:23am

Tumblr's New 'No Sex' Rules Show The Problems Of FOSTA And EU Copyright Directive In One Easy Move

from the intermediary-liability-protections-matter dept

As you may have heard by now, on Monday, Tumblr announced that in just a couple weeks it will be banning porn from its platform as part of a change to its rules. Now, of course, Tumblr has every right to run its platform however it sees fit, but it does seem notable that it wasn't all that long ago that Tumblr openly defended the fact that Tumblr hosts a bunch of "Not Safe For Work" content, explaining that they supported free speech, and didn't want to be in the business of carefully determining whether or not something was "artful" photography or just porn.

Of course, that was before Verizon bought Yahoo (which had previously bought Tumblr). And it was before FOSTA became law. As Wired points out, this move to ban all porn comes just weeks after Apple banned the Tumblr app from the App Store over some illegal images (even after Tumblr was alerted and took those images down). It's not hard to see how some execs at Verizon might have looked at all of this as a headache that just isn't worth it -- especially given the potential criminal liability that comes from FOSTA. Remember, a few months back, we noted that a bunch of online trolls were deliberately targeting women they didn't like on various platforms claiming (often without evidence) that they were engaged in prostitution. Many of those targeted used Tumblr. It's not difficult to see how Verizon just decided to rid itself of this whole headache.

But, beyond demonstrating the censorship problems of FOSTA, this move by Tumblr is also doing a bang-up job demonstrating why mandatory filters, such as those pushed for in Article 13 of the EU Copyright Directive, will be so harmful. Filters are notoriously terrible at accurately taking down only the content they're supposed to take down. Amusingly, one of the key talking points of Article 13 filter defenders is that "well, these platforms do a great job stopping porn, so clearly they can stop infringement." This is wrong on multiple levels, starting with the fact that the determination of what is "infringing" is entirely different from what is "porn." But, more to the point: the porn filters don't work very well at all.

Buzzfeed has a hilarious list of Tumblr posts that have been flagged as being adult content, that clearly... are not. Here are just a few:

Some are even claiming that reblogging Tumblr's own announcement resulted in flags:

But for examples of flags that are perhaps even more relevant for those of us here on Techdirt, law professor Sarah Burstein, who runs a Tumblr and Twitter feed all about design patents (often highlighting how ridiculous those design patents are) found that a bunch of her design patent images resulted in flags as inappropriate content. I am not kidding:

Believe it or not, those are just a few examples from a much longer list of flagged posts about design patents for apparently violating Tumblr's "porn" filter.

While this is a complete travesty on a variety of levels, it demonstrates the utter futility of believing that filters will work and won't make a huge number of mistakes, pulling down perfectly reasonable content. Those working on laws (especially over in the EU) such as the EU Copyright Directive's Article 13 would do well to actually heed this message.

47 Comments | Leave a Comment..

Posted on Techdirt - 4 December 2018 @ 10:44am

Latest On EU Copyright Directive: No One's Happy With Article 13, So Maybe Let's Drop It?

from the don't-wreck-the-net dept

Over the last few weeks, the so-called trilogue negotiations between the EU Council, the EU Commission and the EU Parliament on the EU Copyright Directive have continued, and they appear to have created quite a mess. As you'll recall, because the Council, the Commission, and the Parliament all passed somewhat different versions of the Directive, they now have to go through this process to come up with a version that they all agree on -- and based on some of the proposals and discussions that have come out, it's been a total mess. And specifically on Article 13 -- the provision that will mandate upload filters -- the situation is even worse.

Seriously, it's so bad that basically no one wants it any more. And, yes, that includes some of the copyright extremists from the legacy copyright industries. Over the weekend, a group of entertainment organizations -- including the MPA (the MPAA's international branch), the Independent Film & Television Alliance (IFTA), and the notoriously aggressive copyright litigant the Premier League -- sent a letter complaining about Article 13 and the direction it's gone in. Hilariously, they're not complaining that it's over-aggressive -- rather, they're whining that Article 13 might actually have been made fairer as the negotiations have gone on. Specifically, they're upset that there are now safe harbors proposed to help platforms avoid liability. These entertainment groups apparently think safe harbors are some sort of damn loophole:

Recall that the initial goal of Article 13 was to codify the existing case-law in a way that would enable right holders to better control the exploitation of their content vis a vis certain OCSSPs which currently wrongfully claim they benefit from the liability privilege of Article 14 E-Commerce Directive.

However, unfortunately, the Value Gap provision has mutated in such a way that it now strengthens even further the role of OCSSPs to the direct detriment of right holders and completely undermines the status quo in terms of the EU liability regime. Some of the options proposed for discussion at trilogue level indeed wrongfully undermine current law and weaken right holders’ exclusive rights by, among others: creating a new liability privilege for certain platforms that have taken specific steps to avoid the availability of infringing copyright content on their services (but have failed to do so effectively), and conditioning protection of copyright online on right holders bearing the full burden of identifying and notifying copyright infringing content to platforms. These would constitute gifts to already powerful platforms, and would de facto constitute the only real change to the current status quo in legal terms, thus improving the position of platforms, but not of right holders.

Much of this complaint is complete bullshit. Article 13 has never been about "codifying existing case-law." It has always been about upending case law in Europe (and elsewhere) to completely gut intermediary liability protections, end user-generated content platforms, and turn the internet into a TV-like broadcast system, where the legacy companies have "control" again (i.e., they get to extract monopoly rents as gatekeepers). The fact that the trilogue negotiations have introduced safe harbors should be seen as a good thing -- but obviously not to the signatories of this letter.

Incredibly, the signers of the letter actually ask the negotiators to drop Article 13, or, at the very least, limit it to applying only to musical works. That would still be a problem, but it would certainly stop most of the collateral damage that Article 13 would cause in its present state.

Meanwhile, many other companies are recognizing just how damaging Article 13 would be. Reddit has started alerting all of its EU users (and pointing them to our very own DontWreckThe.Net website), pointing out how disastrous the EU Copyright Directive would be for everyone who uses Reddit:

The problem with the Directive lies in Articles 11 (link licensing fees) and 13 (copyright filter requirements), which set sweeping, vague requirements that create enormous liability for platforms like ours. These requirements eliminate the previous safe harbors that allowed us the leeway to give users the benefit of the doubt when they shared content. But under the new Directive, activity that is core to Reddit, like sharing links to news articles, or the use of existing content for creative new purposes (r/photoshopbattles, anyone?) would suddenly become questionable under the law, and it is not clear right now that there are feasible mitigating actions that we could take while preserving core site functionality. Even worse, smaller but similar attempts in various countries in Europe in the past have shown that such efforts have actually harmed publishers and creators...

Accordingly, we hope that today's action will drive the point home that there are grave problems with Articles 11 and 13, and that the current trilogue negotiations will choose to remove both entirely. Barring that, however, we have a number of suggestions for ways to improve both proposals. Engine and the Copia Institute have compiled them here at https://dontwreckthe.net/. We hope you will read them and consider calling your Member of European Parliament (look yours up here). We also hope that EU lawmakers will listen to those who use and understand the internet the most, and reconsider these problematic articles. Protecting rights holders need not come at the cost of silencing European internet users.

Also, the massive video streaming site Twitch has now started alerting users to the harms of Article 13 as well:

Article 13 changes the dynamic of how services like Twitch have to operate, to the detriment of creators.

Because Article 13 makes Twitch liable for any potential copyright infringement activity with uploaded works, Twitch could be forced to impose filters and monitoring measures on all works uploaded by residents of the EU. This means you would need to provide copyright ownership information, clearances, or take other steps to prove that you comply with thorny and complicated copyright laws. Creators would very likely have to contend with the false positives associated with such measures, and it would also limit what content we can make available to viewers in the EU.

Operating under these constraints means that a variety of content would be much more difficult to publish, including commentary, criticism, fan works, and parodies. Communities and viewers everywhere would also suffer, with fewer viewer options for entertainment, critique, and more.

So, at this point, we have the internet platforms calling out how the Copyright Directive will harm all sorts of creators by making the platforms they rely on impossible to operate. We have the film and sports industries complaining that there might actually be some safe harbors included in Article 13, which would apparently ruin the whole point for them (!?!?!?!?!?). The only party that apparently still thinks Article 13 is a good thing is the legacy recording industry, which has been fairly open about the fact that the entire point of Article 13 is to force YouTube to pay it more (even though it wouldn't actually do that).

So, hey, maybe it's time to scrap Articles 11 and 13, rather than rushing through -- in backroom negotiations, led by bureaucrats who clearly don't understand the impacts of what they're proposing -- copyright rules that will have a massive impact on how the internet works?

