Mike Masnick’s Techdirt Profile


About Mike Masnick, Techdirt Insider

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick

Posted on Techdirt - 17 July 2018 @ 9:26am

Copyright As Censorship: FIFA's Overaggressive Copyright Takedowns Target Fans Celebrating And Pussy Riot Protesting

from the copyright-censors dept

One of the talking points we heard in the run-up to the EU Parliament's vote on the EU Copyright Directive was the laughable claim that Article 13 -- which would require mandatory upload filters for many sites -- could not possibly lead to censorship. Here was what UK collection society PRS for Music had to say about that issue:

...the argument is flawed for the simple reason that it assumes creators and producers are incentivised to block access to their works.

Centuries of copyright have proven this is not the case. Indeed, one of the core principles of copyright is that it incentivises the licensing of works. Requiring online platforms to obtain a licence will not lead to mass-scale blocking of copyright works online.

We talked about how silly this was in response (and pointed to dozens of articles we've written in the past about how copyright is used for censorship), but let's add another one to the pile. As you know, the World Cup just ended this past weekend, and FIFA, which certainly has some history of being overaggressive on the "intellectual property" side of things, apparently was working overtime getting videos taken down from various platforms.

This resulted in lots of outraged fans, especially over insane situations like the one in which Kathryn Conn posted a 5-second video of her 7-year-old son celebrating a goal. She posted it to Twitter, where it was promptly taken down thanks to a highly questionable DMCA notice from FIFA. It is positively bizarre that anyone could possibly think that this video infringed on anyone's copyright, or that it somehow should require "licensing" from FIFA to show your 7-year-old celebrating a goal.

But it's not just about taking down what some might consider "inconsequential" videos of fans celebrating. As you may have heard, the well-known collective Pussy Riot staged a protest by having some of its members run onto the field during the final between France and Croatia. All of those involved were arrested, jailed for 15 days, and banned from attending any sports event for three years.

And... you guessed it, it appears that FIFA decided to take the matter into its own hands and sent a DMCA takedown to have the video disappeared:

Oh, but no, copyright is never used for censorship, is it? It's just magically taking down videos of political protests... because it's an incentive to license the material, right?


Posted on Techdirt - 13 July 2018 @ 9:20am

Misleading Subscription Practices At The Financial Times

from the come-on,-eileen dept

We've spent years highlighting how ISPs in particular tend to really screw customers over with things like hidden fees or (a personal least favorite) "low introductory prices" that hide the price jump you'll face at the end of the term. Broadband providers can often get away with those practices thanks to absentee overseers at the FCC/FTC and, importantly, the lack of competition. But it's absolutely insane to see companies in competitive or struggling industries pulling the same kinds of stunts. Right now there's all this concern about media business models, and lots of publications are pushing people to sign up for their subscription plans. There are lots to choose from, and playing stupid games is not a good idea. That's why I was a bit flabbergasted by the following story, which comes from Hersh Reddy, who co-hosts the Techdirt Podcast. He shared with me a chat he had with the Financial Times.

You can read the whole insane thing below, in which it appears that FT's policies are designed to trick people (to be clear, it's not at all the fault of the poor woman he's speaking to). Specifically, it appears that FT has two "cheap" offers to try to get people in the door: one that is $1 for the first 4 weeks, and another that says a full subscription is $144/year. Here's how it looks on the site:

Notice that on the trial part it says: "Not sure which package to choose? Try full access for 4 weeks." That certainly implies that at the end of the 4 weeks (or within an hour of signing up as you'll see below), you should then be able to "choose" another "package" from this page. But that's not what happened with Hersh, as you'll see:

Eileen (9:17:19 AM): How may I help you?

Eileen (9:17:27 AM): Hi. My name is Eileen. It's a pleasure meeting you on chat. How may I assist you?

Me (9:17:51 AM): I just started a trial subscription and the billing page says the subscription rate is $60+ once it goes onto the regular rate.

Me (9:18:15 AM): But when I look on the subscription page there seems to be cheaper options for pure digital.

Eileen (9:18:28 AM): I would be glad to check your query.

Me (9:18:33 AM): Why does the trial convert at such an expensive rate?

Eileen (9:19:03 AM): Trial offer is available for the premium level of access.

Me (9:19:33 AM): So if I use the trial I can't access the other subscription rates?

Eileen (9:19:48 AM): After 4 weeks, the subscription will automatically renewe in to premium, unless you make changes or stop the auto renewal within 4 weeks.

Me (9:20:20 AM): Is it possible to set my desired subscription after the trial period now, or do I have to wait till after the trial period?

Eileen (9:22:27 AM): We can make changes now, and the change will take effect after the trial period.

Eileen (9:22:33 AM): I will need to ask you a couple of questions in order to locate your account and assist you, is that okay?

Me (9:22:59 AM): ok

Eileen (9:23:03 AM): Thank you, can I start by getting your name, email address and phone number?

Me (9:23:13 AM): Hersh Reddy

Me (9:23:18 AM): REDACTED

Me (9:23:22 AM): REDACTED

Eileen (9:23:26 AM): And lastly, I want to make sure that I am looking at the right account, may I have the payment instrument you use for the FT subscription?

Me (9:23:36 AM): A visa credit card

Eileen (9:24:19 AM): Thank you Hersh. Let me pull up your account now.

Eileen (9:25:24 AM): I can see that your trial subscription ends on Aug 7th.

Eileen (9:25:34 AM): This will renew automatically on that day.

Eileen (9:25:48 AM): We have a digital subscription which is USD 36.00 a month.

Eileen (9:26:08 AM): Digital subscription allows you to have access to online articles and app, except for premium content like lex, epaper, em squared and ft confidential research.

Me (9:26:18 AM): I see that there is a Digital subscription on the FT page that is $2.77/week

Eileen (9:27:13 AM): That will be the equivalent weekly cost of digital annually.

Me (9:27:35 AM): Ok, I would like that.

Eileen (9:27:49 AM): USD 335.40 orUSD 6.45.

Eileen (9:27:57 AM): USD 6.45 per week.

Me (9:28:10 AM): It says 2.77/week on the web page

Me (9:28:47 AM): I'm looking at this web page https://www.ft.com/products

Eileen (9:28:49 AM): How much is the annual cost on line?

Eileen (9:29:10 AM): Let me check the link Hersh.

Me (9:29:50 AM): $144 is the annual cost

Eileen (9:30:56 AM): Can you forward the email to me?

Eileen (9:31:13 AM): I mean can you screenshot the page and email that to me.

Eileen (9:31:33 AM): Can I send you an email instead, and please reply with the screenshot?

Me (9:32:20 AM): ok

Eileen (9:33:21 AM): Thank you.

Eileen (9:33:23 AM): Done.

Me (9:34:08 AM): sent it

Me (9:35:25 AM): I also sent a screenshot of the link I clicked on the subscription page

Eileen (9:35:39 AM): Thank you.

Eileen (9:35:46 AM): Allow me to check these further.

Eileen (9:36:08 AM): Right.

Me (9:36:30 AM): You can click through the subscription to the billing page yourself

Eileen (9:36:40 AM): I can go ahead and apply the USD 144.00 rate to your subscription for Digital access after the trial period ends.

Me (9:36:55 AM): Thanks, I appreaciate your help.

Me (9:37:18 AM): Can you send me an email to confirm that you will do that, so I have it for my records.

Eileen (9:37:36 AM): Sorry, when did you see the offer?

Me (9:37:44 AM): Its on the page right now

Eileen (9:37:44 AM): The USD144.00 offer?

Me (9:37:52 AM): I'm looking at it right now

Eileen (9:38:01 AM): Can you also send me the front page withe the date?

Me (9:38:05 AM): yes

Me (9:38:27 AM): There's no date on the front page

Me (9:38:48 AM): FT.com doesn't have a date.

Me (9:39:16 AM): Also, you can verify the rate yourself by just going to FT.com and clicking the subscription page and then clicking the subscription options.

Eileen (9:39:20 AM): That's what I am doing right now.

Eileen (9:39:28 AM): Please hold on for a few minutes.

Me (9:41:36 AM): If you need help, its this page: https://www.ft.com/products

Me (9:41:50 AM): That gives the list of subscription types

Me (9:42:11 AM): Then if you click the second product it will take you to the subscription billing page. At the bottom of the page is the annual rate.

Eileen (9:42:20 AM): Let me access this now.

Me (9:42:40 AM): If you have a problem try clearing your cache and loading the page again.

Eileen (9:43:32 AM): Hersh, what I will do is to verify the offer first with our marketing team.

Eileen (9:43:42 AM): Can I email you back once it is confirmed?

Eileen (9:43:49 AM): Apologies for the delays.

Me (9:43:52 AM): Sure. But don't you see it there on the web page?

Eileen (9:44:27 AM): No.

Eileen (9:44:42 AM): That is the reason I need to confirm the offer with our marketing team.

Me (9:44:46 AM): Did you click the link I sent you?

Me (9:44:48 AM): https://www.ft.com/products

Me (9:45:17 AM): If you have your own subscription it won't show you the subscription page.

Me (9:45:36 AM): You have to use a browser where you are not logged in with your own account to FT.com

Eileen (9:46:04 AM): i know how to do that, Hersh.

Eileen (9:46:23 AM): The thing is, we do not see the USD 144.00 offer on our website.

Me (9:46:40 AM): look I'm sending you another screenshot

Eileen (9:46:41 AM): That is why we need help of our relevant team.

Eileen (9:46:51 AM): Sure,.

Me (9:47:35 AM): Where are you located? Is it possible you get a different page than what I get in the USA?

Me (9:48:00 AM): This is rather extraordianry

Eileen (9:48:03 AM): It has to be USA.

Me (9:48:17 AM): I am in the USA ... are you abroad?

Eileen (9:48:50 AM): FT is global company Hersh.

Eileen (9:49:22 AM): I am not in the USA but I can access the subscription page and we should be aware if there is ongoing offer.

Me (9:49:23 AM): I know. Is the webpage different in different countries? I mean do you have different subscription offer pages based on location?

Eileen (9:49:28 AM): No.

Me (9:49:44 AM): How is it you can't see the same subscription page as me?

Me (9:49:55 AM): Can you use a proxy to browse from the USA?

Eileen (9:50:10 AM): We need confirmation by our relevant team.

Eileen (9:50:29 AM): If you can hold the line that would be better, is that okay Hersh?

Me (9:50:37 AM): yeah, I will hold

Eileen (9:51:27 AM): Thank you very much Hersh.

Eileen (9:51:46 AM): I am consulting them right now.

Eileen (9:55:23 AM): Almost done, Hersh.

Eileen (9:55:27 AM): Please hold on.

Me (9:55:31 AM): yep

Eileen (9:56:14 AM): Thank you for holding.

Eileen (9:56:26 AM): I got a confirmation from the marketing team.

Eileen (9:56:45 AM): The $144.00 annual digital rate is available for new subscribers in the US.

Me (9:56:45 AM): How come you were getting a different web page than me?

Eileen (9:57:41 AM): What I can do is to switch youu subscription to digital and give you a 25% discounted rate.

Me (9:58:13 AM): So you can't do the trial and then the $144 subscription?

Me (9:58:44 AM): do you mean a 25% discount on the $144 rate?

Me (9:58:58 AM): or on the $360 rate?

Eileen (9:59:11 AM): 25% discounted rate is USD 249.08.

Me (9:59:40 AM): I thought you just said the rate was confirmed as $144?

Me (9:59:57 AM): I'm a new subscriber in the USA

Me (10:00:05 AM): I literally just tried to sign up today

Me (10:00:29 AM): In fact, I signed up right before I initiated this chat with you

Eileen (10:00:35 AM): I am sorry, however, you already have a trial subscription.

Me (10:00:41 AM): HAHAHAHAHAHA

Me (10:00:51 AM): Okay, okay

Me (10:00:54 AM): understood

Eileen (10:01:35 AM): If you wish, I can still get an approval if we could apply any discounted rate to your digital subscription that is closest to USD 144.00.

Me (10:01:36 AM): So, Iet me get this. I clicked the link for the trial at $1 for 4 weeks. As a result of this I'm not a new subscriber. So I can't get the $144/year.

Me (10:01:54 AM): Even though ... I JUST signed up

Eileen (10:01:55 AM): That is correct.

Me (10:02:15 AM): Is it possible to cancel my trial membership right now and redo it as the year subscription?

Eileen (10:02:30 AM): Even if you click the link yourself online, it won't allow you to subscribe to it because you already have a trial subscription right now.

Eileen (10:02:39 AM): Technically, you are a subscriber to the FT.

Me (10:03:06 AM): Technically, I get it. But I mean, non-technically, can we cancel my current trial and just put me on the annual subscription.

Eileen (10:03:15 AM): Yes.

Me (10:03:28 AM): At the $144 rate?

Eileen (10:03:42 AM): You can cancel your trial subscription online via my account or click this link https://myaccount.ft.com/

Me (10:04:00 AM): And then redo it at the $144 rate?

Eileen (10:04:19 AM): Closest to $144 rate, but I will ask approval first and I will get back to you on that.

Me (10:05:23 AM): You need to get approval to cancel the trial membership I signed up for just prior to this chat, to convert it to the current annual subscription rate advertised on your newspaper web page?

Eileen (10:05:54 AM): I need to get an approval if I can honour the discounted rate to your subscription.

Me (10:05:57 AM): https://www.youtube.com/watch?v=oc-P8oDuS0Q


Me (10:06:13 AM): hahaha

Eileen (10:06:29 AM): However, it won't be USD 144.00, but I can check if there is a discounted rate closest to USD 144.00.

Me (10:06:47 AM): Come on ... that is ridiculous

Me (10:07:23 AM): So because I clicked the link that said get 4 weeks for $1/week, now I can't get the annual subscription at $144?

Eileen (10:07:33 AM): That is correct.

Me (10:08:14 AM): Ok, take care. At the very least I'll get some karma on Reddit.

Me (10:08:28 AM): You don't need to ask your manager. I'll just do the subscription through the website.

Me (10:08:29 AM): Cheers.

Eileen (10:08:53 AM): Are you sure you do not need me to get an approval for this Hersh?

Me (10:09:42 AM): I'm sure. Because I don't want to subscribe at a rate more than $144/year, just on principle. I'll just do WSJ for this year.

Me (10:09:57 AM): Take care

Eileen (10:10:21 AM): Well, if you will see that USD 144.00 rate on the subscription page of the FT.com, you should also see, I believe that is written on the website, that this is available for new subscribers that is why it is on the subscription landing page.

Eileen (10:10:33 AM): Thank you, too Hersh.

Me (10:10:39 AM): Of course. Not your fault. FT's fault.

Eileen (10:10:55 AM): Apologies for the confusion.

Eileen (10:11:24 AM): Hersh, I will still make a request for approval and I will send you an email for the progress within 24 hours.

Eileen (10:11:35 AM): I will keep the case open.

Eileen (10:11:35 AM): May I know if there will be any more assistance I can offer today?

Me (10:11:50 AM): That's it

Eileen (10:12:04 AM): Thank you for contacting Financial Times Customer Services. Have a great day! Goodbye now, Hersh.

As a postscript, Eileen did eventually call him and say that she got approval to offer him a rate of $149 -- which is still $5 more than it should be. Also, as a clarification: while she claims in the chat that the signup page for the $144 price says it's for new customers only, that is not true. It's too big to stick in the post, but you can see it here if you'd like.

These are the kinds of shady bait-and-switch practices that broadband companies try to get away with. It's pretty shameful to see the FT trying them as well, especially at a time when newspapers are desperate for subscribers. It certainly seems like a damn good reason not to give any money to the FT. Their reporting may be good, but these practices are sketchy.


Posted on Techdirt - 12 July 2018 @ 1:29pm

Court Won't Rehear Blurred Lines Case, Bad News For Music Creativity

from the unfortunate dept

Back in March we wrote about the terrible decision by the 9th Circuit to uphold the equally awful lower court ruling that the Pharrell/Robin Thicke song "Blurred Lines" infringed on Marvin Gaye's song "Got To Give It Up." If they had actually copied any of the copyright-protected elements of the original, this case wouldn't be a big deal. But what was astounding about this ruling is that nowhere is any copyright-protected expression of Gaye's shown to have been copied in "Blurred Lines." Instead, they are accused of making the song have a similar "feel." That's... bizarre, because "feel" or "groove" is not protectable subject matter under copyright law. And yet both the lower court and the appeals court have upheld it. And now the 9th Circuit has refused to rehear the case en banc, though it has issued a slightly amended opinion, removing a single paragraph concerning the "inverse ratio rule" -- the question of whether greater access to a song means you don't have to show as much "substantial similarity."

Again, this is a ruling that should greatly concern all musicians (even those who normally disagree with us on copyright issues). This is not a case about copying a song. This is a ruling that says you can't pay homage to another artist. It's a case saying that you can't build off of another artist's general "style" or create a song "in the style" of an artist you appreciate. This is crazy. Paying homage to other artists, or writing a song in the style of another artist, is how most musicians first learn to create songs. It does no harm to the original artist, and often introduces more people to their work.

Pharrell and Thicke can (and perhaps will?) ask the Supreme Court to hear an appeal, but, as always, it's pretty rare to get the Supreme Court to do so. And, on top of that, as long as Ruth Bader Ginsburg remains on the court, the court has a terrible record on getting copyright cases right (and, yes, it's almost always Ginsburg writing the awful copyright rulings).

As we noted last year, this case is already having chilling effects on musicians and songwriters, who are literally afraid to even name-check their influences for fear of a lawsuit. And similar lawsuits are rapidly being filed. Indeed, Ed Sheeran is dealing with a lawsuit over whether or not his song "Thinking Out Loud" is too close to Marvin Gaye's "Let's Get It On." The songs do have the same chord progression, but are pretty different. Of course, having the same chord progression allowed Sheeran to sometimes perform a mashup of the two songs at concerts. Again, that's a tribute, but it's now being used against him.

Of course, that case has taken a really weird turn in that a new "party" has entered the fray. An organization called "Structured Asset Sales" wants to be a plaintiff too. And since you probably don't recall Structured Asset Sales' last big chart-topping hit, it's apparently an operation that "securitized" the future earnings of various musicians (remember Bowie Bonds?). And one of the artists using Structured Asset Sales is Ed Townsend Jr., a co-author of "Let's Get It On." The Hollywood Reporter link above has a lot more details on what's going on in that case (which is wacky). In short, SAS tried to get into an earlier case filed by Townsend's heirs. That attempt to join the lawsuit was rejected by the courts, and while that's being appealed, it has filed a new lawsuit.

And all this because two songs have the same general chord progression. I realize that, for some non-music nerds, having the same chord progression may suggest copying, so I'd suggest you watch the following few videos to disabuse you of that notion:

Watch both of those videos, and then recognize how all those songs could potentially be infringing under the Blurred Lines ruling, which tragically will stand thanks to the 9th Circuit's failure to correct its horrible mistake. Hopefully the Supreme Court will actually weigh in, but that's both unlikely and... potentially not helpful.


Posted on Techdirt - 12 July 2018 @ 11:59am

SCOTUS Nominee Brett Kavanaugh's Problematic Opinion On Anti-SLAPP Laws

from the bad-for-free-speech dept

So, Tim Cushing has just taken a peek at Supreme Court nominee Brett Kavanaugh's 4th Amendment rulings, and Karl already looked at his questionable opinion concerning net neutrality (in which he argued, bizarrely, that blocking content and services on a network is a 1st Amendment "editorial" decision by broadband providers). Of course, that's just one of his 1st Amendment cases. I wanted to look over some of Kavanaugh's other free speech related opinions. Ken "Popehat" White has done a pretty good job covering most of them, noting that for the most part, Kavanaugh takes a fairly strong First Amendment approach in the cases that come to him, and seems unlikely to upset the apple cart on First Amendment law in any significant way (if you want to see more of his opinions, this is a good place to start).

As Ken notes, there really isn't that much to comment on in most of those decisions, and Karl already wrote about the weird net neutrality one, but I did want to focus on another First Amendment-adjacent case where I think Kavanaugh was incorrect: the question of whether or not state anti-SLAPP laws apply in federal court. To be clear, this is not really a First Amendment question on its own; it's a question about which laws apply where. The case is Abbas v. Foreign Policy Group, and Kavanaugh wrote the majority opinion, which said that DC's anti-SLAPP law cannot be used in federal court.

Ken is correct that this ruling does not suggest that Kavanaugh is uninterested in protecting First Amendment rights. But that still does not mean that Kavanaugh's ruling is correct. Ken notes that some other judges have agreed with Kavanaugh, but it's also worth pointing out that even more judges have disagreed with him. Indeed, most other circuits that have taken up this issue have ruled the other way, holding that state anti-SLAPP laws can be used in federal court. The debate does not come down to a First Amendment issue, but rather to whether an anti-SLAPP law is mainly "substantive" or "procedural." Substantive state laws apply in federal court, while procedural ones do not. Anti-SLAPP laws have elements of both, which is why there are arguments over this. But for a variety of reasons, it seems clear to us (and to many other judges) that the substantive aspects of most anti-SLAPP laws mean they're perfectly valid in federal court.

If you read Kavanaugh's ruling, his explanation for his reasoning is... minimal. He calls the arguments in favor of the other side "creative," and some of them were. But on the meat of the question -- is DC anti-SLAPP law more procedural or substantive -- he basically just says he disagrees with courts that found otherwise, and agrees with the judges that agree with him:

...the defendants cite some other courts that have applied State anti-SLAPP acts’ pretrial dismissal provisions notwithstanding Federal Rules 12 and 56. See, e.g., Godin v. Schencks, 629 F.3d 79, 81, 92 (1st Cir. 2010); Henry v. Lake Charles American Press, L.L.C., 566 F.3d 164, 168-69 (5th Cir. 2009); United States ex rel. Newsham v. Lockheed Missiles & Space Co., 190 F.3d 963, 973 (9th Cir. 1999); see generally Charles Alan Wright et al., 19 Federal Practice & Procedure § 4509 (2d ed. 2014). That is true, but we agree with Judge Kozinski and Judge Watford that those decisions are ultimately not persuasive.

Yes, but why? Kavanaugh does not really explain. And that's too bad, because the reasoning in those other courts is something that I do find pretty damn persuasive. Anti-SLAPP laws do have a procedural component, but they are primarily substantive in protecting the First Amendment rights of speakers. In particular, the Godin v. Schencks ruling gets into the weedy details of why the anti-SLAPP statute in that case does not bump up against or contradict federal procedures, while the Henry v. Lake Charles American Press ruling goes even further in highlighting the importance of protecting free expression:

Anti-SLAPP statutes such as Article 971 aim to curb the chilling effect of meritless tort suits on the exercise of First Amendment rights, and as the Supreme Court stated in Elrod v. Burns, 427 U.S. 347, 373, 96 S.Ct. 2673, 49 L.Ed.2d 547 (1976), "The loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury." Indeed, the Supreme Court has time and again emphasized the importance of First Amendment rights. See, e.g., Curtis *181 Publ'g Co. v. Butts, 388 U.S. 130, 165, 87 S.Ct. 1975, 18 L.Ed.2d 1094 (1967) (Warren, C.J., concurring in the result) (noting "the fundamental interests which the First Amendment was designed to protect").... Article 971 thus provides for the avoidance of a trial that would imperil a substantial public interest. Indeed, as Article 971 embodies a legislative determination that parties should be immune from certain abusive tort claims that have the purpose or effect of imperiling First Amendment rights, "there is little room for the judiciary to gainsay its `importance.'"

Again, multiple courts have ruled this way as well.

At best, Kavanaugh argues that anti-SLAPP laws basically cover the same ground as federal procedure rules concerning motions to dismiss and motions for summary judgment. As he summarizes:

Federal Rules 12 and 56 answer the same question as the D.C. Anti-SLAPP Act, and those Federal Rules are valid under the Rules Enabling Act. A federal court exercising diversity jurisdiction therefore must apply Federal Rules 12 and 56 instead of the D.C. Anti-SLAPP Act’s special motion to dismiss provision.

But in the Godin case, the 1st Circuit does (what I believe is) a much more thorough analysis of the (admittedly different, but still similar) anti-SLAPP law in that case, and its relationship to Federal Rules 12 and 56, basically noting that the anti-SLAPP law covers different ground, and doesn't displace federal procedure:

Federal Rules 12(b)(6) and 56 are addressed to different (but related) subject-matters. Section 556 on its face is not addressed to either of these procedures, which are general federal procedures governing all categories of cases. Section 556 is only addressed to special procedures for state claims based on a defendant's petitioning activity. In contrast to the state statute in Shady Grove, Section 556 does not seek to displace the Federal Rules or have Rules 12(b)(6) and 56 cease to function. Cf. Morel, 565 F.3d at 24. In addition, Rules 12(b)(6) and 56 do not purport to apply only to suits challenging the defendants' exercise of their constitutional petitioning rights. Maine itself has general procedural rules which are the equivalents of Fed.R.Civ.P. 12(b)(6) and 56. See Me. R. Civ. P. 12; Me. R. Civ. P. 56. That fact further supports the view that Maine has not created a substitute to the Federal Rules, but instead created a supplemental and substantive rule to provide added protections, beyond those in Rules 12 and 56, to defendants who are named as parties because of constitutional petitioning activities.

Crucially, as the Godin ruling notes, anti-SLAPP laws change the burden of proof, and that is "substantive," meaning it should be allowed in federal court:

Neither Fed.R.Civ.P. 12(b)(6) nor Fed. R.Civ.P. 56 determines which party bears the burden of proof on a state-law created cause of action. See, e.g., Coll v. PB Diagnostic Syst., Inc., 50 F.3d 1115, 1121 (1st Cir.1995). And it is long settled that the allocation of burden of proof is substantive in nature and controlled by state law. Palmer v. Hoffman, 318 U.S. 109, 117, 63 S.Ct. 477, 87 L.Ed. 645 (1943); Am. Title Ins. Co. v. E.W. Fin. Corp., 959 F.2d 345, 348 (1st Cir.1992).

Further, Section 556 provides substantive legal defenses to defendants and alters what plaintiffs must prove to prevail. It is not the province of either Rule 12 or Rule 56 to supply substantive defenses or the elements of plaintiffs' proof to causes of action, either state or federal.[16]

Because Section 556 is "so intertwined with a state right or remedy that it functions to define the scope of the state-created right," it cannot be displaced by Rule 12(b)(6) or Rule 56.

Even in the recent 10th Circuit ruling that says New Mexico's anti-SLAPP law shouldn't apply in federal court (which Cathy Gellis argues convincingly was incorrectly decided), the decision was very specific to the language of New Mexico's fairly weak anti-SLAPP law -- which didn't shift the burden of proof (taking away one -- though not all -- of the key arguments that the crux of an anti-SLAPP law is substantive rather than procedural).

Admittedly, this is deep, deep into the weeds of federal procedure, but it is still disappointing that Kavanaugh went the other direction on the case, waving off the fairly persuasive arguments that other judges have made by suggesting that anti-SLAPP laws somehow replace federal procedure. They do not.

Of course, the best way around this open question is to have a federal anti-SLAPP law, but tragically Congress has so far failed to seriously explore one whenever such bills have been introduced (and President Trump has certainly shown absolutely no interest in signing such a bill should it pass). As Ken notes in his piece, Kavanaugh does seem generally appreciative of anti-SLAPP laws, but feels that he can't allow DC's to be used in federal court for procedural reasons. That doesn't suggest that he is bad on free speech -- indeed, in that very same ruling he upholds the dismissal (with prejudice) of the defamation case at issue, just using the 12(b)(6) motion to dismiss process rather than the DC anti-SLAPP rule.

And thus, I disagree with Kavanaugh's ruling on using DC's anti-SLAPP in federal court (as I disagree with his ruling on the 1st Amendment's applicability to net neutrality), but neither of those appear to diminish his general record as being strong on First Amendment issues.


Posted on Techdirt - 11 July 2018 @ 9:24am

Shocker: DOJ's Computer Crimes And Intellectual Property Section Supports Security Researchers' DMCA Exemptions

from the say-what-now? dept

Well, here's a surprise for you. The DOJ's Computer Crime and Intellectual Property Section (CCIPS) has weighed in to support DMCA 1201 exemptions proposed by computer security researchers. This is... flabbergasting.

In case you don't know, Section 1201 of the Digital Millennium Copyright Act (DMCA) is the "anti-circumvention" part of the law. It's the part of the law that makes it infringement to get around any "technological measure" used to lock down copyright-covered material, even if breaking those locks has nothing whatsoever to do with copyright infringement. It's a horrible law that has created all sorts of negative consequences, including costly and ridiculous lawsuits about things having nothing to do with copyright -- including garage door openers and printer ink cartridges. In fact, Congress knew the law was dumb from the beginning, but rather than dump it entirely as it should have done, a really silly "safety valve" was added in the form of the "triennial review" process.

The triennial review is a process that happens every three years (obviously, per the name), in which anyone can basically beg the Copyright Office and the Librarian of Congress to create exemptions for cracking DRM for the next three years (an exemption -- stupidly -- only lasts those three years, meaning people have to keep reapplying). Over the years, this has resulted in lots of silliness, including the famous decision by the Librarian of Congress to not renew an exemption to unlock mobile phones a few years back. Many of the exemption requests come from security researchers who want to be able to crack systems without being accused of copyright infringement -- which happens more frequently than you might think.

Historically, law enforcement has often been against these exemptions, because (in general) they often appear to dislike the fact that security researchers find security flaws. This is, of course, silly, but many like to take a "blame the messenger" approach to security research. That's why this new comment from the DOJ's CCIPS is so... unexpected.

Many of the changes sought in the petition appear likely to promote productive cybersecurity research, and CCIPS supports them, subject to the limitations discussed below.

Incredibly, CCIPS even points out that those who are opposed to these cybersecurity research exemptions are misunderstanding the purpose of 1201, and that it should only be used to stop activity that impacts copyright directly. This is the kind of thing we've been arguing for years, but many companies and government agencies have argued that because 1201 helps them, no exemptions should be granted. But here, the DOJ explains that's not how it works:

Some comments opposing removal of any existing limitation on the security research exemption suggest, implicitly or explicitly, that the DMCA’s security research exemption itself poses a danger merely because it fails to prohibit a type of research to which the commenter objects. However, the purpose of the DMCA is to provide legal protection for technological protection measures, ultimately to protect the exclusive rights protected by copyright. As critically important as the integrity of voting machines or the safety of motorized land vehicles are to the American public, the DMCA was not created to protect either interest, and is ill-suited to do so. To the extent such devices now contain copyrighted works protected by technological protection measures, the DMCA serves to protect those embedded works. However, the DMCA is not the sole nor even the primary legal protection preventing malicious tampering with such devices, or otherwise defining the contours of appropriate research. The fact that malicious tampering with certain devices or works could cause serious harm is reason to maintain legal prohibitions against such tampering, but not necessarily to try to mirror all such legal prohibitions within the DMCA’s exemptions.

There's a lot more in the comment, but... I'm actually impressed. Of course, the letter does note that part of the reason it wants this exemption is to enable security researchers to figure out how to crack into encrypted phones, but that's actually a reasonable position for the DOJ to take. Far better than seeking to backdoor encryption. Finding flaws is fair game.

All in all, this is a welcome development, having the DOJ's CCIPS recognize that security research is useful, and that it shouldn't be blocked by nonsense copyright anti-circumvention rules.


Posted on Techdirt - 10 July 2018 @ 3:44pm

Fake News Is A Meaningless Term, And Our Obsession Over It Continues To Harm Actual News

from the forget-fake-news dept

Many people forget now, but in the wake of the 2016 election, it was mainly those opposed to Donald Trump who were screaming about "fake news." They wanted an explanation for what they believed was impossible -- and one thing that many, especially in the journalism field focused on, were the made up stories that got shared wildly on Facebook. At the time, we warned that nothing good would come from so many people blaming "fake news" for the election, and I think it's fair to say we were correct on that. President Trump quickly co-opted the phrase and turned it into a mantra directed at any news story about him or his administration that he didn't like.

And, of course, the term was always meaningless. It encompassed such a broad spectrum of things -- from completely made up stories, to stories with bad sourcing or an error, to stories that were spun in a way people didn't like or found misleading, to stories with a minor mistake, to just stories someone didn't like. But each of those is very, very different, and the way that different news organizations respond to these issues can be very different as well. For example, professional publications that make mistakes will publish corrections when they discover they've made an error. Sometimes they don't do so well, and they don't always do a very good job of publicizing the correction -- but they do strive to get things right. That's different than publications that simply put up purely fake stuff, just for the hell of it. And there really aren't that many such sites. But by lumping them all in as fake news, people start to blur the distinctions, and think that basically everyone is just making shit up all the time.

That culminates in a new report claiming (though I question the methodology on this...) that 72% of Americans surveyed believe that traditional news sources "report news they know to be fake, false, or purposely misleading." The breakdown by political affiliation is that 53% of Democrats think this happens "a lot" or "sometimes," 79% of Independents, and 92% of Republicans. Of course, if you dug into the numbers, I'm guessing that the Democrats would point to Fox News as their proof, while the Republicans would point to MSNBC, CNN and maybe the NY Times/Washington Post.

Of course, most of this is silly. Some of it is the fact that the vast majority of news consumers don't know the difference between the hard news divisions of these news organizations and the "commentary" side, with the latter being more on the entertainment, bomb-throwing end of things, staking out ridiculous positions because that's what they're paid to do. The actual news orgs all do actually tend to want to do good reporting. They aren't always good at that -- in fact, they're often bad at it. But that's very, very different than deliberately spreading "fake, false or purposely misleading" news.

However, simply lumping mistakes or a spin you dislike on coverage as "fake news" doesn't help. It just makes things more ridiculous and gets people up in arms more. And, again, just as we predicted, with the push to clamp down on "fake news," the end result is actually suppressing news. Facebook -- which was the main target of the whining from the anti-Trump world on "fake news" -- basically threw up its hands and said it would decrease all the news that people saw. And that means that every publication that was heavily relying on Facebook for traffic (i.e., nearly every publication except for us at Techdirt, who ignored Facebook) is now getting slammed.

Slate tried to get news orgs to talk about how much their Facebook traffic dropped and no one would talk, so it revealed its own traffic decline from Facebook, dropping from 28 million clicks in January 2017 (about 1/3 of its total traffic) down to less than 4 million in May 2018 (now representing 11% of its traffic) -- a drop of 87%. The site claims Facebook traffic has dropped 55% in 2018 alone. Again, we deliberately avoided "playing the Facebook game" over the last decade, so the site has never been a significant source of traffic. However, for comparison purposes, I checked, and Facebook represented 2.7% of our own traffic in January of 2017, and 2.4% of our traffic in May of 2018 -- basically no different, but also close to a rounding error.

But really, what this comes down to is that the whole "fake news" claim has always been silly and the calls to "do something" about fake news have really only served to make things worse. Using such a non-descriptive term has given lots and lots of people an excuse to mock or ignore any news or news organizations they dislike. And it's given an excuse to Facebook to step back from the news business altogether. None of that makes the public better news consumers or more media literate. All it does is keep people in their silos getting angry at each other.


Posted on Techdirt - 10 July 2018 @ 9:51am

A Numerical Exploration Of How The EU's Article 13 Will Lead To Massive Censorship

from the it's-not-good-folks dept

One of the key talking points from those in favor of Article 13 in the EU Copyright Directive is that people who claim it will lead to widespread censorship are simply making it up. We've explained many times why this is untrue, and how any time you put in place a system for taking down content, tons of perfectly legitimate content gets caught up in it. Some of this is from malicious takedowns, but much of it is just because algorithms make mistakes. And when you make mistakes at scale, bad things happen. Most of you are familiar with the concept of "Type 1" and "Type 2" errors in statistics. These can be more simply described as false positives and false negatives. Over the weekend, Alec Muffett decided to put together a quick "false positive" emulator to show how much of an impact this would have at scale and tweeted out quite a thread, which has since been un-threaded into a webpage for easier reading. In short, at scale, the "false positive" problem is pretty intense. A ton of non-infringing content is likely to get swept up in the mess.

Using a baseline of 10 million pieces of content, an accuracy level much higher than reality (99.5%), and an assumption that 1 in 10,000 items is "bad" (i.e., "infringing"), you end up with a ton of legitimate content taken down to stop just a bit of infringement:

So basically in an effort to stop 1,000 pieces of infringing content, you'd end up pulling down 50,000 pieces of legitimate content. And that's with an incredible (and unbelievable) 99.5% accuracy rate. Drop the accuracy rate to a still optimistic 90%, and the results are even more stark:

Now we're talking about pulling down one million legitimate, non-infringing pieces of content in pursuit of just 1,000 infringing ones (many of which the system still misses).

Of course, I can hear the howls from the usual crew, complaining that the 1 in 10,000 number is unrealistic (it's not). Lots of folks in the legacy copyright industries want to pretend that the only reason people use big platforms like YouTube and Facebook is to upload infringing material, but that's laughably wrong. It's actually a very, very small percentage of such content. And, remember, of course, Article 13 will apply to basically any platform that hosts content, even ones that are rarely used for infringement.

But, just to humor those who think infringement is a lot more widespread than it really is, Muffett also ran the emulator with a scenario in which 1 out of every 500 pieces of content are infringing and (a still impossible) 98.5% accuracy. It's still a disaster:

In that totally unrealistic scenario with a lot more infringement than is actually happening and with accuracy rates way above reality, you still end up pulling down 150,000 non-infringing items... just to stop less than 20,000 infringing pieces of content.

Indeed, Muffett then figures out that with a 98.5% accuracy rate, if a platform has 1 in 67 items as infringing, at that point you'll "break even" in terms of the numbers of non-infringing content (147,000) that is caught by the filter, to catch an equivalent amount of infringing content. But that still means censoring nearly 150,000 pieces of non-infringing content.
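That break-even point falls out of the arithmetic directly. Here's a quick sketch using the article's numbers; the symmetric-accuracy assumption (one accuracy rate applied to both infringing and legitimate items) mirrors how Muffett's emulator is described, and the variable names are just for illustration:

```python
# Break-even: false positives equal true positives when
# (T - b) * (1 - a) == b * a, which simplifies to b = T * (1 - a).
T, a = 10_000_000, 0.985   # total items, filter accuracy
b = T * (1 - a)            # infringing items at break-even: ~150,000

print(round(T / b))              # ~67, i.e. about 1 item in 67 infringing
print(round((T - b) * (1 - a)))  # ~147,750 legitimate items still blocked
```

In other words, even at the break-even prevalence, the filter is still censoring roughly 147,000 legitimate items just to catch the same number of infringing ones.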

This is one of the major problems that people don't seem to comprehend when they talk about filtering (or even human moderating) content at scale. Even at impossibly high accuracy rates, a "small" percentage of false positives leads to a massive amount of non-infringing content being taken offline.
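Muffett's emulator works with random draws, but the expected values behind the scenarios above can be sketched in a few lines. This assumes, as his emulator does, a single accuracy rate applied symmetrically to infringing and legitimate items; the function and its names are illustrative, not his actual code:

```python
def filter_outcomes(total_items, bad_rate, accuracy):
    """Expected outcomes of a content filter that classifies each
    item correctly with probability `accuracy`."""
    bad = total_items * bad_rate           # actually infringing items
    good = total_items - bad               # legitimate items
    return {
        "caught": bad * accuracy,          # infringing items blocked
        "missed": bad * (1 - accuracy),    # infringing items that slip through
        "censored": good * (1 - accuracy), # legitimate items wrongly blocked
    }

# First scenario: 10M items, 1 in 10,000 infringing, 99.5% accuracy
r = filter_outcomes(10_000_000, 1 / 10_000, 0.995)
print(round(r["censored"]))  # ~49,995 legitimate items taken down
print(round(r["caught"]))    # ~995 infringing items actually caught
```

Dropping the accuracy argument to 0.90 in the same call yields roughly a million wrongly blocked items, matching the article's second scenario.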

Perhaps some people feel that this is acceptable "collateral damage" to deal with the relatively small amount of infringement on various platforms, but to deny that it will create widespread censorship of legitimate and non-infringing content is to deny reality.


Posted on Techdirt - 9 July 2018 @ 1:36pm

Blaming The Messenger (App): WhatsApp Takes The Blame In India Over Violence

from the this-doesn't-help dept

You may have heard over the past few weeks that there's been some mob violence in India in response to totally false information that is being spread. But if you've heard about it, it's almost certainly in conjunction with a lot of finger pointing not at the people spreading the misinformation, or those, you know, lynching people based on false information. Instead, the blame is being squarely placed... on the app where the misinformation is being spread: WhatsApp.

A mob in India lynched five people after rumors spread by WhatsApp messages prompted suspicion that they were child abductors, the latest in a spate of violent crimes linked to the messaging service.

The victims were killed in Dhule district of the western state of Maharashtra on Sunday morning after locals accused them of being part of a gang of "child lifters," police said.

It was the fourth time in recent weeks that WhatsApp messages have inspired deadly attacks in India.

This has resulted in many, many calls for WhatsApp (and its parent company, Facebook) to "do something" about this. Indeed, the Indian government has more or less demanded that WhatsApp stop "false messages" from being spread on its app. Of course, that's... not easy. It's not easy for a variety of reasons, both technical and cultural. On the technical side, WhatsApp is (famously, and for very good and helpful reasons) using end-to-end encryption. So no one at WhatsApp/Facebook can see what's in those messages. That's a good thing (especially for everyone whining about how Facebook sucks up too much data about us). No one should want WhatsApp to backdoor that encryption in any way, because that just creates even more problems.

And then of course, there's the cultural side of this. Even if WhatsApp could read the messages, how could it possibly know what was legit and what wasn't? And how could it determine that fast enough to stop a mob from going nuts?

WhatsApp has tried to explain all of this to the Indian government -- and rather than understanding these issues, many people seem to be screaming about how this is Facebook/WhatsApp "ignoring" its responsibility.

That doesn't mean things can't be done. Nikhil Pahwa wrote up a thoughtful analysis of how to best tackle the problem noting (correctly) upfront that "This is a complex problem with no single solution: there is no silver bullet here." Importantly, Pahwa notes that many of the "solutions" are not dependent on WhatsApp doing anything, but rather better law enforcement, counter speech efforts, user education and more. He does have some suggestions for how WhatsApp could make a few changes that would create a level of friction for public messages and publicly sharing content -- including tagging public messages with a unique ID tied to the original message creator.

But... there are also potential unintended consequences with these approaches. And others reasonably point out that activists and dissidents could potentially be seriously hurt by some of the proposed suggestions:

And, WhatsApp does appear to be trying to do something. A new version has apparently included a "suspicious link detector." If you're wondering how that's possible with end-to-end encryption, it works locally on your phone. Of course, that also probably limits its effectiveness. It appears to at least notice "suspicious" characters that are designed to mimic more standard characters to fake more well known sites. But it's unclear how much that will actually help.
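WhatsApp hasn't published how its detector works, so the details here are guesswork, but a purely client-side homoglyph check -- flagging domain labels that mix ASCII letters with lookalike non-ASCII characters -- could be sketched like this (the function name and heuristic are hypothetical):

```python
def looks_suspicious(domain):
    """Flag domain labels that mix ASCII letters with non-ASCII
    characters -- the classic setup for lookalike domains such as
    "g\u043e\u043egle.com" (Cyrillic "\u043e" standing in for Latin "o").

    Hypothetical heuristic; WhatsApp's actual logic is unpublished.
    """
    for label in domain.lower().split("."):
        has_ascii = any("a" <= ch <= "z" for ch in label)
        has_non_ascii = any(ord(ch) > 127 for ch in label)
        if has_ascii and has_non_ascii:
            return True
    return False

print(looks_suspicious("google.com"))            # False
print(looks_suspicious("g\u043e\u043egle.com"))  # True
```

Running entirely on-device keeps a check like this compatible with end-to-end encryption, which is presumably the point; the trade-off, as noted above, is that local heuristics are limited and fairly easy to evade.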

Thankfully, at least some are pointing out that blaming WhatsApp makes no sense, and the country's own government really has itself to blame.

The fact that such misinformation not only fuels citizens’ paranoia, but also causes them to take matters into their own hands in droves, is indicative of a lack of faith in the machinery meant to maintain law and order in the country, a lack of understanding of the consequences of participating in these activities, and an inability to find truth beyond the realm of their messaging inbox.

That article, at The Next Web, by Abhimanyu Ghoshal points out that rather than the Indian government demanding WhatsApp fix the problem, it might want to consider using WhatsApp to try to counter the narrative:

Instead of blaming WhatsApp, India’s government needs to tackle the larger issues that are making its people paranoid and vulnerable to the viral spread of lies. Hell, it could even use WhatsApp to do that.

Last year, the Bharatiya Janata Party, which is currently in power in the country, was reportedly working to set up roughly 5,000 WhatsApp groups to spread its campaign messaging for the 2018 assembly elections across the southern state of Karnataka, which is home to some 61 million people.

For starters, it should launch a campaign to encourage people to question the veracity of information they receive via social media and messaging platforms. It also needs to remind people about the laws that they must adhere to within the country’s borders.

It's obviously problematic that misinformation is leading to such violence and death. And, obviously, there's a lot of interest in how these messages are spreading so rapidly using apps like WhatsApp. But we shouldn't get so focused on the shiny new thing as the actual point of failure. There are much larger societal and governmental issues at play. Blaming the app may be politically convenient, but it is not accurate, and is unlikely to help in either the short or the long run.


Posted on Techdirt - 9 July 2018 @ 10:44am

Yes, Privacy Is Important, But California's New Privacy Bill Is An Unmitigated Disaster In The Making

from the not-how-to-do-it dept

We've talked a little about the rush job to pass a California privacy bill -- the California Consumer Privacy Act of 2018 (CCPA) -- and a little about how California's silly ballot initiatives effort forced this mad dash. But a few people have asked us about the law itself and whether or not it's any good. Indeed, some people have assumed that so many lobbyists freaking out about the bill is actually a good sign. But, that is not the case. The bill is a disaster, and it's unclear if the fixes that are expected over the next year and a half will be able to do much to improve it.

First, let's state the obvious: protecting our privacy is important. But that does not mean that any random "privacy regulation" will be good. In a future post, I'll discuss why "regulating privacy" is a difficult task to tackle without massive negative consequences. Hell, over in the EU, they spent years debating the GDPR, and it's still been a disaster that will have a huge negative impact for years to come. But in California they rushed through a massive bill in seven days. A big part of the problem is that people don't really know what "privacy" is. What exactly do we need to keep private? Some stuff may be obvious, but much of it actually depends quite heavily on context.

But the CCPA takes an insanely broad view of what "personal info" is covered. Section 1798.140(o)(1) defines "personal information" to mean... almost anything:

“Personal information” means information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following:

(A) Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.
(B) Any categories of personal information described in subdivision (e) of Section 1798.80.
(C) Characteristics of protected classifications under California or federal law.
(D) Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.
(E) Biometric information.
(F) Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet Web site, application, or advertisement.
(G) Geolocation data.
(H) Audio, electronic, visual, thermal, olfactory, or similar information.
(I) Professional or employment-related information.
(J) Education information, defined as information that is not publicly available personally identifiable information as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part 99).
(K) Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, preferences, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.

So, first off, note that it's not just associated with any individual, but also a "household." And, again, in the list above, there are lots of items and situations where it totally makes sense that that information should be considered "private." But... there are other cases where that might not be so obvious. Let's take "information regarding a consumer's interaction with an internet web site." Okay. Yes, you can see that there are reasonable privacy concerns around a company tracking everything you do on a website. But... that's also generally useful information for any website to have just to improve the user experience -- and basically every website has tended to do some form of user tracking. It's not privacy violating -- it's just understanding how people use your website. So if I'm tracking how many people flow from the front page to an article page... now suddenly that's information impacted by this law. Perhaps the law is intended to mean tracking people on other websites through beacons and such... but the law appears to make no distinction between tracking people on your own website (a feature that's built into basically every webserver) and tracking people elsewhere.

Similarly, "preferences" is private information? Sure, on some sites and for some reasons that makes sense. But, I mean, we do things like let users set in their preferences whether or not they want to see ads on our site at all (in case you don't know, you can turn off ads on this site in the preferences, no questions asked). But... in order to make sure we don't show you ads, we kinda have to keep track of those preferences. Now, I think many of you will recognize that in removing ads, we're actually helping you protect your privacy. But under this law, we're now incentivized not to keep such preferences because doing so is now its own legal liability.

And that leads into the "costs" vs. the "benefits" of such a law. Again, let's be clear: many internet companies have been ridiculously lax in how they treat user data. That's a real problem that we don't, in any way, mean to diminish. But, the costs of this law seem very, very, very likely to outweigh a fairly minimal set of benefits. On the benefits side: yes, a few companies who have abused your data will face some pretty hefty fines for continuing to do so. That's potentially good. But the costs are going to be massive. For this part, I'll borrow from Eric Goldman's analysis of the bill, which is well worth reading. It's long, but he outlines just some of the likely "costs":

How Much Will This Cost? (part 1) Regulated companies–i.e., virtually every business in California–will need to spend money on compliance, including building new processes to deal with the various consumer requests/demands. Adding up all of the expenditures across the California economy, how much will this cost our society? It’s not like these expenditures come from some magic pot of money; the costs will be indirectly passed to consumers. Are consumers getting a good deal for these required expenditures?

How Much Will This Cost? (part 2) Lengthy statutes seem like they are detailed enough to eliminate ambiguity, but it actually works in reverse. The longer the statutes, the more words for litigators to fight over. This law would give us 10,000 different bases for lawsuits. One of the current tussles between the initiative and the bill is whether there is a private right of action. Right now, the bill attempts to limit the private causes of action to certain data breaches. If the private right of action expands beyond that, SEND YOUR KIDS TO LAW SCHOOL.

How Much Will This Cost? (part 3) The bill would create a new “Consumer Privacy Fund,” funded by a 20% take on data breach enforcement awards, to offset enforcement costs and judiciary costs. Yay for the bill drafters recognizing the government administration costs of a major new law like this. Usually, bill drafters assume a new law’s enforcement costs can be buried in existing budgets, but here, the bill drafters are (likely correctly) gearing up for litigation fiestas. But exactly how much will these administration costs be, and will this new fund be sufficient or have we written a blank check from the government coffers to fund enforcement? Most likely, I expect the Consumer Privacy Fund will spur enforcement, i.e., enforcement actions will be brought to replenish the fund to ensure it has enough money to pay the enforcers’ salaries–a perpetual motion machine.

Let's dig into that first part, because it's important. It's important to remind people that this bill is not an "internet" privacy bill. It's an everyone privacy bill. Any business that does business in California more or less is impacted (there are some limitations, but for a variety of reasons -- including vague terms in drafting -- those limitations may be effectively non-existent). So now, in order to comply, any company, including (for example!) a small blog like ours, will have to go through a fairly onerous process to even attempt to be in compliance (though, as point two in Eric's list above shows, even then we'll likely have no idea if we really are).

An analysis by Lothar Determann breaks out some of what we'd need to do to comply, including setting up entirely new processes to handle data access requests, including a system to verify identity and authorization of individuals, and ways of tracking requests and blocking the chance that anyone who opts out of certain practices is offered a chance to opt back in. So... to use the example above, if someone sets a preference on our site not to see ads, but then makes a data privacy request that we not track that data, we then would likely need to first verify the person and that they are the one making the request, and then we'd need to set up a system to (get this!) make sure we somehow track them well enough so that they... um... can't "opt-in" to request that we no longer show them ads again.

Think about that. In order to stop letting someone opt out of ads on our site "for privacy purposes" we'd have to set up a system to track them to make sure that they aren't even offered the possibility of opting out of ads again. It's mind boggling. Also, this:

Consider alternative business models and web/mobile presences, including California-only sites and offerings, as suggested in Cal. Civ. Code §1798.135(b) and charges for formerly free services to address the complex and seemingly self-contradictory restrictions set forth in Cal. Civ. Code §1798.125 on a company's ability to impose service charges on California residents who object to alternate forms of data monetization.

Okay, so I'm all for businesses exploring alternative business models. It's what we talk about here all the time. But... requiring that by law? And, even requiring that we offer a special "California-only" site that we charge for?

I'm having difficulty seeing how that helps anyone's privacy. Instead, it seems like it's going to cost a ton. And... for limited to negative benefit in many cases. Just trying to figure out what this would cost us would mean we'd probably have to let go of multiple writers and spend that money on lawyers instead.

And that leaves out the cost to innovation in general. Again, this is not to slight the fact that abusive data practices are a real problem. But, under this law, it looks like internet sites that want to do any customization for users at all -- especially data-driven customization -- are going to be in trouble. And sure, some customization is annoying or creepy. But an awful lot of it is actually pretty damn useful.

An even larger fear: this could completely cut off more interesting services and business models coming down the road that actually would serve to give end users more control over their own data.

Also, as with the GDPR, there are serious First Amendment questions related to CCPA. A number of people have pointed out that the Supreme Court's ruling in Sorrell v. IMS Health certainly suggests some pretty serious constitutional defects with the CCPA. In Sorrell, the Supreme Court struck down a Vermont law that banned reporting on certain prescription practices of doctors as violating the First Amendment. It's possible that CCPA faces very similar problems. In an analysis by professor Jeff Kosseff, he explains how CCPA may run afoul of the Sorrell ruling:

CCPA is more expansive than the Vermont law in Sorrell, covering personal information across industries. Among its many requirements, CCPA requires companies to notify consumers of the sale of their personal information to third parties, and to opt out of the sale. However, CCPA exempts “third parties” from coverage if they agree in a contract to process the personal information only for the purposes specified by the company and do not sell the information. Although CCPA restricts a wider range of third-party activities than the Vermont statute, it still leaves the door open for some third parties to be excluded from the disclosure restrictions, provided that their contracts with companies are broadly written and they do not sell the data. For instance, imagine a contract that allows a data recipient to conduct a wide range of “analytics.” Because the recipient is not selling the data, the company might be able to disclose personal information to that recipient without honoring an opt-out request.

Under Sorrell, such distinctions might lead a court to conclude that CCPA imposes a content-based restriction on speech. Moreover, the findings and declarations section of the bill cites the revelations about Cambridge Analytica’s use of Facebook user data, and states “[a]s a result, our desire for privacy controls and transparency in data practices is heightened.” This could cause a court to conclude that the legislature was targeting a particular type of business arrangement when it passed CCPA.

I highly recommend reading the rest of Kosseff's analysis as well. He notes that he's generally in favor of many internet regulations -- and has been advocating for cybersecurity regulations and didn't think amending CDA 230 would be that bad (I think he's wrong on that... but...). And his analysis is that CCPA is so bad it cannot become law:

[M]y initial reaction was that nothing this unclear, burdensome, and constitutionally problematic could ever become law.

He then goes on to detail 10 separate serious issues under the law -- and notes that those are just his top 10 concerns.

While it's nice to see lots of people suddenly interested in privacy issues these days, the mad dash to "deal" with privacy through poorly thought out legislation -- whose target and intent are unclear beyond "OHMYGOD INTERNET COMPANIES ARE DOING BAD STUFF!!!" -- isn't going to do any good at all. There is little in here that suggests real protection of privacy -- but plenty of costs that could significantly change the internet many of you know and love, in large part by making it nearly impossible for small businesses to operate online. And, frankly, ceding the internet to the largest providers, who can afford to deal with all this, doesn't exactly seem like a way to encourage smaller operations that actually are concerned about your privacy.

25 Comments | Leave a Comment..

Posted on Techdirt - 9 July 2018 @ 9:15am

More Police Admitting That FOSTA/SESTA Has Made It Much More Difficult To Catch Pimps And Traffickers

from the i-mean,-who-could-have-predicted-it... dept

Prior to the passage of SESTA/FOSTA, we pointed out that -- contrary to the claims of the bill's supporters -- it would almost certainly make law enforcement's job much more difficult, and thus actually would help human traffickers. The key point: no matter what you thought of Backpage, it cooperated with law enforcement, which was able to use the site to track down traffickers. Back in May we noted that police were starting to realize there was a problem here, and it appears that's continuing.

Over in Indianapolis, the police have just arrested their first pimp in 2018, and it involved an undercover cop being approached by the pimp. The reporter asks why there have been so few such arrests, and the police point the finger right at the shutdown of Backpage:

The cases, according to Sgt. John Daggy, an undercover officer with IMPD’s vice unit, have just dried up.

The reason for that is pretty simple: the feds closed police’s best source of leads, the online personals site Backpage, earlier this year.

“We’ve been a little bit blinded lately because they shut Backpage down,” Daggy said. “I get the reasoning behind it, and the ethics behind it, however, it has blinded us. We used to look at Backpage as a trap for human traffickers and pimps.”

Got that? Just as we noted, Backpage was an incredibly useful tool for police to find human traffickers and pimps. And... thanks to do-gooders insisting that Backpage was to blame, Backpage is now gone, and the police can't find the traffickers and pimps anymore.

This does not seem like the way to stop trafficking. It seems like the way to make it more difficult for law enforcement to stop it.

“With Backpage, we would subpoena the ads and it would tell a lot of the story,” Daggy said. “Also, with the ads we would catch our victim at a hotel room, which would give us a crime scene. There’s a ton of evidence at a crime scene. Now, since [Backpage] has gone down, we’re getting late reports of them and we don’t have much to go by.”

The article is quite long and detailed -- and, somewhat incredibly -- even gets Sgt. Daggy to admit that he used to complain about Backpage, and then realized how useful it was as a police tool:

Shortly after Indianapolis hosted the Super Bowl, Daggy was invited to give a presentation at the Conference of Attorneys General.

“I was badmouthing Backpage big time,” he said, “because, you know, we were getting all of our arrests off there. We made over 60 arrests and caught four human trafficking cases during the Super Bowl.”

After he presented, Daggy says the website’s lawyer came up to speak to him.

“She came up to me and said, ‘You know, if we shut down, the ads will go offshore and someone else will pick them up,’” Daggy said.

That’s when Daggy started viewing Backpage as a trap – a useful tool for police trying to find victims who rarely self-report, and perpetrators who rarely come out in the open.

Of course, I'm still waiting to hear what all those people who supported SESTA/FOSTA have to say about all of this. Where is Amy Schumer, who put out a PSA in favor of SESTA/FOSTA, now that police are admitting that it's putting women's lives at risk, and that they're no longer able to track down and stop traffickers? Where are all the moralizing people -- magically connected to the Hollywood studios that have always wanted to attack CDA 230 -- who suddenly found a "cause" in saying they needed to open up CDA 230 to stop sex trafficking? You guys made a problem much, much worse.

35 Comments | Leave a Comment..

Posted on Techdirt - 5 July 2018 @ 3:33pm

Kim Dotcom Loses Latest Round In Extradition Fight, Will Try To Appeal Again

from the this-case-will-never-end dept

Kim Dotcom's ongoing legal saga continues. The latest is that the New Zealand Court of Appeal has rejected his appeal of earlier rulings concerning whether or not he can be extradited to the US. Dotcom and his lawyers insist that they will appeal to the Supreme Court, though there seems to be some disagreement about whether or not that will even be possible. The full ruling is worth a read, though much of it is dry and procedural.

And, I know that many people's opinion of this case is focused almost exclusively on whether they think Kim Dotcom and Megaupload were "good" or "bad," but if you can get past all of that, there are some really important legal issues at play here, especially concerning the nature of intermediary liability protections in New Zealand, as well as the long-arm reach of US law enforcement around the globe. Unfortunately, for the most part the courts have appeared much more focused on the whole "but Dotcom is obviously a bad dude..." angle, and have used that to rationalize rulings against him, even when they don't seem to fit what the law says.

As Dotcom and his lawyers have noted, this has meant that, while there are now three rulings against him on whether or not he can be extradited, they all come to different conclusions as to why. A key issue, as we've discussed before, is the one of "double criminality." For there to be an extraditable offense, the person (or people) in question need to have done something that is a crime in both the US and New Zealand. As Dotcom has argued over and over again, the "crime" that he is charged with is effectively criminal secondary copyright infringement. And that's a big problem, since there is no such thing as secondary criminal copyright infringement under US law. Since Megaupload was a platform, it should not be held liable for the actions of its users. But the US tries to wipe all of that away by playing up that Dotcom is a bad dude, and boy, a lot of people sure infringed copyright using Megaupload. And all of that may be true, but it doesn't change the fact that they should have to show that he actually broke a law in both countries.

Indeed, the lower court basically tossed out the copyright issue in evaluating extradition, but said he could still be extradited over "fraud" claims. Dotcom argued back that without the copyright infringement, there is no fraud, and thus the ruling didn't make any sense.

The Court of Appeal comes to the same conclusion, but for somewhat different reasons. It appears that Dotcom's lawyers focused heavily on what some might consider technical nitpicking in reading the law. Pulling on a tactic that has been tried (unsuccessfully...) in the US, they argued that the text of the copyright statute shows that it only applies to "tangible" copies -- i.e., content on physical media -- rather than to digital-only files. In the US, at least, the Copyright Act is written in such a way that a plain reading of the law says copyright applies only to physical goods, rather than digital files. But, as has happened here, US courts have not been willing to accept that fairly plain statutory language, because it would mess up the way the world views copyright. It's no surprise that the New Zealand court came to the same end result. While it would be better if the law itself were fixed, the courts seem pretty united in refusing this plain reading of the statute, because it would really muck things up. Unfortunately, by focusing on that nitpicking, Dotcom's team may have obscured the larger issues for the court.

Over and over again in the ruling, the court seems to bend over backwards to effectively say, "look, Dotcom's site was used for lots of infringement, so there's enough evidence that he had ill intent, and therefore we can hand him over to the US." That seems like a painfully weak argument -- but, again, par for the course around Dotcom. So, basically, even though it has other reasons than the lower court, this court says there's enough here to extradite:

We have departed from the Judge in our analysis of s 131. But the Judge’s conclusions on ss 249 and 228 (the latter of which we will turn to shortly) were not affected by his conclusion on s 131. Each of the ss 249 and 228 pathways depended on dishonesty, as defined, and the other elements of the actus rei of those offences. Inherent in the Judge’s finding was that dishonesty for the purpose of s 249 (and s 228) did not require proof of criminal conduct under s 131. With that conclusion we agree. It is plainly sufficient for the purposes of s 217 that the relevant acts are done without belief in the existence of consent or authority from the copyright owner. It does not need to amount to criminal conduct independently of s 249. Put another way, “dishonestly” as defined in s 217 is not contingent on having committed another offence, but is instead simply an element of the offence.

That may be a bit confusing, but basically they're saying it doesn't much matter whether or not there was actual criminal copyright infringement, because there was enough "dishonesty" to allow Dotcom to be extradited on other issues.

Again, none of this is that surprising, but it does again feel like the courts reacting to how they perceive Dotcom himself, rather than following what the law actually says. That should worry people. At this point, it seems highly likely that Dotcom's attempts to appeal to the Supreme Court will fail and that he will be extradited. Of course, then there would still need to be legal proceedings in the US -- though the judge assigned to his case has already shown little interest in understanding the nuances of copyright and intermediary liability law, so it's likely to be quite a mess here as well.

Whatever you think of Kim Dotcom, many of the legal arguments against him seem almost entirely based on the fact that people want to associate him with the actions of his users, and the fact that he didn't seem to much care about what the legacy entertainment industry thought of him. Maybe he deserves to be locked up -- but it's hard to argue that the process has been fair and based on what the law actually says.

Read More | 123 Comments | Leave a Comment..

Posted on Techdirt - 5 July 2018 @ 11:58am

The Death Of Google Reader And The Rise Of Silos

from the the-changing-web dept

I've been talking a lot lately about the unfortunate shift of the web from being more decentralized to being about a few giant silos and I expect to have plenty more to say on the topic in the near future. But I'm thinking about this again after Andy Baio reminded me that this past weekend was five years since Google turned off Google Reader. Though, as he notes, Google's own awful decision making created the diminished use that allowed Google to justify shutting it down. Here's Andy's tweeted thread, and then I'll tie it back to my thinking on the silo'd state of the web today:

Many people have pointed to the death of Google Reader as the point at which news reading online shifted from things like RSS feeds to proprietary platforms like Facebook and Twitter. It might seem odd (or ironic) to bemoan a product killed off by one of the companies now considered a major silo, but it does seem to mark a fundamental shift in the way Google viewed the open web. A quick Google search (yeah, yeah, I know...) is not helping me find the quote, but I pretty clearly remember, in the early days of Google, either Larry Page or Sergey Brin saying something to the effect that the most important thing for Google was to get you off its site as quickly as possible. The whole point of Google was to take you somewhere else on the amazing web. Update: It has been pointed out to me that the quote in question is most likely from Larry Page's interview with Playboy, in which he responded to the fact that in the early days all of Google's competitors were "portals" that tried to keep you in:

We built a business on the opposite message. We want you to come to Google and quickly find what you want. Then we’re happy to send you to the other sites. In fact, that’s the point. The portal strategy tries to own all of the information.

Somewhere along the way, that changed. Much of the change was really an overreaction by Google leadership to the "threat" of Facebook. So many of Google's efforts from the late 2000s until now seem to have been designed to ward off Facebook. This includes not just Google's multiple (often weird) attempts at building a social network, but also its infatuation with getting users to sign in just to use its core search engine. Over the past decade or so, Google went from a company trying to get you off its site quickly to one trying to keep you in. And the death of Reader feels like a clear marker of that shift. Reader started in the good old days, when the whole point of an RSS reader was to help you keep track of new stuff all over the web on individual sites.

But, as Andy noted above, part of what killed Reader was Google attempting desperately to use it as a tool to boost Google+, the exact opposite of what Google Reader stood for in helping people go elsewhere. I don't think Google Reader alone would have kept RSS or the open web more thriving than it is today, but it certainly does feel like a landmark shift in the way Google itself viewed its mission: away from helping you get somewhere else, and much more towards keeping you connected to Google's big data machine.
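Since Reader's core job -- keeping track of which items on each feed you've already seen -- comes up a few times here, a toy sketch may help. This is an invented illustration using only Python's standard library (the feed data and the `new_items` function are made up for the example), not anything from Reader's actual implementation:

```python
# Minimal sketch of the core loop of an RSS reader: parse a feed and
# surface only the items the user hasn't seen yet.
import xml.etree.ElementTree as ET

# A tiny hypothetical RSS 2.0 feed, inlined for the example.
FEED_XML = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Post One</title><link>https://example.com/1</link><guid>1</guid></item>
  <item><title>Post Two</title><link>https://example.com/2</link><guid>2</guid></item>
</channel></rss>"""

def new_items(feed_xml, seen_guids):
    """Return (guid, title, link) tuples for items not yet seen."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        guid = item.findtext("guid")
        if guid not in seen_guids:
            items.append((guid, item.findtext("title"), item.findtext("link")))
    return items

seen = {"1"}  # the user already read Post One
print(new_items(FEED_XML, seen))  # only Post Two is new
```

A real reader would poll many feeds over HTTP and persist the seen-set per user, but the decentralized idea is all here: the reader points you at other people's sites rather than keeping you on its own.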

35 Comments | Leave a Comment..

Posted on Techdirt - 5 July 2018 @ 9:05am

EU Parliament Votes To Step Back From The Abyss On Copyright For Now

from the even-against-sir-paul's-support dept

The last few days (and weeks) we've had plenty of articles about the EU's attempt to undermine the fundamental aspects of the internet with its Copyright Directive, including a snippet tax and the requirement of upload filters. Supporters of the Directive have resorted to ever-increasing levels of FUD in trying to get the EU Parliament to move the directive forward without changes -- and they did this despite quietly making the directive much, much worse and only revealing those changes at the last minute. It became quite obvious that the intent of this legislative effort was to fundamentally change the internet, to make it much more like TV -- with a set of gatekeepers only allowing carefully selected and licensed content online.

As the drumbeat got louder from (quite reasonably) concerned people around the world, supporters of the effort kept trying different strategies in support of this nonsense -- including a letter claiming to be written by Sir Paul McCartney.

I have some serious doubts as to whether or not McCartney actually understands these issues. The fact that the letter uses the RIAA's exact talking points -- including the made up phrase "value gap" (not to mention the American English spelling of "jeopardizes" over "jeopardises") -- certainly hints at someone else writing this up and asking McCartney to sign. It certainly reflects pretty poorly on someone as beloved as McCartney (who, in the past, has actually embraced the open internet to more directly connect with fans) that he would weigh in on the wrong side of such an issue.

Either way, the good news is that even with McCartney's silly letter, the EU Parliament voted against moving the current version forward by a narrow tally of 318 to 278.

As noted in that tweet, this means that these issues will be up for amendments and more specific votes in September -- meaning we've got at least a few more months of fighting to save the open internet ahead of us. And you can be sure that, despite the weak efforts by those in favor of these changes over the past few weeks, they'll bulk up their offense as well.

In particular, expect a lot more claims from the recording industry that this is all about "helping artists," which is the same nonsense they pushed around SOPA/PIPA back in the day. But anyone who's actually taken the time to understand Article 13 (and Article 11) will understand that this is the opposite of that. It is not designed to help artists. It will seriously harm many of the platforms that those artists rely on. Sites like Bandcamp and Etsy and Redbubble and Kickstarter would be at risk. This effort is little more than a misguided attempt to force Google to give record labels more money -- and the proposals' backers really don't seem to give a shit if the end result is that smaller internet players are removed from the playing field and Google is put in an even more dominant position. They're just so focused on the (misguided) idea that Google somehow owes them more money, they'll take down the rest of the internet in pursuit of that obsession.

Either way, today's vote is historic. It's extremely rare for a legislative effort that has left committee to get reopened by the wider EU Parliament. As the good folks at EDRi note, this only happened because so many of you spoke up, contacting Members of the EU Parliament and spreading the news about this online (using many of the platforms that this legislation would harm). But, there's still a lot to be done:

We'll continue to report on this and keep people aware, but sites like SaveYourInternet.eu and EDRi will have lots of news, as (I'm sure) will MEP Julia Reda, who has been leading the charge to help preserve the open internet in the EU.

40 Comments | Leave a Comment..

Posted on Techdirt - 4 July 2018 @ 9:00am

For July 4th, Make Sure To Order Your NSA-Approved T-Shirt

from the it's-the-right-thing-to-do dept

A month ago, we took a bunch of public domain/FOIA'd NSA "security posters" from the 1950s, 60s and 70s, and turned them into some pretty terrific retro style t-shirts. We're not publishing today as it's July 4th, and we thought: what better way to celebrate July 4th than to order some NSA-approved t-shirts (or mugs or hoodies)? They're real conversation starters. You can see the whole collection at our Teespring store.

Of course, we've heard from some people that they're not sure which NSA poster they want on a t-shirt or mug -- so I thought for the holiday, I'd share some information on which ones are most popular so far. At the top of the list we've got the groovy "Secure All Classified Material" design:

In close second is my favorite, "Security for the Seventies." Don't be left behind.

Those two are by far the most popular. After that, far behind those two, we have a cluster of another 5 designs that people seem to like. There's "Up Tight and Out of Sight" which looks more like an album cover than a security poster:

There's "Be Sure to Vote Security" -- which you can't really argue with:

Another popular, stylish one is the "Lock Before You Leave" footprint:

There's certainly something practical about the recommendation to "Tighten Security Practices":

And finally, who can pass up this good advice: "Do Not Discuss Classified Business Outside Authorized Areas". I have to imagine this shirt must make for quite the conversation starter...

Check them out, get a nice t-shirt, mug or hoodie -- and support Techdirt in the process.

Leave a Comment..

Posted on Techdirt - 3 July 2018 @ 7:39pm

EU Parliament's Legislative Affairs Committee Is Now Misleading Members Of Parliament In Effort To Fundamentally Alter The Internet

from the misinformation-at-work dept

We've had a bunch of posts today alone (and in the past few weeks) about the absolutely terrible EU Copyright Directive that the wider EU Parliament will vote on this Thursday. The version that will receive a vote was only just released, and it shows that JURI, the legislative affairs committee that approved it a few weeks ago, actually took a really bad proposal and made it significantly worse. As more and more people have woken up to this fact and started calling it out, it appears that JURI is going on the offensive. And I mean "offensive" in both senses of the word.

JURI sent the attached document to Members of Parliament, trying to defend its position on Articles 11 and 13. The email it sent reads as follows:

Dear Colleagues,

Before Thursday's vote on the mandate of the copyright file, you will find attached an update on the content of the text adopted in JURI, accompanied, with regard to explanations, by the text passages of the corresponding compromises. This to try to answer, once again, the massive disinformation campaign that we are experiencing.

Thank you for taking note.

Kind regards,

There is only one "massive disinformation" campaign going on, and it's by those in favor of Articles 11 and 13, and JURI is a key player in it, judging from this complete nonsense document. Let's dig in:


The paragraph 1 of the article makes it clear that the remuneration of press publishers is only an option:

“1.Member states shall provide publishers of press publications with the rights provided for in article 2 and article 3(2) of directive 2001/29/ec so that they may obtain fair and proportionate remuneration for the digital use of their press publications by information society service providers.”

This gives a lot of flexibility to the application of this provision.

Actually, it does not. Remember the original point of the EU Copyright Directive was to "harmonize" copyright laws across the EU, because trying to comply with many different copyright laws was harming the ability to produce and release content in the EU. Under the terms of Article 11, all member states now need ("shall provide") to create a brand new right for publishers. And while the directive gives "a lot of flexibility," that's because (despite requests from many!), the drafters decided to ignore pleas to give some direction on what this right would apply to. It could have only applied to works covered by copyright -- which would require more than a minimal snippet and also would require an element of creativity. But the EU Commission and JURI, bizarrely, refused to include that. Instead, they leave it up to the member states to implement as they want. That "flexibility" means that any member state can put a snippet tax on the use of a single word.

And, then, because the whole point of the freaking directive was to allow harmonization so that works could be published across the EU, whichever EU country comes up with the most ridiculous and most limiting publishers' right will "win," and everyone will have to live down to that standard in order to avoid infringing on this new right. So that "flexibility" actually argues against JURI here, because it's a large part of what makes the snippet/link tax so incredibly dangerous. Without putting any real effort toward protecting the rights of users, while allowing states the flexibility to create rights that harm the public, the directive pretty much guarantees that result.

Moreover, it is important to note that Member States shall ensure that authors receive an appropriate share of the additional revenues that press publishers receive for the use of a press publication by information society service providers.

I'm curious if JURI has done any research on how corrupt collection societies have been over time. The idea that the money will flow from publishers to authors is laughable. For years we've been collecting stories of how collection societies -- often "controlled" by large legacy industry players -- collect lots of money for copyright licenses, but magically seem to have trouble doling it out to actual creators. Creating a new such collection society, and a new right on top of existing rights, doesn't change any of that.

In order to answer those who are worried about consequences on social networks:

NO, hyperlinks are not included in this article, and it is very clear in the text:

“2a. The rights referred to in paragraph 1 shall not extend to acts of hyperlinking.”

We already discussed the whole addition of the "shall not extend to acts of hyperlinking" text this morning. It's meaningless. The rest of the Article makes it clear that states can implement this in a way that will clearly impact hyperlinks, in part because most hyperlinks contain a snippet. And, again, JURI disregarded requests by many to make it clear that snippets should have to be more than just a single word or phrase -- thus leaving that open.

NO, there will be no impact on individual users since private and non - commercial uses of press publications are not covered by the article.

“1a. The rights referred to in paragraph 1 shall not prevent legitimate private and non-commercial use of press publications by individual users.”

This is the most ridiculous part. Most "legitimate private and non-commercial" users of press publications are using platforms to share links. So, of course it will impact them. Even beyond that, it will clearly limit what news and information people are able to find online. Remember, Spain implemented this kind of snippet tax, and a comprehensive study showed that it significantly harmed small publishers. So, uh, does JURI think it can just ignore the evidence? It certainly appears to be the case.

In addition, the right established by paragraph 1 of Article 11 only applies to press publications used by “information society service providers”, which are defined in the text, and not to individual users that are excluded in the paragraph 1 (a) of Article 11.

“1.Member states shall provide publishers of press publications with the rights provided for in article 2 and article 3(2) of directive 2001/29/ec so that they may obtain fair and proportionate remuneration for the digital use of their press publications by information society service providers.”

“1a. The rights referred to in paragraph 1 shall not prevent legitimate private and non-commercial use of press publications by individual users.”

Same exact point I made above. The fact that it doesn't apply directly to end users is meaningless, since those end users pretty much all rely on the platforms -- the "information society service providers" under the directive -- that the law will impact. Who actually takes JURI seriously here?


It aims to make platforms accountable, but not all platforms. Article 13 needs to be seen in conjunction with article 2 of the draft directive.

“Article 2 (4a) ‘online content sharing service provider’ means a provider of an information society service one of the main purposes of which is to store and give access to the public to copyright protected works or other protected subject-matter uploaded by its users, which the service optimises. “

“Services acting in a non-commercial purpose capacity such as online encyclopaedia, and providers of online services where the content is uploaded with the authorisation of all concerned rightholders, such as educational or scientific repositories, should not be considered online content sharing service providers within the meaning of this directive. Providers of cloud services for individual use which do not provide direct access to the public, open source software developing platforms, and online market places whose main activity is online retail of physical goods, should not be considered online content sharing service providers within the meaning of this directive.”

Only those that are active, so that optimize the content posted online.

I've seen a few people -- including MEP Axel Voss, who is responsible for this monstrosity -- keep making this point, and it's so ridiculous as to make me question if any of these people have ever actually used the internet. This definition will absolutely apply to a ton of online platforms. That they carved out a few, very narrow and very specific exceptions, after a few organizations complained, does not mean that Article 13 is not a bulldozer coming for a large part of the open internet.

Part of what makes the internet valuable is that it's a communications medium, by which anyone can communicate with anyone. That's the wonder of user-generated content platforms -- and all of those will pretty much qualify under Article 13, because if they accept input from users, that input is going to be covered by copyright. Even the idea that "cloud services" are carved out is laughable, because note the caveat on those: it only applies if they "do not provide direct access to the public." Can you name a cloud service provider that does not include a "share" button? That's what makes the cloud valuable. If it's just to store my personal stuff, why not just park a drive in my closet?

Also, no general filtering measures are included in Article 13. The text even emphasizes that this practice is prohibited:

“1b. Member states shall ensure that the implementation of such measures shall be proportionate and strike a balance between the fundamental rights of users and rightholders and shall, in accordance with article 15 of directive 2000/31/ec, where applicable, not impose a general obligation on online content sharing service providers to monitor the information which they transmit or store.”

This is the "plausible deniability" clause similar to Article 11's "but this doesn't apply to hyperlinks" nonsense. You can say that article 13 doesn't create a requirement for upload filters all you want, but when there's literally no conceivable way to suggest you're complying without installing an upload filter, it's a meaningless assertion. Besides, the very next claim completely debunks this one:

However, active platforms need to put in place measures in cooperation with rightholders when they alert platforms about the public availability of infringing content.

1a. Member states shall ensure that the online content sharing service providers referred to in the previous sub-paragraphs shall apply the above mentioned measures based on the relevant information provided by rightholders.”

So... there are no mandatory upload filters... but "active platforms need to put in place measures" in cooperation with rightsholders. That... certainly sounds like a requirement for upload filters.

Finally, Article 13 will not lead to censorship of the entire internet.

It does not threaten freedom of expression or fundamental rights.

Who are you going to believe on this one? An EU Parliamentary committee that has already shown a fundamental inability to understand how the internet works... or David Kaye, the UN's special rapporteur for freedom of expression, who wrote JURI a long and detailed report explaining exactly how Article 13 threatens freedom of expression and fundamental rights? I'm going to have to side with the UN's free speech expert on that one.

The meme, mash-up, the gifs are already allowed and included in an existing exception and will still be after the adoption of this directive (article 5, directive 2001/29/EC)

3. Member States may provide for exceptions or limitations to the rights provided for in Articles 2 and 3 in the following cases: (k) use for the purpose of caricature, parody or pastiche

I'd like to highlight an important sneaky bit here. Note how earlier, all of the language about online platforms used the word "shall" for implementing these upload filters? Now look at the text JURI highlights here in claiming that there are "existing exceptions." See the different word? It's not "shall," it's "may provide." "May" is not "shall." And not every EU member state has actually provided for such user rights.

But there's a larger point here. We all know that determining what counts as non-infringing "caricature, parody or pastiche" is not something that is done easily. It's certainly not something done by an algorithm. In many cases it takes years-long trials and appeals, with lots of disagreement. Yet the text of Article 13 -- and apparently the geniuses on JURI -- assumes that online platforms can put in place effective measures to make those determinations (1) in a split second, (2) without any chance of getting it wrong and (3) without taking down protected, non-infringing speech.

If whoever wrote up this nonsense for JURI actually believes that, then let them create such a filter, because it doesn't -- and cannot -- exist.
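If it's not obvious why, here's a minimal sketch (plain Python, with made-up names and a toy "fingerprint" standing in for the perceptual matching real filters use) of what a matching-based filter can actually see. It answers one question -- does this upload overlap with a registered work? -- and a parody that lawfully reuses footage overlaps exactly the way an infringing copy does:

```python
def fingerprint(content: str) -> set:
    # Toy stand-in for a perceptual fingerprint: real filters match
    # transformed copies, not just exact bytes; here each "|"-separated
    # chunk plays the role of a matchable segment.
    return set(content.split("|"))

# Hypothetical rightsholder database: one registered film.
registered = fingerprint("scene1|scene2|scene3")

def filter_upload(content: str) -> str:
    # The filter's only question: how much does this upload overlap
    # with a registered work? It has no input at all for "is this
    # caricature, parody or pastiche?" -- the question the exception
    # actually turns on.
    overlap = len(fingerprint(content) & registered) / len(registered)
    return "BLOCKED" if overlap >= 0.5 else "ALLOWED"

print(filter_upload("scene1|scene2|scene3"))          # pirated copy: BLOCKED
print(filter_upload("scene1|scene2|new-commentary"))  # lawful parody: also BLOCKED
print(filter_upload("original|footage|entirely"))     # ALLOWED
```

The legal question -- parody or piracy? -- simply isn't among the filter's inputs, which is the whole problem.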

A provision was even added to ensure a complete protection of users’ data, even though GDPR naturally applies to all legislation:

2.2 Moreover, in accordance with Directive 95/46/EC, Directive 2002/58/EC and the General Data Protection Regulation, the measures referred to in paragraph 1 should not require the identification of individual users and the processing of their personal data.

Again, this shows a rather stunning level of technical ignorance. First they demand that all content be tracked to make sure it's not infringing... and then at the same time, they insist that such a system can't make use of individual data. This is, effectively, JURI telling internet platforms "you are required to base your servers on the sun and... you're not allowed to transmit data through space." These requirements are written by people who have no idea what they're talking about.

Small and medium-sized enterprises

Any platform is covered by Article 13 if one of their main purposes is to give access to copyright protected content to the public.

It cannot make any difference if it is a “small thief” or a “big thief” as it should be illegal in the first place.

Small platforms, even a one-person business, can cause as much damage to right holders as big companies, if their content is spread (first on this platform and possibly within seconds throughout the whole internet) without their consent.

This may be my favorite part of this nonsense. Remember how, just a few paragraphs above this, JURI was insisting that it wouldn't impact individuals and that everyone was ignoring that Article 13 only applied to a tiny subset of internet sites? Now, here, it's arguing the exact opposite, saying straight up that it must apply to basically all internet sites, even ones that are run by a single person. And they use "one-person business," ignoring the fact that tons of individuals will, say, post ads or donation links on their personal websites just to pay for the hosting. But that will suddenly turn them into "commercial" businesses under the umbrella of the censorship requirements of Article 13.

So, which is it, JURI? Does Article 13 apply to these platforms or not? Ah, it totally does:

In view of such a small business potentially causing such a tremendous damage to right holders, the compromise text does not foresee any exemption for SMESs.

Seriously, it feels like whoever wrote this portion of the document apparently has never met whoever wrote the earlier part of the document trying to play down how many sites it would impact. Someone should introduce them to each other.

However, the text provides safeguards that will benefit SMEs. Measures must be appropriate and proportionate.

We cannot demand the same thing from an SME as from Youtube.

Since the measures may be very different in nature, from the content recognition system to a simple notification system, there are many possibilities for SMEs to find measures corresponding to their means and size.

Okay, so I run a small platform. You tell me that the measures must be "appropriate and proportionate." I have no freaking clue what that means for me. I don't need to implement ContentID, which is good because ContentID cost more to build than Techdirt probably makes in a century. But... as a small site I'm left with zero understanding of what I need to do, other than block the EU or hire some very, very expensive lawyers who probably still can't stop me from getting dragged into court.

This kind of uncertainty is going to be a massive drag on smaller sites.

Finally, solutions compatible with the Directive already exist on the market, are affordable for SMEs and the market will continue to develop in this direction.

This is what should be known as "the Audible Magic lie." Supporters of Article 13 love to point to Audible Magic -- which makes a filter platform for music -- to claim that there are products on the market. There are a few major problems with this claim. First of all, the idea that they are "affordable" is laughable. As I've noted in the past, we spoke with a smaller platform that said Audible Magic quoted it a price of approximately $50,000 per month. That's over half a million dollars a year. And this was not a large site. Smaller sites don't have an extra half a million dollars lying around to hand off to some company for a tool that doesn't work very well and serves no real purpose other than to annoy their users and drive them elsewhere.

Second, such filters may exist for music (and possibly video), but that's not the case for lots of other content. Photos? Not really. Text? Nope. Yet, Article 13 applies to everything.

JURI's attempt to salvage the horrible internet-destroying directive it passed a few weeks ago is confused, ignorant and disingenuous. Hopefully MEPs don't buy it. If you haven't yet, NOW is the time to tell MEPs to #SaveYourInternet. Because if they don't, we're going to have a very, very different internet in the near future. And the public isn't going to like it.


Posted on Techdirt - 3 July 2018 @ 10:35am

Axel Voss, MEP Behind Awful Internet Destroying EU Copyright Directive, Tries To Defend His Plan

from the and-fails-miserably dept

Axel Voss, the German Member of the EU Parliament in charge of pushing through the absolutely awful EU Copyright Directive, is apparently (finally) feeling some of the heat from people speaking up about just how terrible Articles 11 (link tax) and 13 (mandatory upload filters) will be for the internet. He's put out a video attempting to defend the plan. Even if you don't speak German, I'd recommend watching the video to see his smirk throughout the whole thing. He does not seem to care, nor does he seem to understand the actual implications of what he's doing. Considering that many have tried to explain this to him already, I doubt that we will change his mind, but it's worth exploring just how clueless he appears to be on this issue, and why that should worry Europeans about the future of their internet.

We have at the moment an extreme imbalance in the whole copyright system, on the platforms everything is up loaded, but there is no remuneration of the concerned author, this is getting more by number and that is why I think it is urgently necessary to adapt the copyright to the digital environment.

What, exactly, is the "imbalance"? This is an important question, because the answer is that there is no "imbalance." We live in a time when more content than ever before in history is being created. But it goes beyond that: thanks to the internet, more content creators than ever before in history are able to create and release their own content. And more creators than ever before in history are able to make money from their creations. The idea that "there is no remuneration" isn't just nonsense, it suggests a level of ignorance so high on the part of Voss that he shouldn't be allowed anywhere near questions related to copyright. Tons of different platforms -- the very platforms which will be hit hardest by this nonsense -- are instrumental in building new audiences for artists and helping them get paid.

Bandcamp, Patreon, Kickstarter, Vimeo, Medium, Etsy, DeviantArt, Shapeways, Wordpress, Wix, Lulu, PledgeMusic, Artistshare, Blurb, Scribd, Smashwords, Redbubble, CreateSpace. These are all platforms that are helping a variety of creators create, build up an audience and make money. Yet all of them will be hit hard by Article 13's ridiculous mandatory upload filters, which will in turn do tremendous harm to artists, taking away or greatly limiting their ability to use these platforms. The idea that there's some "imbalance" that Voss can magically fix is nonsense. He's been hearing too much from a few old school legacy businesses that failed to adapt to the internet and now blame Google. And his response is to saddle tons of internet platforms with an unworkable system that will harm all sorts of content creators. Meanwhile, his main target, Google, already has in place a system that has paid out billions of dollars to creators. Yet he continues to lie and claim there is no "remuneration"?

How does anyone take him seriously?

Meanwhile, nothing in the Copyright Directive is likely to get actual artists paid, so whatever false imbalance he sees isn't "cured" by his plan. Google already has an upload filter for YouTube, so nothing changes there. The link tax has already been proven to be an utter failure in Germany and Spain and hasn't gotten legacy publishers any more money -- but has taken away some of their traffic (making them lose money).

So, again, why can't Axel Voss explain what this mythical "imbalance" is?

After that, Voss talks specifically about upload filters, and claims that people complaining about them are engaged in "fake news." Seriously.

This 'nice' fake news campaign that is done by the big platforms, with key words such as 'censor machine' or upload filter, etc so that everybody jumps up without ever having done a reflection on our systematic we established here. If you really look at it, it only concerns platforms, 1 to 5 per cent at all from the global internet and also only those that actually publish copyright protected content, by the clicks they earn money without dripping a single contribution. That we try now is to establish a recognition software for copyright protected content.

We already discussed much of this in responding to PRS's silly attempt at "myth busting" claims about these upload filters. The fact that the law only targets platforms doesn't mean that it doesn't impact the users of those platforms.

Again, a filter would be a massive expense for many of the platforms I named above, and for many, it would make no sense at all. Yet, under the law, many would be obligated to try to build or buy such a filter. Others might simply stop operating in the EU altogether.

As for his attempt to downplay who it impacts, again, this shows just how incredibly ignorant Voss is of how the internet works. As we've mentioned in the past, under Article 13, a site like Tinder would be required to invest in an expensive copyright filter to scan all the images people upload. After all, it publishes lots of copyright-protected content, and it earns money on those images as a commercial business. But is anyone actually going to Tinder for copyright infringement? Of course not, but Voss doesn't seem to care or notice.

And, thanks to the way copyright works now, every platform that allows user-generated content involves publishing "copyright protected content," and, assuming it has ads or collects money in any way, it's commercial. So this includes forum sites, for example. Are they going to have to pre-scan every comment to make sure it doesn't infringe on some copyright? Voss seems to think only in terms of the largest platforms, and doesn't give a second thought to the fact that much of the internet is for communicating with one another -- and that will involve platforms where users upload content as part of communicating with others. All of that content needs to be scanned?

Voss then responds to other MEPs speaking out against the Directive, calling it "nonsense."

To 99,9 per cent do I assume that they have not once read article 2 and article 13 together. Then I have to say if you don’t pay attention to what we do here, then I have to declare it as nonsense. When you take a look at what we try to achieve, to apply after all copyright, if we want to abandon this or if the German colleagues want to abandon this all, so they need to say so, but only to harp on about it, because it is now ‘en vogue’ to use that word, this I think is wrong.

This is both insulting and ridiculous. The complaints are well thought out and argued and point out just how little Voss actually understands about the damage he's about to do to the internet, and he insults them and brushes it off pretending that "99.9%" haven't even read the Directive. This is not about what's "en vogue," this is about stopping Voss from fundamentally mucking up key parts of how the internet works because a few legacy industries went crying to him over their own inability to adapt to the changing world.

And that brings us to the whole link tax bit:

Neighbouring rights should in principle support the press work, the work of journalists is in a certain way accompanied, so to say the analogue platform for their content; it is legally and factual secured and there is an economical responsibility.

This now completely put into question, due to the fact that there are platforms that use simply the achievements of the publisher without paying anything. And by this the whole press system in a democratic structure becomes dependent from platforms. This is something we don’t want to accept this. Therefore we need to ask ourselves the question if we want an independent press in our democratic society, also financially independent and does it represent a value. And if we believe in this value we should look into ensuring it.

This... not only misunderstands how the internet works, but also how media works. Again, the target here is Google News, which Voss incorrectly seems to think is somehow responsible for many newspapers' economic struggles. But that's nonsense. Google News drives traffic to these websites. And if they don't think that's valuable, it literally takes five seconds for any publisher to opt out of having its stories appear in Google News. But they won't do that, because they know it's valuable. Destroying Google News won't save news publishers.

He continues, arguing that the link tax isn't really a link tax:

Everybody that wants to create link for a private purpose can continue to do so, this part we excluded precisely, but those who gain again money through this, by a commercial use, they should ask if they can do it or not.

I don’t understand the whole excitement, this is common and part of our value and justice system and to transfer it to the internet and to get this impression that nothing is allowed now, no single person won’t be touched by this. The contrary, we will make it possible that each one can up load content, can use it without being in a legal obligation and this reform brings this positive aspect.

Got that? If you make money by linking, you need to ask permission first. That's... insane. It completely and fundamentally changes the way the internet works. The whole freaking idea behind the internet is that you can link to anywhere on the internet... and Voss just wants to completely toss that out the window. And then he complains that he "doesn't understand the whole excitement."

Voss doesn't seem to understand that lots of "amateur" and "hobby" sites put up advertisements, or take donations or crowdfunding, just to fund the site. But, those sites "gain money" and thus are "commercial" under his confused view of the world -- and therefore can no longer link without first getting permission. Again, this is so mind-blowingly backwards, and goes against everything that the internet itself was built around. And for what purpose? Because a few old school newspapers can't adapt and are demanding Google pay them money.

What becomes abundantly clear is that Axel Voss has no business being in charge of copyright reform in the EU. He doesn't understand copyright. He doesn't understand the internet. And yet, he's very, very close to fundamentally changing the thing he doesn't understand, based on ideas he doesn't seem to comprehend.

On Thursday, the EU Parliament will vote on this awful plan, and it's incredibly important to contact MEPs to let them know not to let Voss's ignorance ruin the internet.


Posted on Techdirt - 3 July 2018 @ 9:23am

Latest Text Of EU Copyright Directive Shows It's Even Worse Than Expected: Must Be Stopped

from the the-internet-is-at-risk dept

One of the oddities of the vote a few weeks back in the EU Parliament's JURI Committee concerning the proposed EU Copyright Directive was that there was no published text of what they were voting on. There were snippets and earlier proposals released, but the full actual text was only just released, and it's not in the most readable of formats. However, what is now clear is that the JURI Committee not only failed in its attempts to "fix" the many, many, many problems people have been raising about the Directive, but actually made many of those problems worse -- including saying that online platforms become legally responsible for any copyright infringement on their platforms. This new text effectively says that the internet should only be a broadcast medium, and no longer allow for open user platforms.

MEP Julia Reda has an excellent analysis of what's problematic about the released text, but I want to focus in on a few of the more bizarre changes in particular. First, on the whole Article 11 "link tax" bit, JURI apparently thought they could quiet down the protests by adding a line that says:

2a. The rights referred to in paragraph 1 shall not extend to acts of hyperlinking.

So now supporters of Article 11 will point to this new line and say "see?!? it's not about a link tax." Which would be great... if the rest of the text actually lived up to that. Unfortunately, basically every bit of the rest of Article 11 undermines it. Because it still creates a license requirement on a snippet of any length, and most URLs these days include a "snippet" of the headline of an article within the URL itself. Unless everyone starts stripping out the text that includes such snippets -- making URLs significantly less useful -- those links will still run afoul of this licensing/tax requirement. Thus, declaring that a hyperlink is not covered is meaningless if the rest of the directive can only be read in a manner that would include nearly all links.

Once again, it appears this amendment was written by someone who has no functional understanding of how the internet works, and thus does not realize how badly drafted this proposal is. It's the kind of thing a non-technically-inclined lawyer would write in response to people calling this a "link tax." "Oh," they would say, "well, let's just say it doesn't apply to links," even if any reading of the directive means it absolutely must apply to most links -- especially any that use any sort of descriptive text. On top of that, if you share a link on a platform like Twitter or Facebook that automatically pulls in some snippet text, you're now violating the law as well.
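To see why bare links get caught anyway, consider how most news URLs are built. A quick sketch -- the URL here is a made-up example, but the pattern is near-universal:

```python
from urllib.parse import urlparse

# A made-up but typical news URL: the path ends in a "slug" that is
# just the headline with hyphens between the words.
url = "https://example-news.eu/2018/07/eu-parliament-votes-on-copyright-directive"

# Pull the last path segment and undo the hyphenation.
slug = urlparse(url).path.rsplit("/", 1)[-1]
headline_snippet = slug.replace("-", " ")
print(headline_snippet)  # eu parliament votes on copyright directive
```

Share that URL and you've reproduced a snippet of the headline, so a "snippet of any length" rule reaches the link itself, whatever the hyperlink carve-out says.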

Another change made by JURI is much, much, much more concerning. This is on Article 13, the part about mandatory upload filters. For unclear and unknown reasons, JURI decided to expand Article 13 to make it even more ridiculous. First, it redefines "online content sharing services" to completely wipe out any intermediary liability protections for such platforms. The most standard form of protecting platforms from liability is to note (correctly!) that a platform is just a tool and is not the publisher or speaker of works posted/uploaded by third-party users. That's sensible. You can then (as the EU already does) put certain restrictions on those protections, such as requiring a form of a notice-and-takedown regime. But the fundamental, common-sense idea is that the platform is the tool, and not the actual "speaker" of the third-party content.

But JURI wiped that out. Instead, it explicitly states that any content shown via online platforms is the responsibility of those platforms, by saying that such platforms "perform an act of communication to the public" in showing the content uploaded by users. This is a massive change and basically wipes out all intermediary liability protections for platforms:

Online content sharing service providers perform an act of communication to the public and therefore are responsible for their content. As a consequence, they should conclude fair and appropriate licensing agreements with rightholders. Therefore they cannot benefit from the liability exemption provided...

That's... bad. It's much, much worse than the original text from the commission, which made it clear that intermediary liability protections in the E-Commerce Directive still applied to such platforms. Here they explicitly remove that exception and say that platforms cannot benefit from such protections. As Reda points out, under this reading of the law almost any user-generated site on the internet will be in violation, and potentially at significantly greater legal liability than the various "pirate" sites people have complained about in the past:

By defining that platforms – and not their users – are the ones “communicating” uploaded works “to the public”, they become as liable for the actions of their users as if they had committed them themselves. Let’s imagine a company that makes an app for people to share videos of their cats. If even one user among millions uses the CatVideoWorld3000 app to record a Hollywood movie off a theater screen rather than their kitty, that’d be legally as bad as if the business’ employees had committed the copyright infringement themselves intentionally to profit off of it. The Pirate Bay, MegaUpload and Napster were all much more innocent than any site with an upload form will now be in the eyes of the law.

And that's not all that JURI did. It also outlawed image search with an amendment. No joke.

Use of protected content by information society services providing automated image referencing

Member States shall ensure that information society service providers that automatically reproduce or refer to significant amounts of copyright-protected visual works and make them available to the public for the purpose of indexing and referencing conclude fair and balanced licensing agreements with any requesting rightholders in order to ensure their fair remuneration. Such remuneration may be managed by the collective management organisation of the rightholders concerned.

This was not discussed previously and not recommended by the EU Commission. But, what the hell, while they're outlawing Google News, why not outlaw Google Images in the same shot.

There's a lot more in the text, but it's really, really bad. Effectively, the document envisions a world in which everything on the internet is "licensed" and any platform will be legally liable for any content on its platform. What you get in that world is not the internet -- the greatest communications medium ever made. What you get is... TV. A limited broadcast medium only for those who are pre-checked by gatekeepers.

It is incredibly important that the EU not move forward with this Directive. Contact the EU Parliament now and tell them to #SaveYourInternet before they vote on this proposal this Thursday.


Posted on Techdirt - 3 July 2018 @ 3:23am

NY Times, Winner Of A Key 1st Amendment Case, Suddenly Seems Upset That 1st Amendment Protects Conservatives Too

from the what-the-actual... dept

Over the weekend, the NY Times' Adam Liptak, who usually does quite an excellent job covering the Supreme Court for the NY Times, published an absolutely bizarre long feature piece, claiming that conservatives had "weaponized" free speech, and that liberals who had been the leading champions of free speech for decades were now sometimes regretting their positions on free speech, because conservatives were now using it also. The whole piece is mind-bogglingly stupid for a whole variety of reasons, though Rob Beschizza probably put it best, noting:

This NYTimes op ed about "weaponized" free speech somehow manages to crudely misrepresent every opinion on free speech held by every faction in play in American politics. It is a diamond of failure glinting horribly from every perspective

That's about right. I'd also argue that it's a masterclass in confirmation bias and cherry-picking. It starts with the thesis of conservatives weaponizing free speech, and then tries to build a structure around that, ignoring any and all evidence to the contrary. I should also note that just about any argument that tries to lump a giant group of the population together as "conservatives" or "liberals" or "right wing" or "left wing" is generally going to be nonsense and cherry picking, rather than anything useful. And this article is no exception.

The key -- incredibly stupid -- underlying argument is that everything went downhill for "liberal" free speech when corporations started winning First Amendment cases. This is a really bad take -- and to show why, I'll point you to New York Times v. Sullivan, one of the most important defamation cases ever decided by the courts, which used the First Amendment to make sure that the press had very strong protections to print what it wanted, without allowing angry litigants to take it down with defamation lawsuits. That case is only won by the NY Times -- the same publication that published this latest silly article -- because a corporation (the NY Times Company) is able to have First Amendment rights. Without corporations getting First Amendment rights, defamation cases would sink any publication doing serious reporting.

That this same NY Times is now publishing this tripe is a travesty and spits on that legacy.

Besides, if you're actually interested in the issue of corporations using the First Amendment, a much better take on this topic came just a couple of months ago in an On The Media episode called How Corporations Got Rights. It takes a much more reasonable look at this issue and highlights why we shouldn't be so quick to complain about corporations having fundamental rights.

But Liptak's piece has none of the nuance of the On The Media segment, and instead hits you over the head again and again with anecdotes falsely arguing that free speech is now being "weaponized" against the most vulnerable, ignoring how frequently the First Amendment still is used every single day to protect the vulnerable. But that's ignored because it also protects people or ideas that the NY Times thinks "liberals" don't like:

“Because so many free-speech claims of the 1950s and 1960s involved anti-obscenity claims, or civil rights and anti-Vietnam War protests, it was easy for the left to sympathize with the speakers or believe that speech in general was harmless,” he said. “But the claim that speech was harmless or causally inert was never true, even if it has taken recent events to convince the left of that. The question, then, is why the left ever believed otherwise.”

Some liberals now say that free speech disproportionately protects the powerful and the status quo.

“When I was younger, I had more of the standard liberal view of civil liberties,” said Louis Michael Seidman, a law professor at Georgetown. “And I’ve gradually changed my mind about it. What I have come to see is that it’s a mistake to think of free speech as an effective means to accomplish a more just society.”

Basically, Liptak found some liberals who are now upset because they realize conservatives get their speech protected by the First Amendment too. The response to that should be "duh" or just (in true free speech fashion) plain mockery. Because, of course, the idea is that the First Amendment protects everyone's speech. That's the point. That some people on one side of the coin or the other are upset that it supports expression by those with opposing viewpoints really only highlights one thing: those people were never really First Amendment supporters. They were just supporters of having their own speech protected.

There may be reasonable questions to ask about the borderline between expression and action, or whether the current (extremely limited) list of exceptions to the First Amendment is appropriately bounded. But the idea that one side has "weaponized" free speech, or that some people think free speech is being used to protect ideas they don't like, is a silly concept and not one that deserves a serious think piece in the NY Times.

I mean, why give time to this nonsense without pointing out how silly it is:

To the contrary, free speech reinforces and amplifies injustice, Catharine A. MacKinnon, a law professor at the University of Michigan, wrote in “The Free Speech Century,” a collection of essays to be published this year.

“Once a defense of the powerless, the First Amendment over the last hundred years has mainly become a weapon of the powerful,” she wrote. “Legally, what was, toward the beginning of the 20th century, a shield for radicals, artists and activists, socialists and pacifists, the excluded and the dispossessed, has become a sword for authoritarians, racists and misogynists, Nazis and Klansmen, pornographers and corporations buying elections.”

Is there any evidence to back up the idea that it is "mainly" used as a weapon of the powerful? Because that would be interesting. Instead, there are just anecdotes. Anecdotes of assholes using the 1st Amendment to be assholes. But that's not actually new. Neo-Nazis famously used the First Amendment to protect their right to march in Skokie decades ago.

Furthermore, notice how MacKinnon lumped "pornographers" in with misogynists, racists and Nazis above. Yet there are lots of well-known First Amendment cases involving pornography that no one would describe as "conservatives" or "the powerful" wielding the First Amendment. Rather, it was the reverse. Larry Flynt winning his Supreme Court case against Jerry Falwell was not an example of a "conservative" win for free speech. So why does this article now lump pornography in on that side of the ledger, as if it were now conservative?

Again, Liptak is an excellent Supreme Court reporter, who does a much better job explaining the details and ins and outs of various cases than many other legal reporters. But this entire piece is garbage -- something I can say because I'm protected by the First Amendment. The NY Times has made a mockery of its own legacy as a First Amendment beacon in publishing such nonsense.


Posted on Techdirt - 2 July 2018 @ 9:25am

Music Industry's Nonsense 'Myth Busting' About EU's Censorship Machines Is Basically Saying 'Nuh-uh' Repeatedly

from the that's-not-myth-busting dept

A few weeks back we busted the bogus myth busting by the big EU publishers who were trying to fight back against people explaining why the proposed EU Copyright Directive's Article 11 "link tax" was so damaging. That myth buster was so full of nonsense that it was easy to take apart. However, at least the publishers tried to explain their position, even if they failed miserably. Now, on the other bad part of the Copyright Directive -- Article 13, with its mandatory upload filters and censorship machines -- the UK's music collection society PRS for Music (an organization which, among other things, used to call up random small businesses and, if it heard music in the background, demand a license, and which once claimed you need a performance license to play music to your horses) has come out with an Article 13 myth buster purporting to counteract what it claims are myths about that part of the Copyright Directive.

This one is pretty easy to debunk as well, but mainly because... PRS doesn't even bother to make any arguments or cite anything. It basically just makes a claim about what critics are saying and then says "nuh uh" and moves on. Here's the very first "myth" and PRS's attempt at myth busting:

FALSE: Article 13 will censor the internet

It will not lead to censorship of the internet. This argument is not based in fact, and easily dispelled by even a superficial review of the proposed directive. It is being propagated by companies which, ultimately, have a vested interest in not sharing their income fairly with creators.

That's it. No explanation. No citation. Nothing. So, let's myth bust PRS's myth busting. It says "a superficial review of the proposed directive" will show that it won't censor the internet. And, I mean, if you want to be pedantic, of course Article 13 -- as a thing itself -- won't be censoring the internet. But that's not the concern that people have raised. The issue is that Article 13 will force many internet platforms into censoring perfectly legal speech across the internet.

And, because PRS couldn't bother to do so, I'll post the actual text of Article 13 that's so problematic. Here's straight from the proposal that was approved by the EU Parliament's JURI Committee a few weeks ago:

In the absence of licensing agreements with rightsholders online content sharing service providers shall take, in cooperation with rightholders, appropriate and proportionate measures leading to the non-availability of copyright or related-right infringing works or other subject-matter on those services, while non-infringing works and other subject matter shall remain available.

And, yes, it includes the trailing point saying that "non-infringing works" can remain available, but it does not bother to explain how you can accomplish the first part while guaranteeing the second. Basically everyone who understands how the technology works recognizes that the only way to have measures that block the availability of copyright-infringing works is to put in place a mandatory copyright filter. Those are (1) incredibly expensive and (2) incredibly bad at not taking down non-infringing works. YouTube has spent more money than anyone else on its ContentID filter. In 2016, YouTube revealed it had spent $60 million building ContentID (and it's probably significantly more today), and anyone who has any experience with ContentID knows that it routinely takes down non-infringing works -- multiple times every day.
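To see why filters and user rights collide, here is a deliberately simplified sketch (all names and data hypothetical; real fingerprinting is fuzzy matching, not exact hashing) of the structure of any fingerprint-based upload filter: it compares uploads against a registry of rightsholder-supplied fingerprints and blocks on a match. Nothing in its inputs represents context, parody, or fair use.

```python
# Hypothetical sketch of a fingerprint-style upload filter.
# A content hash stands in for a real perceptual fingerprint.
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in for an audio/video fingerprint: here, just a hash."""
    return hashlib.sha256(data).hexdigest()

# Registry of fingerprints supplied by rightsholders.
registry = {fingerprint(b"goal celebration broadcast clip")}

def filter_upload(data: bytes) -> str:
    """Block anything whose fingerprint matches a registered work.
    Note the signature: there is no parameter for context, parody,
    criticism, or any other copyright exception."""
    return "BLOCKED" if fingerprint(data) in registry else "ALLOWED"

print(filter_upload(b"goal celebration broadcast clip"))  # BLOCKED
print(filter_upload(b"original home video"))              # ALLOWED
```

A parody or commentary reusing the registered clip looks identical to the filter as an infringing re-upload. Systems like ContentID are far more sophisticated matchers than this sketch, but the structural problem is the same: the match/no-match decision has no channel for legal context.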

And yet PRS and other supporters of Article 13 want to imagine that every other online platform -- literally none of which have the resources that Google/YouTube put behind ContentID -- will magically build or buy a better filter that won't censor non-infringing material? What sort of fantasy land are they living in? Because the whole point of Article 13 is to force companies to put in these filters (and to buy licenses), if a platform implements a filter that actually respects fair use and other user rights, you can be damn sure that PRS will be one of the first in line to sue. I mean, it's already sued other platforms.

So, yes, Article 13 will lead to widespread censorship and PRS must know this or they're even more clueless about technology than I had previously thought.

FALSE: Article 13 will restrict internet users

Article 13 imposes no obligation on users. The obligations relate only to platforms and rightsholders.

Actually, Article 13 makes it easier for users to create, post and share content online, as it requires platforms to get licenses and for rightsholders to ensure these licences cover the acts of all individual users acting in a non-commercial capacity. In short, most users no longer need to obtain separate licences.

Again, this is an incredibly intellectually dishonest framing by PRS. It is correct that Article 13's target is platforms... but it's the users who use these platforms. And if the requirements of Article 13 -- as laid out above -- force these platforms to crack down on how users use the platform (which it totally does), then clearly it will restrict internet users. They won't be able to do the things they want to do on the platforms they have used for years to post their thoughts and writings and memes and music and videos.

FALSE: Article 13 threatens online blogging platforms, code-sharing services and encyclopaedias like Wikipedia

Article 13 would only be applied to certain types of services, specifically those whose main purpose is to make a profit from storing and making available copyright-protected works.

Services operating on a non-commercial basis, or where the presence of copyright works are supplementary, such as blogging sites, are specifically excluded from the requirements to obtain a licence.

Online encyclopaedias and open source software services are also specifically excluded.

Once again, the intellectual dishonesty here is astounding and obnoxious. As PRS knows damn well, the original draft of Article 13 didn't carve out "online encyclopedias and open source software services." The issue was that both Wikipedia and Github pointed out how Article 13 would make their sites illegal... and when the technically clueless drafters of Article 13 realized that might be a problem, rather than go back and rethink the entire approach, they just wrote a carve-out for each of those sites, and only those sites. There are thousands of sites (many of which were way too small to get anyone's attention) that will be impacted by Article 13. Those sites didn't get attention for the problems Article 13 creates for them, and no carve-out has been written for them.

Pointing to the two specific carve-outs that were written in response to two specific complaints doesn't prove that Article 13 won't threaten various platforms. Also, the definitions of the carve-outs are so specific and so narrowly tailored to Wikipedia and Github as they are today, it could restrict both sites in their ability to grow and launch new services for fear of suddenly being dragged back under Article 13's draconian requirements.

FALSE: Article 13 will stifle creativity

This is an argument which is as old as copyright itself and time and again has proven to be untrue.

In fact, the opposite is true, as the proposed changes will benefit creators of user generated content. Under the current system, it is the user who is required to clear the individual rights for the creation.

Once again, this is a "nuh uh" response, rather than any actual rebuttal. It's also wrong. The reason why Article 13 will almost certainly stifle creativity is that it will severely limit the ability of thousands of platforms that content creators rely on today to create, to distribute, to promote, and to sell their works. Can Bandcamp operate under Article 13? It'll be tough and phenomenally expensive. Patreon? Again, incredibly expensive. Kickstarter? Same. Many of these platforms may just leave the EU altogether or put severe restrictions on how the platform is used. That is seriously going to stifle creativity.

Contrary to what the old school recording industry wants you to believe, content creators and the various internet platforms are aligned in their interests. The platforms enable creators to do so much -- often without having to go through the old school gatekeepers. So, yes, it will stifle a massive amount of creativity. Of course, it's the kind of creators that PRS rarely deals with, so why should it care?

FALSE: Article 13 will make memes illegal

Parodies, such as memes, are already covered by exceptions to copyright, and nothing in the proposed Article 13 will allow rightholders to block the use of them.

Parodies are widely available on sites which already deploy content recognition tools and there is no evidence that this process has been detrimental. Currently, it is possible for there to be disputes between uploaders and rightholders as to whether a specific usage is covered by an exception – but the Copyright Directive will not alter this.

Right. Parodies are an exception to copyright. But none of Article 13's backers explain anywhere how the "appropriate measures" to block infringing works can magically determine what is, and what is not, parody. Or a meme. Indeed, we've already seen over and over again that ContentID -- the one system that has had the most time and investment -- has no clue how to figure out "exceptions to copyright" (better described as user rights).

The point of the discussion about memes is not that Article 13 "outlaws" memes, but that is the effective result. Because it will require platforms to put in place filters that make any copyright covered work "non-available," that will naturally sweep up lots of memes, which make use of the work of others. And until we can teach computers to tell what is and what is not fair use or parody or whatever, that's going to block a lot of memes.

And that's not even touching on the point that it's not exactly settled law that "memes are parodies" and are "already covered by exceptions to copyright." There have been quite a few lawsuits over copyright in memes, and it's not at all a settled area of law.

FALSE: Article 13 will kill remixes

Services can be licensed to support remixes – and many already are. Look no further than the recent announcement about Mixcloud’s new licence.

In most cases, mash-ups are covered by existing copyright exceptions (such as parody, criticism, citation). So, they can be created and posted by citizens on the basis of these exceptions.

Notice the nice little twist in the answer here? "Services can be licensed to support remixes." The complaint that people were making is not about services, but about the fact that the standard way in which remixes are made -- amateurs playing around, remixing stuff for fun, and uploading it to share -- will no longer be allowed. It will only happen on the few platforms that pay a massive license to allow remixes. And those platforms will have to pay for that license somehow, meaning they're less likely to let amateurs and those just messing around use their services for remixes. It seems quite likely that Article 13 will greatly diminish the number of remixes out in the world.

FALSE: Article 13 will be too expensive for small business and start-ups

Article 13 clearly states that Member States shall ensure that the measures shall be appropriate and proportionate, meaning that the types of measures which are deployed must reflect the specific size and scope of the service. The market already provides a wide range of content identification systems at a wide range of prices.

Article 13 could clearly state that every EU citizen gets a baby zebra and it would be just as meaningless. A content upload filter is expensive, period. Again, YouTube spent $60 million building its filter. Supporters of Article 13 often like to point to Audible Magic, which sells upload filters, as proof that it's not that expensive. But when we spoke to small and medium sized platforms that have discussed using Audible Magic's filters, it was not at all cheap. One small platform was pitched Audible Magic at around $50k per month. That's over half a million dollars a year. And this was not a large platform. Half a million dollars would bankrupt it. So is PRS saying that if a platform can't afford that... it doesn't have to do anything? I highly doubt that, especially given PRS's litigious past against internet platforms.

FALSE: Article 13 is in breach of fundamental privacy rights

Article 13 specifically states that the measures should not require the identification of individual users and the processing of their personal data and should be in full compliance with the General Data Protection Regulation.

Who do we believe? The company that calls up businesses and demands licensing fees if it hears music playing in the background, or the UN's special rapporteur on free speech who said that Article 13 is in clear breach of human rights laws? Yeah, I'm going to go with the latter. As for the specific question on privacy, it seems quite likely that Article 13 will lead to many GDPR violations. Because if it's requiring platforms to take "appropriate measures" to stop users from re-uploading the same works, it's going to need to keep track of users, and what they upload. That's going to be a lot of data -- and some of it may be quite sensitive.

FALSE: Article 13 will result in a monolithic blocking tool to stop content

At the core of this concept is the idea of a super app, which can identify every piece of copyright content across billions of uploads in every EU country – and then block it.

Even putting aside the impracticality of such a concept, the argument is flawed for the simple reason that it assumes creators and producers are incentivised to block access to their works.

Centuries of copyright have proven this is not the case. Indeed, one of the core principles of copyright is that it incentivises the licensing of works. Requiring online platforms to obtain a licence will not lead to mass-scale blocking of copyright works online.

So... wait. Now PRS is admitting that content upload filters are "impractical"? So... why is it supporting a law where the only way to satisfy the law is to implement such a filter or to force every internet platform to buy a license.... oohhhhhhh. I see. PRS doesn't think Article 13 is a problem, since instead of allowing users to upload, PRS is hoping that every platform will just buy a license and only allow uploads from artists it represents.

In other words, PRS is looking forward to the internet being locked down like radio: a broadcast medium, rather than a communications medium.

As for the claim that creators aren't incentivized to block works -- we've heard that claim before. It's also intellectually dishonest. Yes, most content creators do want their works out there for people to experience. But the copyright holders -- often different than the content creators, mind you -- seem pretty eager to threaten and sue lots of internet platforms for daring to have any of that content ever available without a license.

So, yes, clearly it will be used for censorship. We have dozens of stories on Techdirt alone about copyright -- and filters like ContentID in particular -- being used for censorship. And it goes beyond just professional content creators. We've seen rich billionaires using copyright to censor critics. We've seen Turkey's President Recep Tayyip Erdogan use copyright to censor critics.

Anyone who thinks copyright isn't and won't be used to censor isn't paying attention or is incredibly dishonest. So, PRS, which one is it?

The EU Parliament votes on the Copyright Directive later this week. Don't let PRS for Music and its intellectually dishonest bullshit win out. Tell the EU Parliament to Save Your Internet.


Posted on Techdirt - 30 June 2018 @ 12:00pm

Awesome Stuff: The Fidget Capsule

from the fidget-fidget-fidget dept

It's been a while since we've done an "Awesome Stuff" post, but we were sent a prototype of a new fidget device called Fidget Capsule and couldn't resist writing it up. You may recall, of course, that "fidget" devices were all the rage for a year or so, starting with the famed "Fidget Cube" and then being overtaken by the "fidget spinner" which was an astoundingly popular fad for a very brief period of time (anyone still use a fidget spinner? I didn't think so.) Of course, that hasn't stopped people from fidgeting. I will admit, without shame, that my desk has probably over a dozen different fidget devices -- as well as magnets and pens and other things that aren't technically designed for fidgeting, but that's exactly how I use them.


Even though I'm bizarrely fascinated by all sorts of fidget devices, I wasn't entirely sure if the world needed another one. However, the Fidget Capsule is pretty amazing. As you can see in the video, it's pretty straightforward and simple. Unlike the Fidget Cube, there's just one thing you can do with it: squeeze it. But, it does that very, very well. It's basically silent, and kind of perfect as an idle fidgeting device. I've actually found that many fidget devices are... not that good for fidgeting. You may start playing with it, but if you really need to concentrate on something, the fidgeting stops. If anything, I've found that Fidget Cubes are great for when I'm walking around, but not when I'm working. Fidget keyring chain toys are probably still my overall favorite -- as they're also tiny and easy to just carry around all the time with you (especially since they're basically just a keyring), but the Fidget Capsule works great at my desk while I'm working or on a phone call.

It feels very solidly built, and I've dropped it a few times and don't see it being damaged at all. It certainly feels like it will last quite a while. The prototype they sent me is the red one, and I now see that they're actually selling them in batches, with each one having a different resistance. The one they sent me apparently has 6 lbs of resistance, which feels pretty good. I have no idea how the other levels would work (they come in 2lbs, 4lbs, 8lbs, and the special hardcore one at 20lbs). If you don't care at all about fidget toys then clearly these won't be for you, but if you're like me and get somewhat obsessive about them, it's pretty cool.

Potential downsides: unlike most other fidget toys, this one is pretty strictly a "desktop" or "tabletop" fidget device. You probably don't want to carry it around with you. It's a bit bulky and pretty heavy (again, solid metal construction). It could fit in a pocket, but I don't think it would be particularly comfortable there. It does come with a magnetic display stand, which is nice (though it took me nearly a week until I realized I had the display stand upside down -- and it works and looks much better right side up).

The one other potential downside: they really seem to want people to buy a set of either four or five of them in the different resistance levels. They don't really have options to just buy a single one -- other than the hardcore 20lb. one, which is priced so close to the various sets that it almost certainly makes sense to just upgrade to a set. And that will probably price it out of the range of many buyers. It's one thing to spend ~$10 to ~$15 on fidget toys, but this one requires you to spend around $50 or more. Considering you get a set of 4 or 5, the price per capsule could be as low as $9, which is not bad at all. But... you still have to buy all of them to get that kind of pricing and I'd imagine that's probably too much for many people. Still, it's a pretty cool device and is definitely good at what it's designed to do, so if you're obsessed with fidgeting, check it out.

