Mike Masnick’s Techdirt Profile


About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick

Posted on Techdirt - 21 June 2018 @ 10:40am

Activism & Doxing: Stephen Miller, ICE And How Internet Platforms Have No Good Options

from the and-for-fun,-the-cfaa-and-scraping dept

Last month, at the COMO Content Moderation Summit in Washington DC, I co-ran a "You Make the Call" session with Emma Llanso from CDT. The idea was to turn the audience into a content moderation/trust & safety team of a fictionalized social media platform. We showed numerous examples of content or accounts that were "flagged" and then showed the associated terms of service, and had the entire audience vote on what to do. One of the fictional examples involved someone posting a link to a third-party website "contactinfo.com" claiming to have the personal phone and email contact info of Harvey Weinstein and urging people "you know what to do!" with a hashtag. The relevant terms of service included this: "You may not post personal information about others without their consent."

The audience voting was pretty mixed on this. 47% of the audience punted on the question, choosing to escalate it to a supervisor as they felt they couldn't decide whether to leave the content up or take it down. 32% felt it should just be taken down. 10% said to just leave it up and another 10% said to put a content warning flag on the content. We joked a bit during the session that some of these examples were "ripped from the headlines" but apparently we predicted the headlines in this case, because there are two stories this week that touch on exactly this kind of thing.

Example one is the story that came out yesterday, in which Twitter chose to start locking the accounts of users who were either tweeting Trump senior advisor Stephen Miller's cell phone number, or merely linking to a Splinternews article that published his cell phone number (which I'm guessing has since been changed...).

Splinternews decided to publish Miller's phone number after multiple news reports attributed the inhumane* decision to separate children of asylum seekers from their parents to Miller, who has defended the plan. Other reports noted that Miller is enjoying all of the controversy over this policy. Splinternews, citing Donald Trump's own history of giving out the phone numbers of people who anger him, thought it was only fair that people be able to reach out to Miller.

This is -- for fairly obvious reasons -- a controversial decision. I think most news organizations would never do such a thing. Not surprisingly, the number spread rapidly on Twitter, and Twitter started locking all of those accounts until the tweets were removed. That seems at least well within reason under Twitter's rules that explicitly state:

You may not publish or post other people's private information without their express authorization and permission.

But, that question gets a lot sketchier when it comes to locking the accounts of people who merely linked to the Splinternews article. À la our fictionalized example, those people are not actually publishing or posting anyone's private info. They are posting a link to a third party that purports to have that information. And, of course, in this case, the situation is complicated even more than our fictionalized example because Splinternews is a news organization (owned by Univision), and Twitter also has said that it has a "newsworthy" exception to its rules.

Personally, I think it's the wrong call to lock the accounts of those linking to the news story, but... as we discovered in our own sample version, it's not an easy call and lots of people have strong opinions one way or the other. Indeed, part of the reason why Twitter may have decided to do this was that supporters of Trump/Miller started calling out the article as an example of doxxing and claiming that leaving it up showed that Twitter was unfairly biased against them. It is a no-win situation.

And, of course, it wouldn't take long before people started coming up with clever workarounds, such as Parker Higgins (citing the infamous 09F9 controversy in which the MPAA tried to censor the revelation of a cryptographic key that broke the MPAA's preferred DRM, and people responded by posting variations on the code, including a color chart in which the hex codes of the colors were the code), who posted the following:

Would Twitter lock his account for posting a two color image? At some point, the whole thing gets... crazy. That's not to argue that revealing someone's private cell phone number is a good thing -- no matter how you feel about Miller or the border policy. But just on the content moderation side, it puts Twitter in a no-win situation in which people are going to be pissed off no matter what it does. Oh, and of course, it also helped create something of a Streisand Effect. I certainly hadn't heard about the Splinternews article or that people were passing around Miller's phone number until the story broke about Twitter whacking at moles on its site.
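The 09F9-style workaround is easy to sketch, by the way: any hex key is just bytes, and every three bytes name a web color, so a "forbidden number" can always be re-encoded as a handful of colors. Here's a minimal Python illustration (the function name is mine; this is just to show the idea, not anyone's actual tool):

```python
def hex_to_colors(hex_key: str) -> list[str]:
    """Split a hex string into 24-bit chunks and render each as a CSS hex color."""
    # Pad to a multiple of 6 hex digits (3 bytes = one #RRGGBB color).
    padded = hex_key.ljust(-(-len(hex_key) // 6) * 6, "0")
    return ["#" + padded[i:i + 6].upper() for i in range(0, len(padded), 6)]

# The first half of the infamous AACS key, re-encoded as colors:
print(hex_to_colors("09F911029D74E35B"))  # ['#09F911', '#029D74', '#E35B00']
```

Which is exactly why "just take down the number" stops working: the same information survives any number of trivial re-encodings.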

And that takes us to the second example, which happened a day earlier -- and was also in response to people's quite reasonable* anger about the border policy. Sam Lavigne decided to make something of a public statement about how he felt about ICE by scraping** LinkedIn for profile information on everyone who works at ICE (and who has a LinkedIn public profile). His database included 1595 ICE employees. He wrote a Medium blog post about this, posted the repository to Github and another user, Russel Neiss, created a Twitter account (@iceHRgov) that tweeted out info about each of those employees from that database. Notice that none of those are linked. That's because all three companies took them down (though you can still find archives of the Medium post). There was also an archive of the Github repository, but it has since been memory-holed as well.
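Lavigne's repository is gone, but this kind of public-profile scraping generally amounts to fetching pages and pulling a few fields out of the markup. A minimal sketch against made-up profile HTML (the class names and sample data here are entirely invented for illustration, and -- per the footnote below -- actually scraping LinkedIn invites legal trouble):

```python
import re

# Hypothetical profile markup; real LinkedIn pages are far messier
# (and scraping them may violate the site's terms of service).
SAMPLE_HTML = """
<div class="profile">
  <h1 class="name">Jane Doe</h1>
  <p class="headline">Deportation Officer at ICE</p>
</div>
"""

def extract_field(html: str, css_class: str) -> str:
    """Pull the text of the first element carrying the given class."""
    match = re.search(r'class="%s"[^>]*>([^<]+)<' % re.escape(css_class), html)
    return match.group(1).strip() if match else ""

profile = {
    "name": extract_field(SAMPLE_HTML, "name"),
    "headline": extract_field(SAMPLE_HTML, "headline"),
}
print(profile)  # {'name': 'Jane Doe', 'headline': 'Deportation Officer at ICE'}
```

The point being: there is no hacking involved here. Everything collected was sitting in public pages that anyone's browser could fetch.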

Again... this raises a lot of questions. Github claimed that it removed the page for "violating community guidelines" -- specifically around "doxxing and harassment, and violating a third party's privacy." Medium claimed that the post violated rules against "doxxing" and specifically the "aggregation of publicly available information to target, shame, blackmail, harass, intimidate, threaten or endanger." Twitter, in Twitter's usual way, is not commenting. LinkedIn put out a statement saying: "We do not support or condone what immigration authorities are doing at the border, but we can’t allow the illegal use of our member data. We will take appropriate action to ensure our members’ data is protected and used properly."

Many people point out that all of this feels kind of ridiculous, seeing as this is all public info that the individuals chose to reveal about themselves on a public website. While Medium's expansive definition of doxxing makes things interesting by including an intent standard in releasing the info, even if it is publicly available, the whole thing, again, demonstrates how complex this is. I know that some people will claim that these are easy calls -- but, just for fun, try flipping the equation a bit. If you're anti-Trump, how would you feel if a prominent alt-right person compiled and posted your info -- even if publicly available -- on a site where alt-right folks gather, with the clear intent of having hordes of Trump trolls harass you? Be careful about the precedent you set.

If it were up to me, I think I would have come down differently than Medium, Github and Twitter in this case. My rationale: (1) all of this info was public information (2) that those individuals chose to place on a public website, knowing it was public (3) they are all employed by the federal government, meaning they are public servants and (4) while the compilation was done by someone who is clearly against the border policy, Lavigne never encouraged or suggested harassment of ICE agents. Instead, he wrote: "While I don’t have a precise idea of what should be done with this data set, I leave it here with the hope that researchers, journalists and activists will find it useful." He separately noted that he believed "it's important to document what's happening, and by whom." That seems to actually make a strong point in favor of leaving the data up, as there is value in documenting what's happening.

That said, reasonable people can disagree (even if there should be no disagreement about how inhumane the border policy has been*) on the question of what the appropriate way is for different platforms to handle these situations -- taking into account that this situation could play out with very different players in the future, and that there is value in being consistent.

This is the very point that we were demonstrating with that game that we ran at COMO. Many people seem to think that content moderation decisions are easy: you just take down the content that is bad, and leave up the content that is good. But it's pretty rare that the content is easily classified in one of those categories. There is an enormous gray area -- and much of it involves nuance and context, which is not always easy to come by -- and which may look incredibly different depending on where you sit and what kind of world you think we live in. I still think there are strong arguments that the platforms should have left much of the content discussed in this post up, but I'm not the one making that call.

When we ran that game in DC last month, it was notable that on every single example we used -- even the ones we thought were "easy calls" -- there were some audience members who selected every option in the game. That is, there was not a single situation in our examples in which everyone agreed what should be done. Indeed, since there were four options, and all four were chosen by at least one person in every single example, it shows just how difficult it really is to make these calls. They are subjective. And what plays into that subjective decision making includes your own views, your own perspective, your own reading of the content and the rules -- and sometimes third party factors, including how people are reacting and what public pressure you're getting (in both directions). It is an impossible situation.

This is also why the various calls to mandate that platforms do this or face legal liability are even more ridiculous and dangerous. There are no "right" answers to these decisions. There are solutions that seem better to lots of people, but plenty of others will disagree. If you think you know the "right" way that all of these questions should be handled, I guarantee you're wrong, and if you were in charge of these platforms, you'd end up feeling just as conflicted.

This is why it's really time to start thinking about and talking about better solutions. Simply calling on platforms to be the final arbiters of what goes online and what stays offline is not a workable solution.

* Just a side note: if you are among the small minority of ethically-challenged individuals who gets upset that I describe the policy as inhumane: fuck off. The policy is inhumane and if you're defending it, you should seriously take time to re-evaluate your ethics and your life choices. On a separate note, if you are among the people who are then going to try to justify this policy as "but Obama/others did it too," the same applies. Whataboutism is no argument here. The policy is inhumane no matter who did it, and pointing out that others did it too doesn't change that. And, as inhumane as it may have been in the past, it has been severely ramped up. There is no defense for it. Attempting to defend it only serves to out yourself as a horrible person who has issues. Seriously: get help.

** This doesn't even really fit in with this story, but scraping LinkedIn is (stupidly) incredibly dangerous. LinkedIn has a history of suing people for scraping public info off of its site. And even though it has lost some of those cases, the company appears to take a pretty aggressive stance towards scrapers. We can argue about how ridiculous this is, but, dammit, this post is already too long talking about other stuff, so we'll discuss it separately.


Posted on Free Speech - 20 June 2018 @ 10:43am

EU Parliamentary Committee Votes To Put American Internet Giants In Charge Of What Speech Is Allowed Online

from the bad-news dept

As we've been writing over the past few weeks, the EU Parliament's Legal Affairs Committee (JURI) voted earlier today on the EU's new Copyright Directive. Within that directive were two absolutely horrible ideas that are dangerous to an open internet -- a link tax and a mandatory copyright filtering requirement (i.e., the "censorship machines" proposal). While there was a big fight about it, and we heard that some in the EU Parliament were getting nervous about it, this morning they still voted in favor of both proposals and to move the entire Copyright Directive forward. The vote was close, but still went the wrong way:

Somewhat incredibly, no official rollcall tally was kept. MEP Julia Reda, however, has posted an unofficial roll call of who voted against internet freedom, showing (graphically) whether they voted for the link tax and/or censorship machines:

In case you can't see that, here's who voted according to Reda's list -- most voted for both of the bad proposals, but for the few who didn't vote for the link tax, I've noted that separately. These politicians deserve (1) to be called out for trying to destroy an open internet by giving in to legacy industries that want to censor it, and (2) to be voted out of office at the next election:

  • Axel Voss, Germany (who was in charge of this entire thing and who has regularly played dumb whenever people point out just how bad these proposals are. He appears completely beholden to legacy industry interests). Voss's name should become synonymous with the destruction of a free and open internet.
  • Pavel Svoboda, Czech Republic (voted for censorship machines, but not the link tax)
  • Rosa Estaras Ferragut, Spain
  • Tadeusz Zwiefka, Poland
  • Jozsef Szajer, Hungary
  • Francis Zammit Dimech, Malta
  • Luis de Grandes Pascual, Spain
  • Enrico Gasbarra, Italy
  • Mary Honeyball, UK
  • Jean-Marie Cavada, France
  • Marinho e Pinto, Portugal
  • Sajjad Karim, UK (voted for censorship machines, but not the link tax)
  • Joelle Bergeron, France
  • Marie-Christine Boutonnet, France
  • Gilles Lebreton, France
Note those last two votes from France, as Lebreton and Boutonnet are both members of the French National Front party, the same party whose leader, Marine Le Pen, has been out and about screaming about how unfair it is that the party's YouTube channel was deleted by automatic copyright filters -- the same filters that her own party just voted to make mandatory for all platforms. Incredible.

This is a hugely unfortunate series of events. Having the proposal approved by the JURI Committee makes it much, much harder to stop this Directive from becoming official. But it is not the end of the road. Reda will be forcing a vote from the entire EU Parliament on the issue:

This is an unacceptable outcome that I will challenge in the next plenary session, asking all 750 MEPs to vote on whether to accept the Committee’s result or open it up for debate in that larger forum, which would then give us a final chance to make changes.

This vote will likely happen on July 4. Let’s make this the independence day of the internet, the day we #SaveYourInternet from censorship machines and a link tax. Are you in?

The digital freedom group EDRi has also detailed the next steps in this process and created an infographic showing what still needs to happen:

It will be difficult to stop this freight train after this morning's vote, but not impossible. If you want to see the internet remain viable as a communications platform, rather than seeing it locked down as the new broadcast television, in which giant American companies have the final say in what you're allowed to say online, you should probably let the EU Parliament know sooner, rather than later.


Posted on Techdirt - 20 June 2018 @ 9:20am

Net Neutrality And The Broken Windows Fallacy

from the ajit-pai,-read-your-bastiat dept

I've mentioned the idea of the broken windows fallacy -- not to be confused with the long debunked broken windows theory of policing -- twice in the past in reference to net neutrality, including in my recent post about what Ajit Pai should have said about repealing net neutrality. But both times I talked about it, it was kind of buried in much longer articles, and the more I think about it, the more important I think it is in understanding why Pai and his supporters are so far off in their thinking and understanding on net neutrality. What I find most perplexing about this is that people who often position themselves as doing away with overly burdensome regulations -- which is a stance that Pai has staked out pretty clearly -- are usually the kind of folks who talk frequently about the broken windows fallacy. And yet, here, those same folks seem to be missing it.

As background, the broken windows fallacy comes from Frederic Bastiat, the French economist often associated with free market and libertarian thought, and it's his clever and highly evocative way of explaining why destructive behavior -- while it may generate economic activity -- is not good for the economy: focusing on that activity misses all of the other (often hidden) costs, including the opportunity cost of investing that money in more productive activity. Bastiat's version went as follows:

Have you ever witnessed the anger of the good shopkeeper, James Goodfellow, when his careless son has happened to break a pane of glass? If you have been present at such a scene, you will most assuredly bear witness to the fact that every one of the spectators, were there even thirty of them, by common consent apparently, offered the unfortunate owner this invariable consolation – "It is an ill wind that blows nobody good. Everybody must live, and what would become of the glaziers if panes of glass were never broken?"

Now, this form of condolence contains an entire theory, which it will be well to show up in this simple case, seeing that it is precisely the same as that which, unhappily, regulates the greater part of our economical institutions.

Suppose it cost six francs to repair the damage, and you say that the accident brings six francs to the glazier's trade – that it encourages that trade to the amount of six francs – I grant it; I have not a word to say against it; you reason justly. The glazier comes, performs his task, receives his six francs, rubs his hands, and, in his heart, blesses the careless child. All this is that which is seen.

But if, on the other hand, you come to the conclusion, as is too often the case, that it is a good thing to break windows, that it causes money to circulate, and that the encouragement of industry in general will be the result of it, you will oblige me to call out, "Stop there! Your theory is confined to that which is seen; it takes no account of that which is not seen."

It is not seen that as our shopkeeper has spent six francs upon one thing, he cannot spend them upon another. It is not seen that if he had not had a window to replace, he would, perhaps, have replaced his old shoes, or added another book to his library. In short, he would have employed his six francs in some way, which this accident has prevented.

Let us take a view of industry in general, as affected by this circumstance. The window being broken, the glazier's trade is encouraged to the amount of six francs; this is that which is seen. If the window had not been broken, the shoemaker's trade (or some other) would have been encouraged to the amount of six francs; this is that which is not seen.

And if that which is not seen is taken into consideration, because it is a negative fact, as well as that which is seen, because it is a positive fact, it will be understood that neither industry in general, nor the sum total of national labour, is affected, whether windows are broken or not.

Now let us consider James B. himself. In the former supposition, that of the window being broken, he spends six francs, and has neither more nor less than he had before, the enjoyment of a window.

In the second, where we suppose the window not to have been broken, he would have spent six francs on shoes, and would have had at the same time the enjoyment of a pair of shoes and of a window.

Now, as James B. forms a part of society, we must come to the conclusion, that, taking it altogether, and making an estimate of its enjoyments and its labours, it has lost the value of the broken window.

When we arrive at this unexpected conclusion: "Society loses the value of things which are uselessly destroyed;" and we must assent to a maxim which will make the hair of protectionists stand on end — To break, to spoil, to waste, is not to encourage national labour; or, more briefly, "destruction is not profit."

In short, breaking windows may generate economic activity for the glazier, but that doesn't count the economic cost to whoever had his window broken, or the opportunity cost of what the money spent on fixing the window could have bought instead.
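Bastiat's arithmetic is simple enough to put in a few lines. Here's his six-franc example as a toy tally -- obviously a cartoon of the economics, not a model, but it makes the "seen vs. unseen" accounting explicit:

```python
# Toy accounting of Bastiat's example: six francs either repair a
# broken window or buy the pair of shoes the shopkeeper also wanted.
glazier_income_broken = 6      # "that which is seen"
shoemaker_income_intact = 6    # "that which is not seen"

# Goods the shopkeeper ends up enjoying in each scenario.
goods_broken = {"window"}              # merely back where he started
goods_intact = {"window", "shoes"}     # window never broke, shoes bought

# Trade is "encouraged" by six francs either way...
assert glazier_income_broken == shoemaker_income_intact
# ...but society is down one pair of shoes when the window breaks.
print(goods_intact - goods_broken)  # {'shoes'}
```

Same six francs of "activity" in both scenarios; the difference is entirely in the unseen column.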

So how does this apply to net neutrality? Well, Ajit Pai and nearly all of the rather vocal supporters of taking away net neutrality rules continually go back to the claim that the rules harmed broadband infrastructure investment. We'll leave aside the (rather important) point that this claim is not even remotely close to true -- but even assuming it is, it's still a broken windows fallacy.

That's because broadband infrastructure investment is not the entire market, and focusing just on that is the same as just focusing on the economic activity for the glazier created by a broken window. To take this to the extreme case: if we want to stimulate broadband infrastructure investment, just rip up the current internet -- and then we'd need to spend a ton on rebuilding the internet. Yes, that would be the best way to "stimulate" a massive internet infrastructure investment, but the costs to everyone else would be dire.

In the same way, when the FCC focuses just on broadband infrastructure, it is ignoring the costs to everyone else who uses the internet. Or, as per Bastiat's story, the FCC is ignoring the costs to the guy whose window is broken, as well as all of the opportunity costs of the money he spends on the glazier that doesn't go towards more productive pursuits.

In the net neutrality world, those costs are massive. They include the costs to nearly all internet platforms and services, which now face massive uncertainty about whether or not ISPs will end up abusing their power to limit access (or, more likely, charge for preferred access). They include the risk of the big broadband companies favoring their own content and service partners to effectively shut out independent services. And they include the costs to the public, who have less choice and fewer services that they can use, and who are more locked in to a dwindling number of giant broadband companies.

In short, Ajit Pai's FCC has fallen completely for the broken windows fallacy, by focusing just on one narrow area of economic activity, without even being willing to acknowledge that it will negatively impact a much wider swath of the economy. This is especially disappointing to see, considering that Pai and his supporters keep claiming that they are the ones to "bring economics back" to the FCC, and they are the ones who argued that the Tom Wheeler FCC ignored economics. Yet, when you look at the details, it's Pai and his supporters who seem to be the ones sticking their heads in the sand here and, as Bastiat noted, confining their theory to "that which is seen" and taking "no account of that which is not seen."

In economics this is a pretty 101-level mistake. That the FCC is making it in dismantling a key concept that makes the internet function competitively is particularly disappointing.


Posted on Techdirt - 19 June 2018 @ 10:44am

Boston Globe Posts Hilarious Fact-Challenged Interview About Regulating Google, Without Any Acknowledgement Of Errors

from the and-we-wonder-why-news-is-failing dept

Warning: this article will discuss a bunch of nonsense being said in a major American newspaper about Google. I fully expect that the usual people will claim that I am writing this because I always support Google -- which would be an interesting point if true -- but of course it is not. I regularly criticize Google for a variety of sketchy practices. However, what this story is really about is why the Boston Globe would publish, without fact checking, a bunch of complete and utter nonsense.

The Boston Globe recently put together an entire issue about "Big Tech" and what to do about it. I'd link to it, but for some reason when I click on it, the Boston Globe is now telling me it no longer exists -- which, maybe, suggests that the Boston Globe should do a little more "tech" work itself. However, a few folks sent in this fun interview with noted Google/Facebook hater Jonathan Taplin. Now, we've had our run-ins with Taplin in the past -- almost always to correct a whole bunch of factual errors that he makes in attacking internet companies. And, it appears that we need to do this again.

Of course, you would think that the Boston Globe might have done this for us, seeing as they're a "newspaper" and all. Rather than just printing the words verbatim of someone who is going to say things that are both false and ridiculous, why not fact check your own damn interview? Instead, it appears that the Globe decided "let's find someone to say mean things about Google" and turned up Taplin... and then no one at the esteemed Globe decided "gee, maybe we should check to see if he actually knows what he's talking about or if he's full of shit." Instead, they just ran the interview, and people who read it without knowing that Taplin is laughably wrong won't find out about it unless they come here. But... let's dig in.

What would smart regulation look like?

You start with fairly rigorous privacy regulations where you have the ability to opt out of data collection from Google. Then you look at something like a modification of the part of the Digital Millennium Copyright Act, which is what is known as safe harbor. Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from, which is that no one can sue them for doing anything wrong.

Ability to opt out of data collection -- fair enough. To some extent that's already possible if you know what you're doing, but it would be good if Google/Facebook made it easier. Honestly, though, that's not going to have much of an impact. I still think the real solution to the dominance of Google/Facebook is to enable more competition that can provide better services and help limit the power of those guys. But Taplin's suggestion seems to be going in the other direction, seeking to lock in their power while complaining about them.

The "modification" of the DMCA, for example, would almost certainly lock in Google and Facebook and make it nearly impossible for competitors to step up. Also, the DMCA is not "known as safe harbor." The DMCA -- a law that was almost universally pushed by the record labels -- is a law that updated copyright law in a number of ways, including giving copyright holders the power to censor on the internet, without any due process or judicial review of whether or not infringement had taken place. There is a small part of it, within Section 512, that includes a very limited safe harbor, that says that while actual infringers are still liable for infringement, the internet platforms they use are not liable if they follow a bunch of rules, including removing the content expeditiously and kicking people off their platform for repeat infringement.

The idea that "Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from" is complete and utter nonsense, and the Boston Globe's Alex Kingsbury should have pushed back on it. The Copyright Office's database of DMCA registered agents includes nearly 9,000 companies (including ours!), because the DMCA's 512 safe harbors apply to any internet platform who registers. Google, Facebook and Twitter don't get special treatment.

Furthermore, as a new report recently showed, taking away such safe harbors would do more to entrench the power of Google, Facebook and Twitter since all three companies can deal with such liability, while lots of smaller companies and upstarts cannot. It boggles the mind that the Boston Globe let Taplin say something so obviously false without challenging him.

And, we haven't even gotten to the second half of that sentence, which is the bizarre and simply false claim that the DMCA's Section 512 means that "no one can sue them for doing anything wrong." Again, this is just factually incorrect, and a good journalist would challenge someone for making such a blatantly false claim. The DMCA's 512 does not, in any way, stop anyone from suing anyone "for doing anything wrong." That's ridiculous. What the DMCA's 512 says is that a copyright holder will be barred from suing a platform for copyright infringement if a user (not the platform) infringes on copyright and, when notified of that alleged infringement, the platform expeditiously removes that content. In addition to that, thanks to various court rulings, the DMCA's safe harbors are limited in other ways, including that the platforms cannot encourage their use for infringement and they must have implemented repeat infringer policies. Nowhere in any of that does it say that platforms can't be sued for doing anything wrong.
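To see just how narrow that shield actually is, here's the rough shape of the 512 analysis sketched as Python. This is a deliberate simplification for illustration only -- the real statute has many more conditions, the parameter names are mine, and none of this is legal advice:

```python
def safe_harbor_applies(uploaded_by_user: bool,
                        removed_expeditiously: bool,
                        induces_infringement: bool,
                        has_repeat_infringer_policy: bool) -> bool:
    """Roughly: 512 shields a platform from *copyright* claims over
    *user* uploads, and only while it follows the statute's rules."""
    if not uploaded_by_user:
        return False  # the platform's own posts remain fully its liability
    if induces_infringement or not has_repeat_infringer_policy:
        return False  # Grokster-style inducement / no policy forfeits the shield
    return removed_expeditiously  # must take content down promptly on notice

# A compliant platform hosting a user upload is shielded from the
# copyright claim; nothing here shields "anything wrong" generally.
print(safe_harbor_applies(True, True, False, True))   # True
print(safe_harbor_applies(False, True, False, True))  # False
```

Note that the function never even sees non-copyright claims: defamation, fraud, contract and everything else is simply outside 512's scope, which is the opposite of "complete liability protection."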

If the platform does something wrong, they absolutely can be sued. It's simply a fantasy interpretation of the DMCA to pretend otherwise. Why didn't the Boston Globe point out these errors? I have no idea, but they let the interview and its nonsense continue.

In other words, they have complete liability protection from being sued for any of the content that is on their services. That is totally unique. Obviously newspapers doesn’t get that protection. And of course also [tech giants] have other advantages over all other corporations; all of the labor that users put in is basically free. Most of us work an hour a day for Google or Facebook improving their services, and we don’t get anything for that other than just services.

Again, they do not have "complete liability protection from being sued for any content that is on their services." Anything they post themselves, they are still liable for. Anything that a user posts on its platform, if the platform does not comply with DMCA 512, the platform can still be liable for. All DMCA 512 is saying is that they can be liable for a small sliver of content if they fail to follow the rules set out in the law that was pushed for heavily by the recording industry.

Next up, the claim that "obviously newspapers don't get that protection" is preposterous. Of course they do. A quick search of the Copyright Office database shows registrations by tons of newspaper companies, including the Chicago Tribune, the Daily News, USA Today, the Las Vegas Review-Journal, the LA Times, the Baltimore Sun, the Chicago Sun-Times, the Albany Times Union, the NY Times, the Times Herald, the Times Picayune, the Washington Times, the Post Standard, the Palm Beach Post, the Cincinnati Post, the Kentucky Post, the Seattle Post-Intelligencer, the NY Post, the St. Louis Post-Dispatch, the Washington Post, Ann Arbor News, the Albany Business News, Reno News & Review, the Dayton Daily News, Springfield News Sun, the Des Moines Register, the Cincinnati Enquirer, the Branson News Leader, the Bergen News, the Pennysaver News, the News-Times, the New Canaan News, Orange County News, San Antonio News-Express, the National Law Journal, the Williamsburg Journal Tribune, the Wall Street Journal, the Jacksonville Journal-Courier, the Lafayette Journal-Courier, the Oregon Statesman Journal, the Daily Journal and on and on and on. I literally just got tired of writing down names. There are a lot more.

Notably missing? As far as I can tell, the Boston Globe has not registered a DMCA agent. Odd that.

But, back to the point: yes, newspapers get the same damn protection. There is nothing special about Google, Facebook and Twitter. And by now Taplin must know this. So should the Boston Globe.

Ah, but perhaps -- you'll argue -- he means that the paper versions don't get the same protection, while the internet sites do. And, you'd still be wrong. All the DMCA 512 says is that you don't get to put liability on a third party who had no say in the content posted. With your normal print newspaper that's not an issue because a newspaper is not a user-generated content thing. It has an editor who is choosing what's in there. That's not true of online websites. And that's why we need a safe harbor like the DMCA's, otherwise people stupidly blame a platform for actions of their users.

And let's not forget -- because this is important -- anything a website does to directly encourage infringement would take away those safe harbors, a la the Grokster ruling in the Supreme Court, which said you lose those safe harbors if you're inducing infringement. In other words, basically every claim made by Taplin here is wrong. Why does the Boston Globe challenge none of them? What kind of interview is this?

And we're just on the first question. Let's move on.

What would eliminating the “safe harbor” provision in the Digital Millennium Copyright Act mean?

YouTube wouldn’t be able to post 44,000 ISIS videos and sell ads for them.

Wait, what? Once again, there's so much wrong in just this one sentence that it's almost criminal that the Boston Globe's reporter doesn't say something. Let's start with this one first: changing copyright law to get rid of a safe harbor will stop YouTube from posting ISIS videos? What about copyright law has any impact on ISIS videos one way or the other? Absolutely nothing. Even assuming that ISIS is somehow violating someone's copyright in their videos (which, seems unlikely?) what does that have to do with anything?

Second, YouTube is not posting any ISIS videos. YouTube is not posting any videos. Users of YouTube are posting videos. That's the whole point of the safe harbors. That it's users doing the uploading and not the platform. And the point of the DMCA safe harbor is to clarify the common sense point that you don't blame the tool for the user's actions. You don't blame Ford because someone drove a Ford as a getaway car in a bank robbery. You don't blame AT&T when someone calls in a bomb threat.

Third, YouTube has banned ISIS videos (and any terrorist propaganda videos) going back a decade. Literally back to 2008. That's when YouTube stopped allowing videos from terrorist organizations. How could Taplin not know this? How could the Boston Globe not know this? Over the years, YouTube has even built new algorithms designed to automatically spot "extremist" content and block it (how well that works is another question). Indeed, YouTube is so aggressive in taking down such videos that it's been known to also take down the videos of humanitarian groups documenting war crimes by terrorists.

Finally, YouTube has long refused to put ads on anything deemed controversial content. Also, it won't put ads on videos of channels without lots and lots of followers.

So basically this one short sentence -- 14 words long -- has four major factual errors in it. Wow. And he's not done yet.

Or they wouldn’t be able to put up any musician’s work, whether they wanted it on the service or not, without having to bear some consequences. That would really change things.

Again, YouTube is not the one putting up works. Users of YouTube are. And if and when those people upload a video -- that is not covered by fair use or other user rights -- and it is infringing, then the copyright holder has every right under the DMCA that Taplin misstates earlier to force the video down. And if YouTube doesn't take it down, then they face all the consequences of being an infringer.

So what would "really change" if we removed the DMCA's safe harbors? Well, YouTube has already negotiated licenses with basically every record label and publisher at this point. So, basically nothing would change on YouTube. But, you know, for every other platform, they'd be screwed. So, Taplin's plan to "break up" Google... is to lock the company in as the only platform. Great.

And this leaves aside the fact (whether we like it or not) that YouTube's ContentID system, which allows copyright holders to "monetize" infringing works, has actually opened up a (somewhat strange) new revenue stream for artists, who are now actually profiting greatly from letting people use their works without going through the hassle of negotiating a full license.

I also think it would change the whole fake news conversation completely, because, once Facebook or YouTube or Google had to take responsibility for what’s on their services, they would have to be a lot more careful to monitor what goes on there.

Again... what? What in the "whole fake news conversation" has anything to do with copyright? This is just utter nonsense.

Second, if platforms are suddenly "responsible" for what's on their service, then... Taplin is saying that the very companies he hates, that he thinks are the ruination of culture and society, should be the final arbiters of what speech is okay online. Is that really what he wants? He wants Google and Facebook and YouTube -- three platforms he's spent years attacking -- determining if his own speech is fake news?


Because, let's face it, as much as I hate the term, this interview is quintessential fake news. Nearly every sentence Taplin says includes some false statement -- often multiple false statements. And the Boston Globe published it. Should the Boston Globe now be liable for Taplin's deranged understanding of the law? Should we be able to sue the Boston Globe because it published utter nonsense uttered by Jonathan Taplin? Because that's what he's arguing for. Oh, but, I forgot, according to Taplin, the Boston Globe -- as a newspaper -- has no such safe harbor, so it's already fair game. Sue away, people...

Wouldn’t that approach subject these services to death by a thousand copyright-infringement lawsuits?

It would depend on how it was put into practice. When someone tries to upload pornography to YouTube, an artificial intelligence agent sees a bare breast and shunts it into a separate queue. Then a human looks at it and says, “Well, is this National Geographic, or is this porn?” If it’s National Geographic it probably gets on the service, and if it’s porn it goes in the trash. So, it’s not like they’re not doing this already. It’s just they’ve chosen to filter porn off of Facebook and Google and YouTube but they haven’t chosen to filter ISIS, hate speech, copyrighted material, fake news, that kind of stuff.

This is just a business decision on their part. They know every piece of content that’s being uploaded because they used the ID to decide who gets the advertising. So they could do all of this very easily. It’s just they don’t want to do it.

First off, finally, the Boston Globe reporter pushes back slightly. Not by correcting any of the many, many false claims that Taplin has made so far, but in highlighting a broader point: that Taplin's solution is completely idiotic and unworkable, because we already see the abuse that the DMCA takedown process gets. But... Taplin goes into spin mode and suggests there's some magic way that this system wouldn't be abused for censorship (even though the existing system is).

Then he explains his fantasy-land explanation of how YouTube moderation actually works. He's wrong. This is not how it works. Most content is never viewed by a human. But let's delve in deeper again. Taplin and some of his friends like to point to the automated filtering of porn. But porn is something that is much easier to teach a computer to spot. A naked breast is something you can teach a computer to spot pretty well. Fake news is not. Hate speech is not. Separately, notice that Taplin never ever mentions ContentID in this entire interview? Even though that does the very thing he seems to insist that YouTube refuses to do? ContentID does exactly what he claims this porn filter is doing. But he pretends it doesn't exist and hasn't existed for years.

And the Boston Globe just lets it slide.

Also, again, Taplin insists that YouTube and Facebook "haven't chosen to filter ISIS" even though both companies have done so for years. How does Taplin not know this? How does the Boston Globe reporter not know this? How does the Boston Globe think that its ignorant reporter should interview this ignorant person? Why did they then decide to publish any of this? Does the Boston Globe employ fact checkers at all? The mind boggles.

Meanwhile, we really shouldn't let it slide that Taplin -- when asked specifically about copyright infringement -- seems to argue that if copyright law was changed, it would somehow magically lead Google to stop ISIS videos, hate speech and fake news among other things. None of those things has anything to do with copyright law. Shouldn't he know this? Shouldn't the Boston Globe?

As for the second paragraph, it's also utter nonsense. YouTube "knows every piece of content that's being uploaded because they used the ID to decide who gets the advertising." What does that even mean? What is "the ID"? And, even in the cases where YouTube does decide to place ads on videos (again, which is greatly restricted, and is not for all content), the fact that Google's algorithms can try to insert relevant ads does not mean that Google "knows" what's in the content. It just means that an algorithm does some matching. And, sure, Taplin might point out that if they can do that, why can't they also do it for copyright and ISIS and the answer is that THEY DO. That's the whole fucking point.

Again, why is the Boston Globe publishing utter nonsense?

Is Google trying to forestall this kind of regulation?

Ultimately YouTube is already moving towards being a service that pays content providers. They announced last month that they’re going to put up a YouTube music channel. And that will look much more like Spotify than it looks like YouTube. In other words, they will license content from providers, they will charge $10 a month for the service, and you will then get curated lists of music. From the point of view of the artists and the record company, it’ll be a lot better than the system that exists now — where essentially YouTube says to you, your content is going to be on YouTube whether you want it to or not, so check this box if you want us to give you a little bit of the advertising.

YouTube has been paying content providers for years. I mean, it's been years since the company announced that in one year alone, it had paid musicians, labels and publishers over a billion dollars. And Taplin claims they're "moving" to such a model? Is he stuck in 2005? And, they already license content from providers. The $10/month thing again, is not new (it's been available for years), but that's a separate service, which is not the same as regular YouTube. And it has nothing to do with any of this. If the DMCA changed, then... that wouldn't have any impact at all on any of this.

Still, let's recap the logic here: So YouTube offering a music service, which it set up to compete with Spotify and Apple Music, and which has nothing to do with the regular YouTube platform, will somehow "forestall" taking away the DMCA's safe harbors? How exactly does that work? I mean, wouldn't the logic work the other way?

The whole interview is completely laughable. Taplin repeatedly makes claims that don't pass the laugh test for anyone with even the slightest knowledge of the space. And nowhere does the Boston Globe address the multiple outright factual errors. Sure, I can maybe (maybe?) understand not pushing back on Taplin in the moment of the interview. But why let this go to print without having someone (anyone?!?) with even the slightest understanding of the law or how YouTube actually works, check to see if Taplin's claims were based in reality? Is that really so hard?

Apparently it is for the Boston Globe and its "deputy editor" Alex Kingsbury.


Posted on Techdirt - 19 June 2018 @ 3:19am

Dear EU Parliament: Why Are You About To Allow US Internet Companies To Decide What EU Citizens Can Say Online?

from the such-a-bizarre-thing dept

We've pointed this out over and over again with regards to all of the various attempts to "regulate" the internet giants of Google and Facebook: nearly every proposal put forth to date creates a regulatory regime that Google and Facebook can totally handle. Sure, they might find it to be a nuisance, but it's well within the resources of both companies to handle whatever is thrown their way. However, most other companies are then totally fucked, because they simply cannot comply in any reasonable manner. And, yet, these proposals keep coming -- and people keep celebrating them in the false belief that they will somehow "contain" the two internet giants, when the reality is that it will lock them in as the de facto dominant internet players, making it nearly impossible for upstarts and competitors to enter the market.

This seems particularly bizarre when we're talking about the EU's approach to copyright. As we've been discussing over the past few weeks, the EU Parliament's Legal Affairs Committee is about to vote on the EU Copyright Directive, which has some truly awful provisions in it -- including Article 11's link tax and Article 13's mandatory filters. The rhetoric around both of these tends to focus on just how unfair it is that Google and Facebook have so much power, and are making so much money while legacy companies (news publishers for Article 11 and recording companies for Article 13) aren't making as much as they used to.

But, as more and more people are starting to point out, if the Copyright Directive moves forward as is, it will only serve to lock in those two companies as the controllers of the internet. So why is it that the European Parliament seems hellbent on handing the internet over to American internet companies? In the link above, Cory Doctorow tries to parse out what the hell they're thinking:

These proposals will make starting new internet companies effectively impossible -- Google, Facebook, Twitter, Apple, and the other US giants will be able to negotiate favourable rates and build out the infrastructure to comply with these proposals, but no one else will. The EU's regional tech success stories -- say Seznam.cz, a successful Czech search competitor to Google -- don't have $60-100,000,000 lying around to build out their filters, and lack the leverage to extract favorable linking licenses from news sites.

If Articles 11 and 13 pass, American companies will be in charge of Europe's conversations, deciding which photos and tweets and videos can be seen by the public, and who may speak.

In a (possibly paywalled) article over at Wired looking at the Copyright Directive, Doctorow is also quoted explaining just how massively this system will be abused for censorship of EU citizens:

"Because the directive does not provide penalties for abuse – and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public display – it will be trivial to claim copyright over key works at key moments or use bots to claim copyrights on whole corpuses.

The nature of automated systems, particularly if powerful rightsholders insist that they default to initially blocking potentially copyrighted material and then releasing it if a complaint is made, would make it easy for griefers to use copyright claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum or, more generally, public domain content such as the entirety of Wikipedia or the complete works of Shakespeare.

"Making these claims will be MUCH easier than sorting them out – bots can use cloud providers all over the world to file claims, while companies like Automattic (Wordpress) or Twitter, or even projects like Wikipedia, would have to marshall vast armies to sort through the claims and remove the bad ones – and if they get it wrong and remove a legit copyright claim, they face unbelievable copyright liability."

As we noted yesterday in highlighting a new paper looking at what happened when similar laws were implemented, the increase in censorship is not an idle threat or crying wolf. It happens. Frequently.

And, yet, we still have EU politicians and supporters of the Copyright Directive -- while they complain about Google and Facebook's power over the internet -- turning around and pushing for plans that will not only lock in both of those companies as the dominant internet companies, but also force upon them the sole power to censor the speech of EU citizens. And they're about to vote on this in just hours and don't seem to have the first clue about what a dumb idea all of this is.


Posted on Techdirt - 18 June 2018 @ 3:30pm

UK Lawmaker Who Quizzed Facebook On Its Privacy Practices Doesn't Seem To Care Much About His Own Website's Privacy Practices

from the just-sayin' dept

Jason Smith, over at Indivigital, has been doing quite a job of late in highlighting the hypocrisy of European lawmakers screaming at internet companies over their privacy practices while doing little, on their own websites, to meet what they're demanding of those companies. He pointed out that the EU Commission itself appeared to be violating the GDPR, leading it to claim that it was exempt. And now he's got a new story up, pointing out that the website of UK Parliament member Damian Collins, chair of the Digital, Culture, Media and Sport Committee... does not appear to have a privacy policy in place, even though he took the lead in quizzing Facebook about its own privacy practices and its lack of transparency on how it treats user data.

Now, there are those of us who believe that privacy policies are a dumb idea that don't do anything to protect people's privacy -- but if you're going to be grandstanding about how Facebook is not transparent enough about how it handles user data, it seems like you should be a bit transparent yourself. Smith's article details how many other members of the Digital, Culture, Media and Sport Committee don't seem to be living up to their own standards. They may have been attacking social media sites... but were happy to include tracking widgets from those very same social media sites on their own sites.

Julie4Sunderland.co.uk is maintained on behalf of Julie Elliott MP, a fellow member of the Digital, Culture, Media and Sport Committee. It serves third-party content from Facebook and upwards of 18 cookies on visitor’s computers.

Likewise, websites of fellow members Jo Stevens, Simon Hart, Julian Knight, Ian Lucas, Rebecca Pow and Giles Watling are also collecting data on behalf of the social networking giant from their visitors.

The websites of Julian Knight, Ian Lucas, Giles Watling and Rebecca Pow also collect data on visitors for Twitter. Meanwhile, Rebecca Pow’s website sets third-party cookies from YouTube.com.

Damian Collins’s website features a cookie message however the link in the message takes the user to a contact page that contains a form that requests the user’s name and email address.

The page on which the form resides contains a link that activates a modal window and encourages the user to sign-up for Damian Collins’s email newsletter.

Moreover, the Parliamentary page for the Digital, Culture, Media and Sport committee is also setting and serving third-party cookies and content from Twitter.

Now, you can reasonably argue that the websites of politicians aren't the same as a social media giant used by like half of the entire world. And there is a point there. But it's also worth noting that it's amazing how accusatory politicians and others get towards social media sites when they don't seem to live up to the same standards on their own websites. Maybe Facebook should do better -- but the very actions of these UK Parliament members, at the very least, suggest that even they recognize that what they're demanding of Facebook is more cosmetic "privacy theater" than anything serious.


Posted on Techdirt - 18 June 2018 @ 11:57am

French Political Party Voting For Mandatory Copyright Filters Is Furious That Its YouTube Channel Was Deleted By Filter

from the but-we-didn't-mean-for-US dept

It's been a long tradition here on Techdirt to show how politicians and political parties pushing for stricter, more draconian copyright laws are often found violating those same laws. But the French Rassemblement National (National Rally) party is taking this to new levels -- whining about the enforcement of internet filters, just as it's about to vote in favor of making such filters mandatory. Leaving aside that Rassemblement National, which is the party headed by Marine Le Pen, is highly controversial, and was formerly known as Front National, it is still an extremely popular political party in France. And, boy, is it ever pissed off that YouTube took down its YouTube channel over automatically generated copyright strikes. Le Pen is particularly angry that YouTube's automatic filters were unable to recognize that they were just quoting other works:

Marine Le Pen was quoted as saying, “This measure is completely false; we can easily assert a right of quotation [to illustrate why the material was well within the law to broadcast]”.

Yes, but that's the nature of automated filters. They cannot tell what is "fair use" or what kinds of use are acceptable for commentary or criticism. They can just tell "was this work used?" and if so "take it down."
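That limitation is mechanical, and easy to see in code. Here is a minimal sketch of a fingerprint-style matcher (the hashing scheme, the `is_flagged` helper and its threshold are all invented for illustration; ContentID's actual system is proprietary and far more sophisticated):

```python
# Toy sketch of a content-matching filter. The matcher can only answer
# "does this upload contain the reference work?" -- it has no concept of
# quotation, commentary, criticism, or fair use.

def fingerprints(samples, window=4):
    """Hash overlapping windows of a signal into a set of fingerprints."""
    return {hash(tuple(samples[i:i + window]))
            for i in range(len(samples) - window + 1)}

def is_flagged(upload, reference, threshold=0.2):
    """Flag the upload if enough of its fingerprints match the reference."""
    up = fingerprints(upload)
    if not up:
        return False
    overlap = len(up & fingerprints(reference)) / len(up)
    return overlap >= threshold

work = list(range(100))                        # the copyrighted "work"
quotation = [0] * 50 + work[10:30] + [0] * 50  # mostly new material, brief quote

# Both a full copy and a short quotation trip the filter; only the
# presence of matching material is measured, never its context.
```

A short lawful quotation produces the same kind of fingerprint overlap as a wholesale copy, so a threshold-based matcher flags both. Judging the context of the match is precisely the step the filter cannot perform.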

Given all that, and the fact that Le Pen complained that this was "arbitrary, political and unilateral," you have to think that her party is against the EU Copyright Directive proposal, which includes Article 13, which would make such algorithmic filters mandatory. Except... no. Within the EU Parliament, Rassemblement National is in a coalition with a bunch of other anti-EU parties known as Europe of Nations and Freedom, or ENF. And how does ENF feel about Article 13? MEP Julia Reda has a handy dandy chart showing that ENF is very much in favor of Article 13 (and the Article 11 link tax).

So... we have a major political party in the EU, whose own YouTube channel has been shut down thanks to automated copyright filters in the form of YouTube's ContentID. And that party is complaining that ContentID, which is the most expensive and the most sophisticated of all the copyright filters out there, was unable to recognize that they were legally "quoting" another work... and their response is to order every other internet platform to install their own filters. Really?


Posted on Techdirt - 18 June 2018 @ 10:44am

Lessons From Making Internet Companies Liable For Users' Speech: You Get Less Speech, Less Security And Less Innovation

from the not-good dept

Stanford's Daphne Keller is one of the world's foremost experts on intermediary liability protections and someone we've mentioned on the website many times in the past (and have had her on the podcast a few times as well). She's just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all. That's a silly idea that no one who has any understanding of these issues thinks is a reasonable idea. The concern is that as many people (including regulators) keep pushing to pin liability on internet companies for the activities of their users, it creates some pretty damaging side effects. Specifically, the paper details how it harms speech, makes us less safe, and harms the innovation economy. It's actually kind of hard to see what the benefit side is on this particular cost-benefit equation.

As the paper notes, it's quite notable how the demands from people about what platforms should do keeps changing. People keep demanding that certain content gets removed, while others freak out that too much content is being removed. And sometimes it's the same people (they want the "bad" stuff -- i.e., stuff they don't like -- removed, but get really angry when the stuff they do like is removed). Perhaps even more importantly, the issues for why certain content may get taken down are the same issues that often involve long and complex court cases, with lots of nuance and detailed arguments going back and forth. And yet, many people seem to think that private companies are somehow equipped to credibly replicate that entire judicial process, without the time, knowledge or resources to do so:

As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people’s standards.

Keller then looked at a variety of examples involving intermediary liability to see what the evidence says would happen if we legally delegate private internet platforms into the role of speech police. It doesn't look good. Free speech will suffer greatly:

The first cost of strict platform removal obligations is to internet users’ free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms’ incentives to err on the side of taking things down. Germany’s new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove “obviously” unlawful content within twenty-four hours’ notice. This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician, and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign. We cannot know what other unnecessary deletions have passed unnoticed.

From there, the paper explores the issue of security. Attempts to stifle terrorists' use of online services by pressuring platforms to remove terrorist content may seem like a good idea (assuming we agree that terrorism is bad), but the actual impact goes way beyond just having certain content removed. And the paper looks at what the real world impact of these programs have been in the realm of trying to "counter violent extremism."

The second cost I will discuss is to security. Online content removal is only one of many tools experts have identified for fighting terrorism. Singular focus on the internet, and overreliance on content purges as tools against real-world violence, may miss out on or even undermine other interventions and policing efforts.

The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside—preventing terrorist attacks—is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go “faster and further” in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence—not just online speech—need to factor in these unintended consequences if they are to set wise policies.

Finally, the paper looks at the impact on innovation and the economy and, again, notes that putting liability on platforms for user speech can have profound negative impacts.

The third cost is to the economy. There is a reason why the technology-driven economic boom of recent decades happened in the United States. As publications with titles like “How Law Made Silicon Valley” point out, our platform liability laws had a lot to do with it. These laws also affect the economic health of ordinary businesses that find customers through internet platforms—which, in the age of Yelp, Grubhub, and eBay, could be almost any business. Small commercial operations are especially vulnerable when intermediary liability laws encourage over-removal, because unscrupulous rivals routinely misuse notice and takedown to target their competitors.

The entire paper weighs in at a neat 44 pages and it's chock full of useful information and analysis on this very important question. It should be required reading for anyone who thinks that there are easy answers to the question of what to do about "bad" content online, and it highlights that we actually have a lot of data and evidence to answer the questions that many legislators seem to be regulating based on how they "think" the world would work, rather than how the world actually works.

Current attitudes toward intermediary liability, particularly in Europe, verge on “regulate first, ask questions later.” I have suggested here that some of the most important questions that should inform policy in this area already have answers. We have twenty years of experience to tell us how intermediary liability laws affect, not just platforms themselves, but the general public that relies on them. We also have valuable analysis and sources of law from pre-internet sources, like the Supreme Court bookstore cases. The internet raises new issues in many areas—from competition to privacy to free expression—but none are as novel as we are sometimes told. Lawmakers and courts are not drafting on a blank slate for any of them.

Demands for platforms to get rid of all content in a particular category, such as “extremism,” do not translate to meaningful policy making—unless the policy is a shotgun approach to online speech, taking down the good with the bad. To “go further and faster” in eliminating prohibited material, platforms can only adopt actual standards (more or less clear, and more or less speech-protective) about the content they will allow, and establish procedures (more or less fair to users, and more or less cumbersome for companies) for enforcing them.

On internet speech platforms, just like anywhere else, only implementable things happen. To make sound policy, we must take account of what real-world implementation will look like. This includes being realistic about the capabilities of technical filters and about the motivations and likely choices of platforms that review user content under threat of liability.

This is an important contribution to the discussion, and highly recommended. Go check it out.


Posted on Techdirt - 18 June 2018 @ 3:23am

Norwegian Court Orders Website Of Public Domain Court Decisions Shut Down With No Due Process

from the this-is-messed-up dept

What's up Europe? We've been talking a lot about insanity around the new copyright directive, but the EU already has some pretty messed up copyright/related rights laws on the books that are creating absurd situations. The following is one of them. One area where US and EU laws differ is on the concept of the "database right." The US does not grant a separate copyright on a collection of facts. The EU does. Studies have shown how this is a horrible idea, and if you compare certain database-driven industries in the US and the EU, you discover how much damage database rights do to innovation, competition and the public. But, alas, they still exist. And they continue to be used in positively insane ways.

Enter Hakon Wium Lie. You might know him as basically the father of Cascading Style Sheets (CSS). Or the former CTO of the Opera browser. Or maybe even as the founder of the Pirate Party in Norway. Either way, he's been around a while in this space, and knows what he's talking about. Via Boing Boing we learn that: (1) Wium Lie has been sued for a completely absurd reason of (2) helping a site publish public domain court rulings that (3) are not even protected by a database right and (4) the judge ruled in favor of the plaintiff (5) in 24 hours (6) before Lie could respond and (7) ordered him to pay the legal fees of the other side.

I've numbered these because I had to break out each absurd part separately just to start to try to comprehend just how ridiculous the whole thing is. And now, let's go through how each part is absurd in turn:

1. Wium Lie is being sued as an accomplice to the site rettspraksis.no by an operation called Lovdata. Wium Lie tells the entire history in his post, but way back in the early days of the web, while he was helping to create CSS, Wium Lie also helped put Norway's (public domain) laws online. At the time, that same company, Lovdata, was charging people $1-per-minute to access the laws. Really. Eventually, Lovdata dropped the fees and is now the official free publisher of the laws in Norway. Of course, statutory law is just one part of "the law." Case law is also quite important and (thankfully) court orders (that make up the bulk of case law) are also in the public domain in Norway. However, Lovdata charges an absurd $1,500 per year to access those decisions. And, it claims a database right* on the collection it makes available online.

2. And yet, Wium Lie is still being sued. Why? When he saw that the website rettspraksis.no was trying to collect and publish these decisions, he borrowed Lovdata CD-ROMs from the National Library in Oslo. He borrowed the 2002 version of the CD-ROM. This date is important, because the EU's database rights last for... 15 years. 2002 databases (and, yes, Wium Lie points out that it's odd to call a stack of documents a database...) are no longer protected by the database rights.

3. So, yeah, the data is clearly in the public domain, and Wium Lie didn't violate anyone's copyright or database rights. Wium Lie notes that Lovdata didn't even try to contact him or rettspraksis.no before suing, but just told the court that they must be scraping the expensive online database:

I'm very surprised that Lovdata didn't contact us to ask us where we had copied the court decisions from. In the lawsuit, they speculate that we have siphoned their servers by using automated «crawlers». And, since their surveillance systems for detecting siphoning were not triggered, our crawlers must have been running for a very long time, in breach of the database directive. The correct answer is that we copied the court decisions from the old discs I found in the National Library. We would have told them this immediately if they had simply asked.

4. This is the most perplexing part of all of this. I can't read the Norwegian verdict (which, for Lovdata's lawyers, I did not get by scraping your site!), and I don't know enough about Norwegian law, but this seems positively bizarre to me. It appears to go against fundamental concepts of basic due process. How could a judge come out with a verdict like this?

5. ?!?>#$@!%#!%!@!%!#%!!

6. Again: is this how due process works in Norway? In the US, of course, there are things like preliminary injunctions that might be granted pretty quickly, but even then -- especially when it comes to gagging speech -- there is supposed to be at least some element of due process. Here there appears to have been almost none. Furthermore, in the US, this kind of thing would only be allowed if one side could show irreparable harm from leaving the site up. It is difficult to see how anyone could legitimately claim irreparable harm from the publication of the country's own (public domain) court rulings.

I find it shocking that the judge ordered the take down of our website, rettspraksis.no, within 24 hours of the lawsuit being filed and WITHOUT HEARING ARGUMENTS FROM US. (Sorry for switching to CAPS, but this is really important.) We were ready and available to bring forth our arguments but were never given the chance. Furthermore, upon learning of the lawsuit, we, as a precaution, had voluntarily removed our site. If the judge had bothered to check he would have seen that what he was ordering was already done. There should be a much higher threshold for judges to close websites at just the request of some organization.

7. And, even if this was the equivalent of an injunction, to also tell Wium Lie and rettspraksis.no that they need to pay Lovdata's legal fees is just perplexing.

the two of us, the volunteers, were slapped with a $12,000 fee to cover the fees of Lovdata's own lawyer, Jon Wessel-Aas. So, the judge actually ordered that we had to pay the lawyer from the opposite side, WITHOUT HAVING BEEN GIVEN A CHANCE TO ARGUE OUR CASE.

This whole situation is infuriating. Being sued is a horrible experience in the first place. But the details here pile absurd upon preposterous upon infuriating. The whole database rights concept is already a troublesome thing, but this application of it is positively monstrous. Wium Lie now has some good lawyers working for him, and hopefully this whole travesty will get overturned, but what a clusterfuck.

* A separate tangent that I'll just note here rather than cluttering up all of the above. I was a bit confused to read references to the EU's database directive/database rights, because Norway is not part of the EU. However, since it is a part of the European Economic Area (yes -- this can all get confusing), it has apparently agreed to enact legislation that complies with certain EU Directives, including the Copyright and Database Directives.

45 Comments | Leave a Comment..

Posted on Free Speech - 15 June 2018 @ 3:38am

UN Free Speech Expert: EU's Copyright Directive Would Be An Attack On Free Speech, Violate Human Rights

from the don't-let-it-happen dept

We've been writing a lot about the EU's dreadful copyright directive, but that's because it's so important to a variety of issues on how the internet works, and because it's about to go up for a vote in the EU Parliament's Legal Affairs Committee next week. David Kaye, the UN's Special Rapporteur on freedom of expression, has now chimed in with a very thorough report, highlighting how Article 13 of the Directive -- the part about mandatory copyright filters -- would be a disaster for free speech and would violate the UN's Universal Declaration of Human Rights, and in particular Article 19, which (in case you don't know) says:

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

As Kaye's report notes, the upload filters of Article 13 of the Copyright Directive would almost certainly violate this principle.

Article 13 of the proposed Directive appears likely to incentivize content-sharing providers to restrict at the point of upload user-generated content that is perfectly legitimate and lawful. Although the latest proposed versions of Article 13 do not explicitly refer to upload filters and other content recognition technologies, it couches the obligation to prevent the availability of copyright protected works in vague terms, such as demonstrating “best efforts” and taking “effective and proportionate measures.” Article 13(5) indicates that the assessment of effectiveness and proportionality will take into account factors such as the volume and type of works and the cost and availability of measures, but these still leave considerable leeway for interpretation.

The significant legal uncertainty such language creates does not only raise concern that it is inconsistent with the Article 19(3) requirement that restrictions on freedom of expression should be “provided by law.” Such uncertainty would also raise pressure on content sharing providers to err on the side of caution and implement intrusive content recognition technologies that monitor and filter user-generated content at the point of upload. I am concerned that the restriction of user-generated content before its publication subjects users to restrictions on freedom of expression without prior judicial review of the legality, necessity and proportionality of such restrictions. Exacerbating these concerns is the reality that content filtering technologies are not equipped to perform context-sensitive interpretations of the valid scope of limitations and exceptions to copyright, such as fair comment or reporting, teaching, criticism, satire and parody.

Kaye further notes that copyright is not the kind of thing that an algorithm can readily determine, and the fact-specific and context-specific nature of copyright requires much more than just throwing algorithms at the problem -- especially when a website may face legal liability for getting it wrong. And even if the Copyright Directive calls for platforms to have remediation processes, that takes the question away from actual due process on these complex issues.

The designation of such mechanisms as the main avenue to address users’ complaints effectively delegates content blocking decisions under copyright law to extrajudicial mechanisms, potentially in violation of minimum due process guarantees under international human rights law. The blocking of content – particularly in the context of fair use and other fact-sensitive exceptions to copyright – may raise complex legal questions that require adjudication by an independent and impartial judicial authority. Even in exceptional circumstances where expedited action is required, notice-and-notice regimes and expedited judicial process are available as less invasive means for protecting the aims of copyright law.

In the event that content blocking decisions are deemed invalid and reversed, the complaint and redress mechanism established by private entities effectively assumes the role of providing access to remedies for violations of human rights law. I am concerned that such delegation would violate the State’s obligation to provide access to an “effective remedy” for violations of rights specified under the Covenant. Given that most of the content sharing providers covered under Article 13 are profit-motivated and act primarily in the interests of their shareholders, they lack the qualities of independence and impartiality required to adjudicate and administer remedies for human rights violations. Since they also have no incentive to designate the blocking as being on the basis of the proposed Directive or other relevant law, they may opt for the legally safer route of claiming that the upload was a terms of service violation – this outcome may deprive users of even the remedy envisioned under Article 13(7). Finally, I wish to emphasize that unblocking, the most common remedy available for invalid content restrictions, may often fail to address financial and other harms associated with the blocking of time-sensitive content.

He goes on to point out -- as we have -- that while large platforms may be able to deal with all of this, smaller ones are going to be in serious trouble:

I am concerned that the proposed Directive will impose undue restrictions on nonprofits and small private intermediaries. The definition of an “online content sharing provider” under Article 2(5) is based on ambiguous and highly subjective criteria such as the volume of copyright protected works it handles, and it does not provide a clear exemption for nonprofits. Since nonprofits and small content sharing providers may not have the financial resources to establish licensing agreements with media companies and other right holders, they may be subject to onerous and legally ambiguous obligations to monitor and restrict the availability of copyright protected works on their platforms. Although Article 13(5)’s criteria for “effective and proportionate” measures take into account the size of the provider concerned and the types of services it offers, it is unclear how these factors will be assessed, further compounding the legal uncertainty that nonprofits and small providers face. It would also prevent a diversity of nonprofit and small content-sharing providers from potentially reaching a larger size, and result in strengthening the monopoly of the currently established providers, which could be an impediment to the right to science and culture as framed in Article 15 of the ICESCR.

It's well worth reading the whole thing. I don't know if this will have more resonance with the members of the EU Parliament's Legal Affairs Committee, but seeing as they keep brushing off or ignoring most people pointing out these very same points, one hopes that someone in Kaye's position will at least get them to think twice about continuing to support such a terrible proposal.

17 Comments | Leave a Comment..

Posted on Techdirt - 14 June 2018 @ 1:27pm

Once Again Congress Votes Proactively To Keep Itself Ignorant On Technology

from the a-series-of-tubes dept

Four years ago, we wrote about the House voting to keep itself ignorant on technology, and unfortunately, I can now basically just rerun that post again, with a few small tweaks, so here we go:

The Office of Technology Assessment existed in Congress from 1972 until 1995, when it was defunded by the Newt Gingrich-led "Contract with America" team. The purpose was to actually spend time to analyze technology issues and to provide Congress with objective analysis of the impact of technology and the policies that Congress was proposing. Remember how back when there was the big SOPA debate and folks in Congress kept talking about how they weren't nerds and needed to hear from the nerds? Right: the OTA was supposed to be those nerds, but it hasn't existed in over two decades -- even though it still exists in law. It just isn't funded.

Rep. Mark Takano (in 2014 it was Rush Holt) thought that maybe we should finally give at least a little bit of money to test bringing back OTA and to help better advise Congress. While some would complain about Congress spending any money, this money was to better inform Congress so it stopped making bad regulations related to technology, which costs a hell of a lot more than the $2.5 million Takano's amendment proposed. Also, without OTA, Congress is much more reliant on very biased lobbyists, rather than a truly independent government organization.

The fact that we're seeing this kind of nonsense in Congress should show why we need it:

A quartet of tech experts arrived at a little-noticed hearing at the U.S. Capitol in May with a message: Quantum computing is a bleeding-edge technology with the potential to speed up drug research, financial transactions and more.

To Rep. Adam Kinzinger, though, their highly technical testimony might as well have been delivered in a foreign language. “I can understand about 50 percent of the things you say,” the Illinois Republican confessed.

But, alas, like so many things in Congress these days, the issue of merely informing themselves has become -- you guessed it -- partisan. The amendment failed 195 to 217 on mostly partisan lines (15 Republicans voted for it vs. 211 against, and only 6 Democrats voted against it, while 180 voted for it). If there's any silver lining, that's slightly better than in 2014, when a similar vote failed 164 to 248. So... progress?

Either way, when Congress is ignorant, we all suffer. That so many in Congress are voting to keep themselves and their colleagues ignorant should be seen as a problem.

33 Comments | Leave a Comment..

Posted on Techdirt - 14 June 2018 @ 3:21am

European Citizens: You Stopped ACTA, But The New Copyright Directive Is Much, Much Worse: Speak Up

from the protect-the-internet dept

It's understandable that people are getting fatigued from all the various attacks on the internet, but as I've noted recently, one of the biggest threats to our open internet is the incredibly bad Copyright Directive that is on the verge of being voted on by the EU Parliament's Legal Affairs Committee. The Directive is horrible on many fronts, and we've been highlighting two key ones. First, the dangerous link tax and, second, the mandatory upload censorship filters. Each of these could have major ramifications for how the internet will function.

Incredibly, both are driven mainly by animus towards Google from legacy industries that feel left behind. The link tax is the brainchild of various news publishers, while the upload filters are mainly driven by the recording industry. But, of course, what should be quite obvious at this point is that both of these ideas will only make Google stronger while severely limiting smaller competitors. Google can pay the link tax. Google has already built perhaps the most sophisticated content filtering system (which still sucks). Nearly everyone else cannot. So, these moves don't hurt Google. They hurt all of Google's possible competitors (including many European companies).

Six years ago, the EU faced another horrible copyright plan: ACTA, the "anti-counterfeiting trade agreement," pushed (note a pattern here) by legacy copyright industries looking to expand copyright law in a misguided attack on Google. Like this time, the horrible plan was mainly pushed by the EU Commission. But with ACTA, the EU Parliament stepped up and rejected it. However, that only happened after citizens hit the streets all over Europe to protest ACTA.

It is unrealistic to expect everyone to take to the streets every time politicians are about to do something bad to the internet or to copyright law. That's not going to happen. But the new Copyright Directive is significantly worse than anything that was in ACTA, and if the EU Parliament doesn't realize that by next week, the internet we know and love may be fundamentally changed in a way that we will all come to regret. I mentioned these already, but check out SaveYourInternet.eu, ChangeCopyright.org and SaveTheLink.org.

You can (and should) also follow MEP Julia Reda who has been leading the charge against these awful proposals and who has been posting how to help stop it on her website and on her Twitter feed. You can also listen to Reda discuss all of this on our podcast.

7 Comments | Leave a Comment..

Posted on Techdirt - 13 June 2018 @ 3:40pm

'Transparent' FCC Doesn't Want To Reveal Any Details About Ajit Pai's Stupid Reese's Mug

from the bringing-transparency-back dept

One of FCC Chair Ajit Pai's claims about how he's changed the FCC is that he's making it more transparent. And, to be fair, he did make one key change that his predecessors failed to make: releasing the details of rulemakings before they're voted on. That was good. But in so many other ways, Pai has been significantly less than transparent. And this goes all the way down to incredibly stupid things, like his silly giant Reese's coffee mug. That mug is so famous that even John Oliver mocked it in his story on net neutrality:

Taylor Amarel had some questions about the mug, and made a FOIA request using Muckrock that might shed some light on the mug (and, perhaps, a few other things):

I would like to obtain all emails sent to, from, or copied to Deanne Erwin, Executive Assistant, containing any of the following non-case-sensitive key-strings: “reeses”, “ethics”, “mug”, “liberals”, or “Reese’s” from January 1, 2017 to present day.

But the wonderfully "transparent" Ajit Pai... apparently didn't want that. The FCC's General Counsel sent back an oddly accusatory email to Amarel, demanding a ridiculous amount of completely unnecessary information -- claiming it needed that info to assess fees to respond to the FOIA request:

In our attempts to discern your fee categorization, we became aware that the name you provided, Taylor Amarel, is likely a pseudonym. In order to proceed with your request, please provide us with your name, your personal mailing address, and a phone number where you can be reached.... We ask that you provide this information by May 29, 2018. If we do not hear from you by then, we will assume you are unwilling to provide this information and will close your requests accordingly.

As Muckrock noted, there is no reason why anyone should need to prove that they are using their real name or to provide all this personal info to the FCC, and it feels like an intimidation technique. Muckrock does note that such info might be useful in determining if Amarel should be granted media status, which might help waive fees, but Amarel did not request to be covered under such status.

Amarel handed over the info... and was then told that it would cost $233 to get the emails related to Pai's Reese's mug. Using Muckrock's own crowdfunding platform, users chipped in to cover the cost, so hopefully at some point the FCC will live up to its legally required transparency and tell us about that stupid mug.

26 Comments | Leave a Comment..

Posted on Techdirt - 13 June 2018 @ 10:44am

Hey Google: Stop Trying To Patent A Compression Technique An Inventor Released To The Public Domain

from the being-evil dept

For the most part, Google has actually been one of the good guys on patent issues. Unlike some other Silicon Valley companies, Google has long resisted using its patents to go after others, instead only using the patents defensively. It has also fought for patent reform and experimented with new models to keep its own patents out of the hands of patent trolls. But it's been involved in an ongoing fight to patent something that an earlier inventor deliberately released into the public domain, and it reflects incredibly poorly on Google to keep fighting for this.

A Polish professor, Jarek Duda, came up with a new compression technique known as asymmetric numeral systems (ANS) years back, and decided to release it to the public domain, rather than lock it up. ANS has turned out to be rather important, and lots of companies have made use of it. Last summer, Duda noticed that Google appeared to be trying to patent the idea both in the US and around the globe.
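For readers unfamiliar with ANS, a toy version of its range variant (rANS) fits in a few lines. This is an illustrative sketch of my own, not Duda's reference code or anything from Google's application; it uses Python's big integers in place of the state renormalization and bit I/O that production codecs add, and the frequencies and function names are invented for the example.

```python
# Toy rANS (range variant of asymmetric numeral systems).
# Illustrative only: real codecs renormalize the state and emit bits;
# here Python's arbitrary-precision integers stand in for that.

def build_tables(freqs):
    """freqs: dict mapping symbol -> integer frequency."""
    total = sum(freqs.values())
    cum, acc = {}, 0
    for sym in sorted(freqs):
        cum[sym] = acc          # cumulative frequency (start of symbol's range)
        acc += freqs[sym]
    return total, cum

def encode(message, freqs):
    total, cum = build_tables(freqs)
    x = 1                               # initial state
    for s in reversed(message):         # rANS encodes back-to-front
        x = (x // freqs[s]) * total + cum[s] + (x % freqs[s])
    return x

def decode(x, length, freqs):
    total, cum = build_tables(freqs)
    out = []
    for _ in range(length):
        slot = x % total                # identifies which symbol's range we're in
        s = next(k for k in cum if cum[k] <= slot < cum[k] + freqs[k])
        x = freqs[s] * (x // total) + slot - cum[s]
        out.append(s)
    return out

freqs = {'a': 3, 'b': 1}                # skewed frequencies compress better
msg = list('aabab')
assert decode(encode(msg, freqs), len(msg), freqs) == msg
```

The trick that made ANS attractive is visible even in this toy: frequent symbols grow the state by less, so the encoder approaches the entropy of the source using only integer arithmetic, without bit-by-bit arithmetic coding.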

Tragically, this happened just weeks after Duda had called out other attempts to patent parts of ANS, and specifically said he hoped that companies "like Google" would stand up and fight against such attempts. Three weeks later he became aware of Google's initial patent attempt and noted "now I understand why there was no feedback" on his request to have companies like Google fight back against attempts to patent ANS. In that same thread, he details how there is nothing new in that patent, and calls it "completely ridiculous." Despite noting that he can't afford to hire a patent lawyer, he's been trying to get patent offices to reject this patent, wasting a bunch of time and effort.

While a preliminary ruling in Europe appeared to side with Duda, accepting his evidence of prior art, Google is still fighting against that ruling and is continuing its efforts to patent the same thing in the US. This is getting new attention now after Tim Lee at Ars Technica wrote about the story, but it's been covered elsewhere in the past, including getting lots of attention on Reddit a year ago and Hacker News soon after that.

Google's responses to Lee at Ars Technica are simply ridiculous. First, it claimed that Duda's invention was merely "a theoretical concept" while it is trying to patent "a specific application of that theory that reflects additional work by Google's engineers." But if you read through the analysis by many people who understand the space, that doesn't appear to be the case. There's very little that appears "new" in the Google patent, or non-obvious based on what Duda and others had already disclosed.

Google's second response is even more nonsensical:

"Google has a long-term and continuing commitment to royalty-free, open source codecs (e.g., VP8, VP9, and AV1) all of which are licensed on permissive royalty-free terms, and this patent would be similarly licensed."

While that's true, that's no excuse for locking up what's in the public domain and promising to treat it nicely.

The thing is, there is simply no reason for Google to continue down this path. Again, the company has almost never been an aggressor on patents, preferring to use them defensively. And it can still do that here -- by just pointing to the public domain to invalidate anyone else's attempt to patent this. The fact that Google is being slammed in various forums over this (and has been since a year ago) should have clued the company in to the fact that (1) this isn't necessary and (2) harming its own reputation with engineers just to secure a patent it doesn't need is not a good idea.

Google has tons of patents. It doesn't need this one. If it really thinks that its own invention here goes beyond what Duda did -- and Ars Technica notes that Google ignored multiple requests to explain what is different in its patent application -- then the company needs to be much more transparent and upfront about what is different from Duda's work and the company can just as easily release the same information to the public domain as well. Yes, that would be giving up on one patent, but Google can survive donating a patentable idea to the public domain if it actually has one.

41 Comments | Leave a Comment..

Posted on Free Speech - 12 June 2018 @ 12:03pm

High School Student's Speech About Campus Sexual Assault Gets Widespread Attention After School Cuts Her Mic

from the streisand-high dept

It's that time of year when kids are graduating from high school, and the age-old tradition of the valedictorian speech is happening all around the country. While exciting for the kids, families and other students, these kinds of speeches are generally pretty quickly forgotten and certainly tend not to make the national news. However, in nearby Petaluma, California, something different is happening, all because a bunch of spineless school administration officials freaked out that the valedictorian, Lulabel Seitz, wanted to discuss sexual assault. During her speech, the school cut her mic when she started talking about that issue (right after she talked about how the whole community had worked together and fought through lots of adversity, including the local fires that ravaged the area a few months back). Seitz has since posted a video showing both her mic being cut off and then her giving the entire speech directly to a camera.

And, of course, now that speech -- and the spineless jackasses who cut the mic -- are getting national news coverage. The story of her speech and the mic being cut has been on NPR, CBS, ABC, CNN, Time, the NY Post, the Washington Post and many, many more.

In the ABC story, she explains that they told her she wasn't allowed to "go off script" (even pulling her out of a final exam to tell her they had heard rumors she was going to go off script and that she wasn't allowed to say anything negative about the school) and that's why the mic was cut, even though the school didn't know what she was going to say. She also notes -- correctly -- that it was a pretty scary thing for her to continue to go through with the speech she wanted to give, despite being warned (for what it's worth, decades ago, when I was in high school, I ended up in two slightly similar situations, with the administration demanding I edit things I was presenting -- in one case I caved and in one I didn't -- and to this day I regret caving). Indeed, she deserves incredible kudos for still giving her speech, and it's great to see the Streisand Effect make so many more people aware of (1) her speech and (2) what a bunch of awful people the administrators at her school are for shutting her speech down.

As for the various administrators, their defense of this action is ridiculous. They're quoted in a few places, but let's take the one from the Washington Post:

“In Lulabel’s case, her approved speech didn’t include any reference to an assault,” [Principal David Stirrat] said. “We certainly would have considered such an addition, provided no individuals were named or defamed.”

As Seitz notes, she never intended to name names, and the school had told her so many times not to talk about these things it was obvious to her that she wouldn't have been able to give that speech if she had submitted the full version. In the ABC interview she explained that rather than just letting the valedictorian speak as normal, the school had actually told her she had to "apply" to speak.

Dave Rose, an assistant superintendent, told the Press Democrat that he could remember only one other time that administrators had disconnected a microphone during a student’s graduation speech in the past seven years, but said he believed it was legal.

“If the school is providing the forum, then the school has the ability to have some control over the message,” Rose said.

Actually, that's not how the First Amendment works. Schools can limit some things, but not if it's based on the content of the message, which appears to be the case here. Of course, I doubt that Seitz is going to go to court over this as it's not worth it, but thanks to the Streisand Effect, she doesn't need to. The world has learned about her speech... and about how ridiculous the administrators are in her school district.

76 Comments | Leave a Comment..

Posted on Techdirt - 12 June 2018 @ 9:29am

Ending The Memes: EU Copyright Directive Is No Laughing Matter

from the it's-bad dept

On Friday, I wrote about all of the many problems with the link tax part of the proposed EU copyright directive -- but that's only part of the problem. The other major concern is around mandatory upload filters. As we discussed with Julia Reda during last week's podcast, the upload filters may be even more nefarious. Even the BBC has stepped up with an article about how it could put an end to memes. While that might be a bit of an exaggeration, it's only just a bit exaggerated. Despite the fact that the E-Commerce Directive already makes it clear that platforms should not be liable for content placed on their platforms by users absent any notice and that there can be no "general monitoring" obligation, the proposal for Article 13 would require that all sites have a way to block copyright-covered content from being uploaded without permission of the copyright holder.

As per usual, this appears to have been written by those who have little understanding of how the internet itself works, or how this will impact a whole wide variety of services. Indeed, there's almost nothing that makes any sense about it at all. Even if you argue that it's designed to target the big platforms -- the Googles and Facebooks of the world -- it makes no sense. Both Google and Facebook already implement expensive filtering systems because they decided it was good for their business to do so at their scale. And even if you argue that it makes sense for platforms like YouTube to put in place filters, it doesn't take into account many factors about what copyright covers, and the sheer impossibility of making filters that work across everything.

How would a site like Instagram create a working filter? Could it catch direct 100% copies? Sure, probably. But what if you post a photo to Instagram of someone standing in a room that has a copyright-covered photograph or painting on the wall? Does that need to be blocked? What about a platform like Github where tons of code is posted? Is Github responsible for managing every bit of copyright-covered code and making sure no one copies any of it? What about sites that aren't directly about the content, but which involve copyright-covered content, such as Tinder. Many of the photos of people on Tinder are covered by copyright, often held by a photographer, rather than the uploader. Will Tinder need to put in place a filter that blocks all of those uploads? Who will that be helping exactly? How about a blog like ours? Are we going to be responsible to make sure no one posts a copyright-covered quote in the comments? How are we to design and build a database of all copyright-covered content to block such uploads (and won't such a database potentially create an even larger copyright question in the first place)? What about a site like Airbnb? What if a photo of a home on Airbnb includes copyright-covered content in the background? Kickstarter? Patreon? I'm not sure how either service (which, we should remind you, both help artists get paid) can really function if this becomes law. Would they need a filter to block creators from uploading their own works?

And that leaves out even more fundamental questions: how do filters handle things like fair use? Or parody? To date, they don't. Making such filters mandatory, even for smaller sites, would be a complete and total disaster for how the internet works.

This is why it is not hyperbolic at all to suggest that this change to how the EU looks at copyright could have massive consequences for how the internet functions. At the very least, it is likely to limit the places where users can participate, because it will price out tons of services. It takes the internet far, far away from its core as a communications platform and moves it more and more toward one that is broadcast only. Perhaps that's what the EU really wants, but at least the discussion should be honest on that point. So far, it is not. The debate goes over the usual grounds, claiming that copyright holders are somehow being ripped off by the internet -- though that is stated without evidence. If the EU wants to fundamentally change how the internet works, it should at least justify those changes with something real and be willing to explain why those changes are acceptable. To date, that has not happened.

Internet companies are trying to speak out about this, but many are so busy fighting other fires -- such as the net neutrality repeal here in the US -- that it's difficult to run over to Europe to point out just how moronic this is. Automattic (the people who make WordPress) have put out a big statement about the problems with this plan that is well worth reading:

We’re against the proposed change to Article 13 because we have seen, first-hand, the dangers of relying on automated tools to police nuanced speech and copyright issues. Bots or algorithms simply cannot determine whether a blog post, photo in a news article, or video posted to a website is copyright infringement or legitimate use. This is especially true on a platform like wordpress.com, where copyrighted materials are legitimately posted in the context of news articles, commentary, criticism, remixing, memes — thousands of times per day.

We’ve also seen how copyright enforcement, without adequate procedures and safeguards to protect free expression, skews the system in favor of large, well-funded players, and against those who need protection the most: individual website owners, bloggers, and small publishers who don’t have the resources or legal wherewithal to defend their legitimate speech.

Based on our experience, the changes to Article 13, while well-intentioned will almost certainly lead to a flood of unintended, but very real, censorship and chilling of legitimate, important, online speech.

Reddit has also put out a statement:

Article 13 would force internet platforms to install automatic upload filters to scan (and potentially censor) every single piece of content for potential copyright-infringing material. This law does not anticipate the difficult practical questions of how companies can know what is an infringement of copyright. As a result of this big flaw, the law’s most likely result would be the effective shutdown of user-generated content platforms in Europe, since unless companies know what is infringing, we would need to review and remove all sorts of potentially legitimate content if we believe the company may have liability.

Finally, a bunch of internet luminaries, including Tim Berners-Lee, Vint Cerf, Brewster Kahle, Katherine Maher, Bruce Schneier, Dave Farber, Pam Samuelson, Mitch Kapor, Tim O'Reilly, Guido van Rossum, Mitchell Baker, Jimmy Wales and many, many more have put out quite a statement on how bad this is:

In particular, far from only affecting large American Internet platforms (who can well afford the costs of compliance), the burden of Article 13 will fall most heavily on their competitors, including European startups and SMEs. The cost of putting in place the necessary automatic filtering technologies will be expensive and burdensome, and yet those technologies have still not developed to a point where their reliability can be guaranteed. Indeed, if Article 13 had been in place when the Internet’s core protocols and applications were developed, it is unlikely that it would exist today as we know it.

The impact of Article 13 would also fall heavily on ordinary users of Internet platforms—not only those who upload music or video (frequently in reliance upon copyright limitations and exceptions, that Article 13 ignores), but even those who contribute photos, text, or computer code to open collaboration platforms such as Wikipedia and GitHub.

Scholars also doubt the legality of Article 13; for example, the Max Planck Institute for Innovation and Competition has written that “obliging certain platforms to apply technology that identifies and filters all the data of each of its users before the upload on the publicly available services is contrary to Article 15 of the InfoSoc Directive as well as the European Charter of Fundamental Rights.”

It doesn't have to be this way. There are campaign pages for those in Europe to contact their MEPs at SaveYourInternet.eu and ChangeCopyright.org. As it stands, the EU's Legal Affairs Committee will vote on this proposal next week. If it passes intact, it's very likely that this will become official, and all EU member countries will need to change their laws to enable this ridiculous and counterproductive plan.

There are, of course, all sorts of threats to the internet. In the past, SOPA/PIPA and ACTA would have changed fundamental concepts. Here in the US, we've just dumped net neutrality in the garbage. How the internet is shaped post-GDPR is still being figured out. But I can't think of a greater threat to the basic functioning of the internet than the current proposal in the EU right now. And, yet, it seems to not be getting nearly as much attention as those other things. Perhaps we're all fatigued from the other threats to the internet. But we need to wake up and speak out, because this one is worse. It will fundamentally change massive parts of how the internet works -- and almost all of it is designed to make it incredibly difficult to run an internet site that allows for any public participation at all.

If you're not in the EU, you can still speak up, and hopefully some Members of the European Parliament will pay attention. The world is watching what the EU Parliament does next week to the internet. If it goes along with the plan, it will stamp out innovation and free speech, and basically hand a huge gift to a small group of large media players who never liked the disruptive nature of the internet in the first place -- and who are now gleeful that EU regulators have more or less gone along with their plan to stamp out what makes the internet so wonderful. We've heard that some EU Parliament members are getting at least a little concerned because of the noise people are making about this, but it's time to make them very concerned. They are trying to fundamentally change the internet, and they don't seem to care about or understand what this actually means.


Posted on Techdirt - 12 June 2018 @ 3:41am

EU Explores Making GDPR Apply To EU Government Bodies... But With Much Lower Fines

from the good-for-the-goose,-not-so-good-for-the-gander dept

We recently wrote about how various parts of the EU governing bodies were in violation of the GDPR, to which they noted that the GDPR doesn't actually apply to them for "legal reasons." In most of the articles about this, however, EU officials were quick to explain that there would be new, similar regulations that did apply to EU governing bodies. Jason Smith at the site Indivigital, who kicked off much of this discussion by discovering loads of personal info on people hosted on EU servers, has a new post up looking at the proposals to apply GDPR-like regulations to the EU governing bodies themselves.

There are two interesting points here. First, when this was initially proposed last year, the plan was to have it come into effect on the very same day the GDPR did: May 25, 2018. It was deemed "essential" that the public understand that the EU itself was complying with the same rules as everyone else.

Essential however, from the perspective of the individual, is that the common principles throughout the EU data protection framework be applied consistently irrespective of who happens to be the data controller. It is also essential that the whole framework applies at the same time, that is, in May 2018, deadline for GDPR to be fully applicable.

Guess what didn't happen? Everything in the paragraph above. The EU forced everyone else to comply by May of this year, but gave itself extra time -- time in which it is not complying with the rules and brushing that off as no big deal, while simultaneously telling everyone else that compliance is easy.

Also, while the GDPR imposes incredible fines on those who fail to comply... the fines if the EU itself doesn't comply (if this rule ever actually goes into effect) are much more limited. Under the GDPR, companies can be fined 20 million euros or 4% of revenue, whichever is higher, meaning that any smaller company can be put out of business, but the plan for the EU itself is for fines to top out at €50,000 per mistake, with a cap of €500,000 per year.
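To put that asymmetry in numbers, here's a quick sketch using only the figures above (the example company's revenue is invented for illustration):

```python
# GDPR exposure for a company: EUR 20M or 4% of annual revenue,
# whichever is higher (figures as described in the article).
def company_max_fine(annual_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_revenue_eur)

# Proposed yearly fine cap for EU governing bodies themselves.
EU_BODY_ANNUAL_CAP = 500_000

# A hypothetical company with EUR 100M in revenue still faces the EUR 20M floor...
assert company_max_fine(100_000_000) == 20_000_000

# ...forty times the most the EU proposes it could owe in an entire year.
assert company_max_fine(100_000_000) / EU_BODY_ANNUAL_CAP == 40
```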

Must be nice when you're the government and can make different rules for yourself, while mocking anyone who thinks that the rules for everyone else are a bit too aggressive and onerous.


Posted on Net Neutrality Special Edition - 11 June 2018 @ 10:44am

What Ajit Pai Should Have Said About Killing Net Neutrality... And Why It Still Would Have Been Wrong

from the but-he-didn't,-which-says-it-all dept

As you've probably heard, today's the day that the fairly straightforward and non-onerous net neutrality rules put in place by Tom Wheeler back in 2015 are officially taken off the books. We've posted a ton about net neutrality and will post a lot more, but I've been thinking about a few things related to how all of this went down that seem worth discussing. As background reading, it might help to first read why I changed my mind about net neutrality -- from originally being against the FCC setting any rules to eventually being for a fairly limited set of rules. However, what really inspired this post was the podcast conversation I had with Barry Eisler back in December about the lost art of productive debate. One of the points that Barry made was that it's especially easy in these crazy social media-driven times to argue against someone by taking the absolute worst or most extreme version of their argument and then destroying that. As he notes, as a former practicing lawyer, it's the kind of thing he was trained to do. However, he suggested that the more intellectually honest way of holding a debate is to actually reframe the argument and present back to the person what their best argument appears to be, and then debate that.

That can be difficult to do -- but let's take a shot. Because the arguments that Pai and his supporters have given so far for wiping net neutrality off the books really don't make any sense.

The core of Pai's argument to date has been that the net neutrality rules that were put in place in 2015 created a massive regulatory burden on broadband access providers, leading them to decrease their investment. In a new interview he suggests that decreasing the regulatory burden on broadband access providers will have some sort of trickle down effect on everyone else: "I think ultimately it’s going to mean better, faster, cheaper Internet access and more competition."

But if that's actually the case, it seems that Pai should back up that statement with a further explanation. The broadband market is about infrastructure, and if you understand the economics of infrastructure there are a number of different factors at play. For example, it seems quite reasonable to point out that both the net neutrality rules and the lack of net neutrality rules create some winners and losers. And it would be reasonable to point out that the FCC shouldn't be the ones picking those winners and losers (though, that would be a tough sell, given the FCC's entire charter is basically to be doing that). An intellectually honest description of the net neutrality rules would note that there are multiple competing interests at play: the interests of broadband infrastructure players (big telcos/cablecos), the interests of upstart competitors (smaller ISPs), the interests of big services on the internet, the interests of small services on the internet, and the interests of the public. And you could (and perhaps should) look at the impact on each of those and note who is likely to benefit and who is not.

And then, if we were having an honest debate, Pai could note that he believes, strongly, that empowering the biggest broadband infrastructure players with leverage to create differentiated services on their network would provide benefits that outweigh the damage that might do to others. Pai sort of tries to make that argument with his statement about "better, faster, cheaper Internet access and more competition", but he fails to describe any real mechanism for that to actually happen -- other than suggesting that merely removing the regulations will magically lead to it. So, let's try to create the mechanism by which this might happen. You could say that... freed from regulations, the big broadband providers will have more leeway to experiment with alternative business models and new technologies, which will create a faster and better internet, including different types of access and different levels of service for different users. And maybe you could then argue that this would also somehow allow for new entrants, because they could attack the market in disruptive ways.

Another argument Pai could have -- and perhaps should have -- made would involve how the market for broadband actually works. Last year, in posting the free market case for net neutrality, we noted that there are cases where even the staunchest free market economists out there have recognized that government intervention can make some sense, mainly in getting uncompetitive markets unstuck. Pai could have tried to rebut the various assumptions in that piece as well. For example, he could argue that broadband is not, in fact, a natural monopoly, and present some evidence for why the market was increasingly competitive -- and even argue that knocking out net neutrality rules would enable brand new broadband infrastructure investment to occur by enabling brand new competitive forms of internet access.

If he said all of that, then we could be having an honest debate about getting rid of the net neutrality rules. So, now, let's point out why even if he said all of the above, he'd still be incorrect (this may be why he didn't say all of the above, because he understands these arguments don't hold up to much scrutiny, and it's much easier to stick with a voodoo trickle-down "broadband regulation bad!" form of argument -- but that's getting away from the intent of this post). So, first let's tackle the "get rid of regulation to help everyone" argument.

As we've discussed at length, the one argument that Pai has made strongly -- that broadband investment is down because of net neutrality -- is not even remotely close to true. Just looking at broadband companies and their Wall Street statements (in which they face serious penalties for lying), you find that there is no decline in broadband investment due to the net neutrality regulations, at all. So the idea that removing these "barriers" will somehow lead to an increase in investment doesn't make any sense.

Second, even going beyond the pure "less regulation means more investment" argument, if the market is not at all competitive (and it is not), then what incentive do the broadband players really have to invest and innovate here? As we've seen time and time again, the real incentive for innovation is greater competition. But we're actually seeing less and less competition in the broadband market, which means that broadband providers have greater and greater incentive to just protect their own monopolistic position and cheap out on customer service and innovation.

Third, if we look at the various players in the market, it's disturbing that Pai only seems to take into account the interests of the largest broadband providers. The smaller broadband providers have made it clear that net neutrality actually improves competition because, without it, the largest broadband providers are better positioned to cut the necessary deals with Netflix, Amazon, Google, Apple, Facebook and others, while the smaller players won't be able to do so. That means that the bigger players will be able to contractually make smaller broadband companies provide a worse overall experience to their users, again further cementing the lack of competition. And while Pai claims that net neutrality harms smaller ISPs, the FCC's own data shows the opposite.

Fourth, so much of the argument that we have to focus on broadband infrastructure investment of the largest players is a broken windows fallacy. As I pointed out in a post last year, if broadband infrastructure investment is the only metric we're using here, the FCC's best plan would be to physically destroy the internet. After all, that would stimulate an awful lot of new investment. You just have to ignore all the costs it would create for everyone else.

And that's the part that's most disturbing here. Pai seems to be deliberately focusing only on the big broadband players and what this means for them -- without caring about what it means for everyone else. Having clear and simple net neutrality rules in place helps lots and lots of internet services, because it means they get to compete on a level playing field against the Googles, Facebooks, Amazons and Netflixes of the world. Those companies can pay to get preferential treatment. The startups cannot -- and Pai has yet to explain how this state of affairs helps those startups. Just saying, "well you'll have more internet," isn't an answer.

Indeed, a strong argument can be made that harming the ability of smaller companies on the network to compete, in favor of granting much greater discriminatory power to the large networks themselves, does much more harm to competition and a free market than having a set of very limited net neutrality rules in place that give all of the internet companies and entrepreneurs certainty that they can actually build their own businesses.

As for the argument that the market is more and more competitive -- that is clearly not at all true. Pai himself telegraphed this fact by trying to downgrade the definition of broadband to make the market look more competitive.

So, what are we left with? Pai's key argument appears to rest on the broken windows fallacy, by only focusing on broadband infrastructure investment and ignoring all of the investment a layer up on the network and how much that will be harmed by this move. He also is misrepresenting what is happening with infrastructure investment and the level of competition in the market.

In the end, we're left with no real argument at all for why we've just wiped out net neutrality, other than a blind faith that "regulation is bad." That may get some people excited, but hardly seems like a real exploration of the topic. And that's going to matter, because one of the key things that Pai is going to need to defend in court during the various challenges is what material change had happened that required this repeal. And simply falsely claiming that broadband infrastructure spending had dropped is hardly convincing.


Posted on Techdirt - 8 June 2018 @ 1:51pm

Three Takes On Microsoft Acquiring Github

from the confused-ideology dept

As you almost certainly know by now, earlier this week Microsoft announced that it was acquiring Github. There's been plenty of hand-wringing about this. Microsoft has a pretty long history of bad behavior, and many of the developers who use Github have little love for, or trust in, Microsoft, and thus are perhaps reasonably concerned about what will happen. While I'm disappointed that another interesting independent company is being snapped up by a giant, I'm not completely convinced this will be a bad thing in the long run. Microsoft is a fairly different company than it was in the past, and there are reasons to believe it should know enough not to fuck things up. Alternatively, if it does fuck it up, it's really not that hard for a new and innovative company to step into the void (and certainly, others are already jockeying for position to attract disgruntled Github users).

For this post, however, I wanted to point to three different reports in reaction to the news -- because I was fascinated by all three of these takes. More specifically, I found two of them thought-provoking, and one laugh-inducing. And it made me realize just how poorly many non-specialized reporters understand the stuff they're reporting on, and how much greater insight those with a really deep understanding of things can provide. Let's start with the laugh-inducing one, before moving on to the thought-provoking. The hilariously bad take comes in an editorial in the Guardian, which has already been corrected once for falsely claiming that Github was open source software, rather than that it hosted open source software (among other things). But the really insane paragraph is this one:

GitHub, by contrast, grew out of the free software movement, which had similar global ambitions to Microsoft. The confused ideology behind it, a mixture of Rousseau with Ayn Rand, held both that humans are naturally good and that selfishness works out for the best. Thus, if only coders would write and give away the code they were interested in, the results would solve everyone else’s problems. This was also astonishingly successful. The internet now depends on free software.

Confused ideology? Mixture of Rousseau with Ayn Rand? What the fuck are they talking about? And then after noting how free software has been phenomenally successful, it then says this:

But the belief that everyone coding would solve anyone’s problems has been shown up as completely ludicrous. If anything, computer literacy has declined over the generations as computers have got easier to use. In the heyday of Microsoft, almost everyone knew some tricks to make a computer do what it should, because almost everyone had to if they wanted to get anything done. But hardly anyone today has the first idea of programming a mobile phone. They just work. That’s progress, but not in the direction some idealists expected. Significant open source software is now produced almost entirely by giant commercial companies. It solves their problems but could be said to multiply ours. Huge cultural and political changes are presented as technological inevitabilities. They are not. The value of GitHub lies not in the open-source software it hosts, which anyone could copy, but in the trust reposed in it by users. It is culture, not code, that’s worth those billions of dollars.

The whole piece seems premised entirely on a near total misunderstanding of the reasons why people use Github, the ethos of free software, and well... just about everything. Of course it's culture that's important... but it's so odd that this editorial goes out of the way to insult a strawman culture it believes permeates Github, while then claiming that it's what's valuable.

So let's move on to the better takes. I'll start with Paul Ford who is, hands down, the absolute best, most thoughtful, insightful and thought-provoking writer about technology issues around. His piece for Bloomberg Businessweek, entitled GitHub is Microsoft's $7.5 Billion Undo Button, is truly excellent. It not only does one of the best jobs I've seen in explaining Github for the layman, but does so in the context of explaining why this deal makes sense for Microsoft. Amusingly, I think that Ford is making the same point that the Guardian's editorial was trying to make, but the difference is that Ford actually understands the details, whereas whoever wrote the byline-less Guardian editorial clearly does not.

GitHub represents a big Undo button for Microsoft, too. For many years, Microsoft officially hated open source software. The company was Steve Ballmer turning bright colors, sweating through his shirt, and screaming like a Visigoth. But after many years of ritual humiliation in the realms of search, mapping, and especially mobile, Microsoft apparently accepted that the 1990s were over. In came Chief Executive Officer Satya Nadella, who not only likes poetry and has a kind of Obama-esque air of imperturbable capability, but who also has the luxury of reclining Smaug-like atop the MSFT cash hoard and buying such things as LinkedIn Corp. Microsoft knows it’s burned a lot of villages with its hot, hot breath, which leads to veiled apologies in press releases. “I’m not asking for your trust,” wrote Nat Friedman, the new CEO of GitHub who’s an open source leader and Microsoft developer, on a GitHub-hosted web page when the deal was announced, “but I’m committed to earning it.”

But perhaps most interesting in Ford's piece is that, while he understands why Microsoft is doing what it's doing, he's also a bit wistful about how he'd always kind of hoped that Github would become something more -- something more normal, something that applied to much more of what everyone did. While he doesn't directly say it, he implies that that dream probably won't happen with Microsoft in control.

I had idle fantasies about what the world of technology would look like if, instead of files, we were all sharing repositories and managing our lives in git: book projects, code projects, side projects, article drafts, everything. It’s just so damned … safe. I come home, work on something, push the changes back to the master repository, and download it when I get to work. If I needed to collaborate with other people, nothing would need to change. I’d just give them access to my repositories (repos, for short). I imagined myself handing git repos to my kids. “These are yours now. Iteratively add features to them, as I taught you.”

For years, I wondered if GitHub would be able to pull that off—take the weirdness of git and normalize it for the masses, help make a post-file world. Ultimately, though, it was a service made by developers to meet the needs of other developers. Can’t fault them for that. They took something very weird and made it more usable.
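For readers who don't live in git, the daily routine Ford describes maps onto just a handful of commands. Here's a minimal, self-contained sketch, with a local bare repository standing in for the hosted "master repository" (all paths and names here are invented):

```shell
set -e
tmp=$(mktemp -d)

git init --bare "$tmp/book-project.git"     # the shared repo (think: GitHub)

# At home: clone, write, commit, push the changes back.
git clone "$tmp/book-project.git" "$tmp/home"
cd "$tmp/home"
echo "Call me Ishmael." > chapter-01.md
git add chapter-01.md
git -c user.name=Me -c user.email=me@example.com commit -m "Draft opening line"
git push origin HEAD

# At work: clone (or pull) and pick up exactly where you left off.
git clone "$tmp/book-project.git" "$tmp/work"
cat "$tmp/work/chapter-01.md"               # prints: Call me Ishmael.
```

That's the whole "post-file world" in miniature: no attachments, no version-numbered filenames, just one history that every copy converges on.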

The final thought-provoking piece comes from Ben Thompson at Stratechery, who sees the clear business rationale behind Microsoft's decision. Microsoft built its entire business as a platform for developers (whom it sometimes treated terribly...). But as we've moved past a desktop world and into a cloud world, Microsoft has much less pull with developers. Github brings it tons and tons of developers.

Go back to Windows: Microsoft had to do very little to convince developers to build on the platform. Indeed, even at the height of Microsoft’s antitrust troubles, developers continued to favor the platform by an overwhelming margin, for an obvious reason: that was where all the users were. In other words, for Windows, developers were cheap.

That is no longer the case today: Windows remains an important platform in the enterprise and for gaming (although Steam, much to Microsoft’s chagrin, takes a good amount of the platform profit there), but the company has no platform presence in mobile, and is in second place in the cloud. Moreover, that second place is largely predicated on shepherding existing corporate customers to cloud computing; it is not clear why any new company — or developer — would choose Microsoft.

This is the context for thinking about the acquisition of GitHub: lacking a platform with sufficient users to attract developers, Microsoft has to “acquire” developers directly through superior tooling and now, with GitHub, a superior cloud offering with a meaningful amount of network effects. The problem is that acquiring developers in this way, without the leverage of users, is extraordinarily expensive; it is very hard to imagine GitHub ever generating the sort of revenue that justifies this purchase price.

Thompson's piece (among many other good insights) suggests why developers might not need to fear Microsoft's ownership, because of all the potential acquirers, Microsoft probably has the least incentive to ruin Github:

This, by the way, is precisely why Microsoft is the best possible acquirer for GitHub, a company that, having raised $350 million in venture capital, was possibly not going to make it as an independent entity. Any company with a platform with a meaningful amount of users would find it very hard to resist the temptation to use GitHub as leverage; on the other side of the spectrum, purely enterprise-focused companies like IBM or Oracle would be tempted to wring every possible bit of profit out of the company.

What Microsoft wants is much fuzzier: it wants to be developers’ friend, in large part because it has no other option. In the long run, particularly as Windows continues to fade, the company will be ever more invested in a world with no gatekeepers, where developer tools and clouds win by being better on the merits, not by being able to leverage users.

My own take is somewhere between all of these. As soon as I heard the rumor, I started thinking back to the famed Steve Ballmer chant of "Developers, Developers, Developers!"

Microsoft has always needed developers, but in the past it got them by being the center of gravity of the tech universe. A huge percentage of developers were drawn to Microsoft because they had to develop for Microsoft's platform. That allowed Microsoft to get away with a bunch of shady practices that certainly created a bunch of trust issues (Facebook might want to take note of this, by the way). Nowadays, in the cloud world, Microsoft doesn't have that kind of leverage. It's still a massive player, but not one that sucks in everything around it. And, it does have new leadership that seems to understand the different world in which Microsoft operates. So it will be interesting to see where it goes.

But, as someone who believes in the value of reinvention and innovation in the tech industry, it's not necessarily great to see successful mid-tier companies just gobbled up by giants. It happens -- and perhaps it clears the field for something fresh and new. Perhaps it even clears the field for that utopian git-driven world that Ford envisions. But, in the present, it's at least a bit deflating to think that a very different, and very powerful, approach to the way people collaborate and code... ends up in Microsoft's universe.

And, as a final note on these three pieces: this is why we should seek out and promote people who actually understand technology and business to explain what is happening in the technology world. The Guardian piece is laughable because it appears to be written by someone with such a surface-level understanding of open source and free software that it comes off as utter nonsense. But the pieces by Ford and Thompson actually add to our understanding of the news, while providing insightful takes on it. The Guardian (and others) should learn from that.


Posted on Techdirt - 8 June 2018 @ 10:53am

Revenge Porn Dude Craig Brittain Files Hilariously Bad Lawsuit Against Twitter

from the not-how-it-works-craig dept

Ah, Craig Brittain is back. Never quite satisfied to recognize that, after the FTC sanctioned him, his name is the equivalent of Pustule Nickelback McHitler II, he's continued to lead his life of ridiculousness online, alternating between declaring himself a free speech hero and pushing to censor all his critics. And let us not forget his Senate campaign in Arizona, which seemed to focus on Brittain's strategy of insulting lots of people while declaring it obvious that he was going to win. That went so well that, at the end of May, it was revealed that he had failed to get enough signatures and thus is not on the ballot.

Apparently having some extra free time on his schedule, he has sued Twitter, pro se of course. It's a fun read, and extra amusing as it comes just days after Chuck Johnson's lawsuit against Twitter on sorta similar grounds was tentatively tossed out of court. At least Johnson had an actual lawyer file his suit. Brittain's lawsuit, of course, cites the Packingham decision that a bunch of people have been misrepresenting to claim that it says social media can be considered a public forum. Brittain combines his misrepresentation of that opinion with a misrepresentation of the recent decision that President Trump cannot block followers, in order to claim that Twitter can't kick off any political candidate.

This lawsuit implicates Twitter's responsibility as a public forum as recently ruled in Knight First Amendment Institute v. Trump et al... where the honorable Naomi Reice Buchwald, Judge for the Southern District of New York, ruled that President Donald J. Trump must unblock all Twitter users, regardless of the content of their messaging, and also ruled that President Trump's Twitter space is an interactive public forum. The ruling also implicates that Twitter itself is a public forum space under the US Constitution, and thus all First Amendment Protections (must) apply to its use.

Yeah, that's not what that ruling said at all, but, I guess you get points for trying?

In regards to Knight First Amendment Center v. Trump, Defendant must reasonably provide access to that public forum space by unsuspending all users who are followers of President Donald J. Trump or any other public official or candidate, as well as any/all public candidates and officials, whether they are supporters, critics, or neutral to the points of view of the President of the United States or any other candidate or elected official.

Likewise, being as President Donald J. Trump is one of many politicians whose tweets create such a public space, Twitter must extend that same public forum to followers and critics of all US politicians and subsequently all journalistic outlets, in order to protect two-way freedom of speech established by the First Amendment.

Two-way freedom of speech? That's a new one. I'm sure the court will just accept this totally made-up, nonsensical concept, especially right after you totally misrepresent the findings of the Knight Center ruling (in which Twitter wasn't even a party and in which Twitter was not required to do anything). The lawsuit also contains many paragraphs of meaningless nonsense about how Twitter is not a neutral platform, which... has no impact on anything (even if some people -- including some actual Senators -- want to pretend otherwise).

Also, this (capitalization in the original):

The loss of the Accounts is a Crippling Blow to Plaintiffs and Others, and presents a Chilling Effect to the First Amendment and other Constitutional Rights, where a Crippling Blow shall be defined as 'an unconscionable and substantial loss with no defined legal remedy or recourse', and a Chilling Effect shall be defined as 'an action which suppresses similar/related rights including but not limited to the First Amendment rights to access and utilize a public forum for speech as well as the desire of other users to speak out against similar actions, for fear of action(s) such as censorship, suspension/ban, shadowban or downranking being taken against them as well'.

That's all one sentence. Try to say it in a single breath. It's fun. Anyway, this is also not how law works, and clearly there's little need to go step by step over how wrong this is... but I'll just note that when you define your own made-up tort as one "with no defined legal remedy or recourse" you've basically just admitted that your entire lawsuit is bullshit.

Not to be missed is Brittain's discussion of who he is in the "Parties" section, in which he claims that "he has committed himself to reinventing and rehabilitating his life and image" and that Twitter was a necessary component to this. He leaves out the many people who he attacked (disclaimer: including me) with his account(s) over the years. Similarly, he tries to paint himself as a "lifelong champion of free and even dangerous speech as a natural right" while (yet again) ignoring his repeated attempts to abuse the law to try to silence reports of his own history running a revenge porn site, setting up a fake lawyer to demand payments to get pictures off of that site, and the eventual FTC settlement concerning that whole effort. But really, it's the next part that's the most laugh-inducing:

His accumulated total followers (over 400,000) have made him the most popular anarchist/libertarian thinker in world history, where anarchism is defined as 'self-government by peaceful and voluntary interaction and exchange', governed by the Non-Aggression Principle, defined as 'to not harm anyone or their property'.

Got that? He is the most popular anarchist/libertarian thinker in world history. Because he had 400,000 followers (and that's leaving aside news reports that claimed that almost half of Brittain's followers were fake). And make sure you don't miss out on the fact that Brittain is important because some wrestlers followed him on Twitter. That's in there too. It goes on like this for a while. There's also an impressively long section in which Brittain namechecks a bunch of other accounts that Twitter suspended for no clear reason, followed by even more examples where a bunch of people freak out and claim that they've been shadowbanned (even though it's unclear if they actually were). Incredibly, there are then 17 pages (which Brittain lists as a single paragraph in his filing) that repost an EFF brief in the Knight Center case that doesn't actually say what Brittain then pretends it says. This is not an appendix or an exhibit. It's just stuck there right in the middle of Brittain's complaint. This is followed by a lengthy treatise on the fact that President Obama used Twitter, which has no bearing on... well... anything.

At this point, you're on page 60 of the filing and you finally (finally!) get to the first actual cause of action which is, incredibly, "Violation of the First Amendment of the US Constitution." Which, as we've already discussed (and other courts have already found) is nonsense. Twitter is not bound by the First Amendment. That only restricts government entities. There are a bunch of other claims as well, some more nutty than others -- but all of them pretty nutty. The antitrust claim is a personal favorite. The "proof" of monopoly power in that one? The claim that Twitter controls 25% of the US social networking market. Which, uh, is not the definition of a monopoly, but Brittain's suit claims: "Therefore, it can logically be concluded that Defendant is in possession of monopoly power." This statement is not explained any further.

Also: Brittain claims that Twitter is violating CDA 230. CDA 230, of course, being the intermediary liability protection statute that literally explains why this case is nonsense and will get tossed out. It's the part of the law that says Twitter can moderate its platform however it likes. But Brittain tries to twist that... by claiming that because Twitter itself uses Twitter, it is now an information content provider, rather than a service provider, and therefore liable for third party content:

Defendant's protections under 47 U.S. Code §230 stem from its classification as an interactive computer service. However, the presence of @Policy and unequal treatment for its users, as well as the promotion of content it agrees with ("Moments" "Front Page") and the "downranking" of content it disagrees with (to include suspensions and shadowbanning) indicate(s) that Twitter is actually an information content provider. Thereby, Twitter should be declared liable for content which appears on its platform, until at which point it ceases to act as an information content provider, and acts solely as an interactive computer service.

Nice theory. Too bad it's been rejected by basically every court since 230 became law. Courts have (rightly) found that internet services can be both an interactive computer service and an information content provider -- such that they are liable solely for the content they produce, but not for the content third parties produce on their platform. But, Brittain apparently is unaware of the reams of caselaw on this... which I guess is not that surprising.

We'd be remiss if we didn't also mention Brittain's proposed remedies. It starts off asking for a whole long list of nonsensical injunctions and declarations, then lawsuit costs and attorney's fees (he's filed this without attorneys, of course) and then "such other and further relief as this Court deems just and proper," which is normally where these kinds of things would end. But then he seems to remember that he wants money, so after all that he adds in a demand for $1 billion. Well, at least I think that's what he's demanding. He calls it an injunction, which is not what you call a monetary award, and then has some sort of weird formula in which an injunction is summary judgment and it has to do with Twitter's valuation, because [reasons].

For an injunction in the form of an additional summary judgment for the Plaintiff, against the Defendant, in accordance with Defendant's valuation of over $25 Billion US Dollars, of no less than $1,000,000,000.00 US Dollars.

An injunction in the form of summary judgment in accordance with a valuation for a billion dollars? This is a word salad of legal nonsense.

Anyway, if the past is any indication, we eagerly await this "lifelong champion of free and even dangerous speech as a natural right" to now seek to have this article deleted from Google. But, we also eagerly await the "LOLwut?" response from the poor judge assigned this case.

