Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick



Posted on Free Speech - 22 June 2018 @ 12:02pm

Supposed 'Free Speech' Warrior Jordan Peterson Sues University Because Silly Professor Said Some Mean Things About Him

from the the-intellectual-derp-web dept

I have to admit that until earlier this year, I'd never heard of Jordan Peterson. I first heard about him when he was on Russ Roberts' EconTalk podcast, and it was sort of a weird discussion to go into blind, without any knowledge of Peterson. That's because throughout the podcast I found him to be extremely defensive, as if he were constantly under attack and had to parry away an onslaught of criticism. Other than that, I thought he had a few interesting ideas, mixed in with some nutty ones. Soon after, I suddenly seemed to be hearing about him everywhere. In the last two months, the NY Times ran a giant profile on him (in which he does not come off very well). He then played a major role in another bizarre and silly profile of what has been dubbed the "Intellectual Dark Web" -- a network of hilariously self-important people who seem to think they're oppressed for having thoughts out of the mainstream... even though the NY Times article goes on to describe how they all (with Peterson leading the pack) have massive followings, pack stadiums, sell insane numbers of books, and make crazy amounts of money from crowdfunding.

A core piece of that NY Times article, by editor Bari Weiss, was the ill-supported claim that "free speech is under siege" and that these members of the "Intellectual Dark Web" were the renegades being shunned for speaking the truth that no one wanted to hear. To me, it seemed more like they were a bunch of self-important semi-hucksters whom lots and lots of people were listening to, but whom some people have criticized -- and they take that to mean that free speech is under attack. The more I read and watched about Peterson in particular, the more frustrating everything around him became. He certainly spews a lot of pseudo-intellectual nonsense, but so do many of the people who are angry at him. Many of the critiques of Peterson are, at best, sloppy and inaccurate. And Peterson has perfected playing the obtuse victim.

He's obviously very intelligent, and is able to key in on the inaccurate representations of him, using that as a wedge to try to discredit those who are criticizing him. But the debates always seem to involve misunderstanding on both sides, and Peterson often appears to embrace the idea that he's a victim in all of this because people do such a poor job attacking his ideas (even if they're nutty and borderline nonsensical). The now-famous interview between Peterson and Channel 4's Cathy Newman is a good example of this -- as is the also-famous video of Peterson debating some angry students. In both cases, the criticisms that people are making of Peterson's ideology and viewpoints are a caricature -- and Peterson seizes on the misrepresentations, but does so in a fascinating way. Rather than trying to increase understanding and agreement, both sides just dig in and speak entirely at cross purposes. It's entertaining for people who support Peterson, who get to mock the silly misrepresentations of his critics, as well as for those who dislike Peterson, who get to mock the way he appears to evade and sidestep direct questions. It's all theater, and no one comes out of these any wiser. No one is trying to move towards more understanding. They all seem to embrace the misunderstanding as evidence of just how wrong the other side is.

Of course, part of the irony is that as he's perfected playing victim to what he (perhaps reasonably) considers to be unfair criticism, he seems to be adopting the very same stance that he accuses "the radical left" and "snowflakes" of embracing: he becomes quite intolerant of his critics. And now it's reached a new level of ridiculousness (again, on all sides) with Peterson suing Wilfrid Laurier University for defamation. It's not often you see people who claim to be free speech warriors suing for defamation, and especially not just because someone said some not-nice things about them. But it appears that Peterson is really trying to come out as both a free speech defender... and a victim of free speech at the same time.

And, to be clear, the actions of Wilfrid Laurier University were completely preposterous and deserve to be mocked widely, as they have been. It involved a teaching assistant at the school, Lindsay Shepherd, who had shown a clip of Peterson discussing gender pronouns (a topic Peterson has strong feelings about) in a class. Shepherd does not appear supportive of Peterson's position, but was clearly using the clip to inspire a conversation. That seems laudable. What seems preposterous is what happened next: Shepherd was pulled into a disciplinary hearing and basically told that merely playing video of a public debate involving Peterson potentially violated the human rights of students and was the equivalent of playing a clip of Hitler. Shepherd recorded the meeting, and it's incredibly stupid. Shepherd quite reasonably points out what she was trying to do, and the administrators come off as a caricature of the overly politically correct morons that some people (incorrectly) assume run every campus these days. Listening to the whole thing is painful. Shepherd comes out looking reasonable. The school looks ridiculous. Indeed, the school apologized last fall, soon after the audio of her meeting went viral.

Last week Shepherd sued the University herself, with claims of harassment, intentional infliction of nervous shock, negligence and constructive dismissal. It's interesting to note that within the filing, Shepherd's suit directly claims that the professors and administrators in the meeting with her defamed Peterson with their inaccurate portrayals of Peterson. Her own lawsuit, though, does not have any defamation claims.

And then, this week, Peterson filed his suit -- employing the same lawyer as Shepherd. In a statement, Peterson claims that he decided to do so after seeing Shepherd's lawsuit and speaking with her lawyer. Again, irony abounds, as his statement sounds quite a bit like those he was criticizing -- stating that he hopes this makes them think twice before saying mean things about him. He first says he decided to file the lawsuit because he felt that the university "had learned very little from its public embarrassment," and therefore apparently needed the power of the state to punish them for their own speech? That seems... very unlike a "free speech warrior." And then there's this:

I thought that two lawsuits might make the point, better than one. I'm hoping that the combination of two lawsuits might be enough to convince careless university professors and administrators blinded by their own ideology, to be much more circumspect in their actions and their words.

That... does not seem like someone who is a free speech warrior. That... does not seem like someone who believes in open debate. As ridiculous and silly as the University's actions were -- and they deserve tremendous mockery for their hysterical and bizarre response to Shepherd's lesson -- responding by suing for defamation is crazy. Canada, unfortunately, has defamation laws that strongly favor the plaintiff -- unlike in the US, where we have a strong First Amendment -- but already experts seem to be suggesting his case is unlikely to succeed. If it were filed in the US, based on what I heard of the meeting between Shepherd and the professors/administrators at the university, the lawsuit would be laughed out of court, and would be blasted as a censorial attempt to silence someone for protected speech, even if that speech is nonsense. Unfortunately, I have not yet been able to read Peterson's full complaint, as it does not appear to be readily available, so at this point I am going only off of the source material of the recording that Shepherd made, along with the claims that the content of that recording is the basis of Peterson's defamation claims. If there is something more in the actual complaint, I would be happy to revise my opinion of the situation.

The whole thing seems ridiculous, frankly, as with so many of the debates around Peterson. Lots of people are making silly arguments and talking at cross purposes. Almost everyone comes off looking silly. But running to the courts and crying defamation just because debates get silly and heated, or because some professors or teachers make silly claims -- while directly stating that you hope the lawsuit will silence professors at other universities -- certainly suggests that Peterson is no friend to free speech.

And this brings us back around to the whole "Intellectual Dark Web" thing. This case suggests the same ridiculous pattern. This is not deep thinkers being oppressed for their heretical great ideas. These are insecure, thin-skinned people with silly ideas, playing victim when other silly people make silly statements about them. Everyone gets to play victim. No one seeks to actually build up more understanding or reasoned debate. Instead, everyone just gets to dig in on their own silly positions. It's not the Intellectual Dark Web. It's the Intellectual Derp Web. And now it's attacking free speech, while pretending to be staunch defenders of free speech. Derp.


Posted on Techdirt - 22 June 2018 @ 10:45am

Silos, Centralization And Censorship: Losing The Promise Of The Internet

from the a-tale-of-two-clouds dept

The somewhat apocryphal purpose of the early internet was to have a system that could survive a nuclear war, by building it out of distributed nodes, such that it couldn't easily be knocked out. That distributed and decentralized concept had many other benefits as well. Somewhat famously, 25 years ago, John Gilmore declared, "The Net interprets censorship as damage and routes around it." And there remains some truth to that... in part. But the internet has changed drastically over the decades, and we're now living in the age of the cloud -- which might better be described as the age of the large third party who can be influenced.

Bruce Schneier has written up an interesting article discussing how the rise of the cloud has also enabled much more censorship.

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors' demands, how long can internet freedom last?

It's a good question, and one that I've been thinking a lot about over the past few years. I think it's an overreaction to blame the concept of "the cloud." Indeed, the idea of moving information onto the internet, rather than leaving it buried on local machines, has some massive benefits, including the ability to access the information and services from any device, as well as being able to (sometimes) connect various services together and accomplish much more.

The real problem to me -- and one I've spoken about going back many years -- is that today's "cloud" is not the "cloud" we should want. It's become a series of silos. Silos owned by large companies. But there's no reason it needs to remain that way. There is simply no reason that we can't build a "cloud" in which end users retain full control over their data. They may allow third-party services to access and interact with that data, but it's bizarre how the vision of the "cloud" has turned into a world where it basically just means Google, Microsoft, IBM, Rackspace, or whoever else hosting all your data and retaining all of the control over it, including the control to take it down and make it disappear.

Most of Schneier's piece focuses on Russia's somewhat Quixotic focus on shutting down Telegram, but notes that what happens is almost entirely up to a few large internet companies, and how much they'll push back on pressure from Russia (or other governments):

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that internet freedom is increasingly in the hands of the world's largest internet companies. And while freedom may have its advocates—the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban—actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

But it's unfortunate that that is the end result. It's good that there are large companies that will (sometimes) fight these battles for smaller players, but such companies shouldn't be the last line of defense against the kind of censorship that Russia and China and other countries seek. For years we've been saying that it's time to rethink the internet, and move back towards a more decentralized, distributed world in which this kind of censorship isn't even an issue. It hasn't happened yet, but it feels like we're increasingly moving towards a world in which that's going to be necessary if we want to retain what is best about the internet.
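As a quick technical aside, since Schneier's quote leans on it: "domain fronting" exploits the fact that the domain name a censor can see during the TLS handshake (the SNI field) can differ from the domain named inside the encrypted request itself (the Host header). Here's a minimal sketch of the idea -- the domains are hypothetical placeholders, and the trick only works where a cloud provider still routes requests by the inner Host header, which is exactly the routing behavior that "disallowing domain fronting" shuts off:

```python
import socket
import ssl

# Hypothetical placeholder domains, for illustration only.
FRONT = "cdn-provider.example"   # innocuous-looking domain shown to the censor
HIDDEN = "blocked-app.example"   # the real destination, revealed only inside TLS

ctx = ssl.create_default_context()

# The censor can observe the TCP connection and the plaintext SNI field,
# both of which name only FRONT...
with socket.create_connection((FRONT, 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname=FRONT) as tls:
        # ...but the Host header travels after encryption begins, so only the
        # cloud provider learns that the request is really for HIDDEN.
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {HIDDEN}\r\n"
            f"Connection: close\r\n"
            f"\r\n"
        )
        tls.sendall(request.encode())
        print(tls.recv(4096).decode(errors="replace"))
```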


Posted on Techdirt - 22 June 2018 @ 9:38am

Artist Files Completely Frivolous Copyright Lawsuit Against The NRA For Briefly Showing Public Sculpture In Stupid Video

from the clenched-bean-of-truth dept

I apologize in advance, but this story is full of frivolous annoying things. Unfortunately, they are frivolous annoying things that hit at the very core intersection of stuff we talk about here on Techdirt: copyright and free expression. Last year, the NRA pushed out a truly ridiculous advertising video, referred to as "The Clenched Fist of Truth" or "The Violence of Lies." It was a stupid video from a stupid organization which served no purpose other than to upset people who hate the NRA. Trolling as advertising. It generated some level of pointless outrage and people went on with their lives. I'm not linking to the video because I don't need to give it any more attention and if you really want to see it, you know how to use the internet.

Now, let's move on to Anish Kapoor, a British sculptor who is also annoying. In the early 2000s, he made a silly sculpture for Chicago's Millennium Park that people from Chicago (and elsewhere) tend to love to mock. It's called The Bean. I mean, officially, it's called "Cloud Gate," but no one calls it that. Even Kapoor now calls it the Bean.

However, copyright disputes over the Bean go way back. Back in 2005, there was an article about security guards evicting photographers for taking pictures of the popular tourist photo op, because the city said it had to enforce the copyright of the artist. No, really. They said that. There's been a long, and somewhat ridiculous, debate about the copyright on public sculptures. Many of us believe -- with pretty damn good justification, I'd say -- that if you accept a commission from a public entity, in which you are creating a sculpture for the government, you should also give up your copyright along with it. Barring that, any and all photography of that sculpture in a public place should simply be declared fair use. Courts, unfortunately, have disagreed.

Over the last year, Kapoor has been particularly up in arms over the fact that the NRA's silly video includes a ridiculously brief clip of the Bean. It appears for less than a second in a montage of clips. But it's there.

Kapoor has been unhappy about this for a while, and earlier this year penned an open letter to the NRA decrying its policies. This is good. This is what free speech allows.

However, this week, he took it a step further and filed a really, really dumb copyright lawsuit against the NRA (first noted by ARTnews).

The filing itself screams out how frivolous it is, repeatedly complaining about the political message of the NRA's video rather than anything related to the actual copyright interests at issue.

On June 29, 2017, NRA broadcast on television and the internet a video recruiting advertisement entitled variously “The Clenched Fist of Truth” or “The Violence of Lies”, denouncing the media and the “liberal agenda.” It warns of civil unrest and violence, and states that the only way to save “our” country from the “lies” of the liberal media and the “liberal agenda” is with the “clenched fist of truth,” i.e., with guns (obviously referencing NRA’s previous slogan by Charlton Heston that “I'll give you my gun when you pry it from my cold, dead hands.”) It is a clear call to armed violence against liberals and the media.

I mean, yeah. But what does that have to do with copyright? Absolutely nothing.

The actual copyright claim is incredibly, laughably weak:

As a result of Defendant’s copyright infringement, Plaintiff has suffered and continues to suffer actual damages in an amount according to proof at trial.

Oh come on. There is no one watching that video who thinks Kapoor somehow supports the NRA's message and therefore decides not to work with him. Also this:

As a further result of Defendant’s copyright infringement, Defendant has obtained direct and indirect profits it would not have otherwise realized but for its infringement of Plaintiff’s copyrighted Work, including but not limited to increased membership dues following the publication of the Infringing Video. Plaintiff is entitled to disgorgement of such profits,

Nah. That's not how it works. First of all, if the NRA is profiting from the video, it's not because the Bean is in it. Take out the Bean, replace it with some other stupid statue, and nothing changes at all. There is nothing about the Bean that makes the video what it is. There is no profit coming from the use of the Bean imagery.

But the larger point: this is so obviously fair use that it's not even worth going through the full four-factor analysis. This is less than a second in a political video showing a public sculpture in a public location. It's not key to the video. It's used as part of commentary.

The nature of Kapoor's lawsuit, however, is quite obviously to stifle free speech he disagrees with. We can all agree that the NRA is an odious organization with an odious message, but let's not dismantle the First Amendment just because of that group's ridiculous and dishonest methods for defending the Second Amendment. The NRA has every right to use that snippet, and all Kapoor's lawsuit is doing is getting the NRA's video that much more attention. The case seems likely to get tossed out quickly. It was filed in Illinois, which has an okay anti-SLAPP law, which means the end result may actually be that Kapoor ends up paying the NRA's legal fees.

We've talked at length over the years about how copyright often conflicts with free speech. People often respond with some version of "but piracy isn't free speech." That's a silly claim, but there are still cases like this one, where the lawsuit obviously has absolutely nothing to do with the purposes of copyright law and serves solely as a method to silence speech. The courts shouldn't allow it, and seem unlikely to do so. Kapoor had every opportunity to exercise his First Amendment rights to speak out against the NRA. Filing a frivolous copyright lawsuit attempting to stifle speech, however, goes way too far.


Posted on Techdirt - 21 June 2018 @ 10:40am

Activism & Doxing: Stephen Miller, ICE And How Internet Platforms Have No Good Options

from the and-for-fun,-the-cfaa-and-scraping dept

Last month, at the COMO Content Moderation Summit in Washington DC, I co-ran a "You Make the Call" session with Emma Llanso from CDT. The idea was to turn the audience into a content moderation/trust & safety team of a fictionalized social media platform. We showed numerous examples of content or accounts that were "flagged" and then showed the associated terms of service, and had the entire audience vote on what to do. One of the fictional examples involved someone posting a link to a third-party website "contactinfo.com" claiming to have the personal phone and email contact info of Harvey Weinstein and urging people "you know what to do!" with a hashtag. The relevant terms of service included this: "You may not post personal information about others without their consent."

The audience voting was pretty mixed on this. 47% of the audience punted on the question, choosing to escalate it to a supervisor as they felt they couldn't decide whether to leave the content up or take it down. 32% felt it should just be taken down. 10% said to just leave it up and another 10% said to put a content warning flag on the content. We joked a bit during the session that some of these examples were "ripped from the headlines" but apparently we predicted the headlines in this case, because there are two stories this week that touch on exactly this kind of thing.

Example one is the story that came out yesterday, in which Twitter chose to start locking the accounts of users who were either tweeting Trump senior advisor Stephen Miller's cell phone number, or merely linking to a Splinternews article that published his cell phone number (which I'm guessing has since been changed...).

Splinternews decided to publish Miller's phone number after multiple news reports attributed to Miller the inhumane* policy of separating the children of asylum seekers from their parents -- a policy Miller has defended. Other reports noted that Miller is enjoying all of the controversy over this policy. Splinternews, citing Donald Trump's own history of giving out the phone numbers of people who anger him, thought it was only fair that people be able to reach out to Miller.

This is -- for fairly obvious reasons -- a controversial decision. I think most news organizations would never do such a thing. Not surprisingly, the number spread rapidly on Twitter, and Twitter started locking all of those accounts until the tweets were removed. That seems at least well within reason under Twitter's rules that explicitly state:

You may not publish or post other people's private information without their express authorization and permission.

But, that question gets a lot sketchier when it comes to locking the accounts of people who merely linked to the Splinternews article. A la our fictionalized example, those people are not actually publishing or posting anyone's private info. They are posting a link to a third party that purports to have that information. And, of course, in this case, the situation is complicated even more than our fictionalized example because Splinternews is a news organization (owned by Univision), and Twitter also has said that it has a "newsworthy" exception to its rules.

Personally, I think it's the wrong call to lock the accounts of those linking to the news story, but... as we discovered in our own sample version, it's not an easy call, and lots of people have strong opinions one way or the other. Indeed, part of the reason why Twitter may have decided to do this was that supporters of Trump/Miller started calling out the article as an example of doxxing, claiming that leaving it up showed that Twitter was unfairly biased against them. It is a no-win situation.

And, of course, it wouldn't take long before people started coming up with clever workarounds. Parker Higgins, citing the infamous 09F9 controversy -- in which the MPAA tried to censor the revelation of a cryptographic key that broke its preferred DRM, and people responded by posting variations on the code, including a color chart in which the hex codes of the colors spelled out the key -- posted an image consisting of nothing but two color swatches, with the digits hidden in their hex codes.

Would Twitter lock his account for posting a two-color image? At some point, the whole thing gets... crazy. That's not to argue that revealing someone's private cell phone number is a good thing -- no matter how you feel about Miller or the border policy. But just on the content moderation side, it puts Twitter in a no-win situation in which people are going to be pissed off no matter what it does. Oh, and of course, it also helped create something of a Streisand Effect. I certainly hadn't heard about the Splinternews article or that people were passing around Miller's phone number until the story broke about Twitter playing whac-a-mole on its site.
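To make the workaround concrete: the trick is that a short string of digits fits comfortably inside the hex codes of a couple of colors, so the "content" is just two innocuous swatches. Here's a minimal sketch of the general technique -- my own illustration using a fictional 555 number, not Higgins' actual encoding:

```python
def digits_to_colors(digits: str) -> list[str]:
    """Pack a string of decimal digits into two CSS-style hex colors.
    Two colors = 48 bits, plenty for a 10-digit phone number.
    (Digit strings with a leading zero would need extra handling.)"""
    hex_str = f"{int(digits):012x}"            # 12 hex digits, zero-padded
    return ["#" + hex_str[i:i + 6] for i in (0, 6)]

def colors_to_digits(colors: list[str]) -> str:
    """Reverse the encoding: concatenate the hex codes and parse as an int."""
    return str(int("".join(c.lstrip("#") for c in colors), 16))

if __name__ == "__main__":
    colors = digits_to_colors("2025550123")    # fictional 555 number
    print(colors)                              # two harmless-looking color codes
    assert colors_to_digits(colors) == "2025550123"
```

Moderating that away would mean treating any pair of colors as potential "private information," which is the point of the stunt.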

And that takes us to the second example, which happened a day earlier -- and was also in response to people's quite reasonable* anger about the border policy. Sam Lavigne decided to make something of a public statement about how he felt about ICE by scraping** LinkedIn for profile information on everyone who works at ICE (and who has a LinkedIn public profile). His database included 1595 ICE employees. He wrote a Medium blog post about this, posted the repository to Github and another user, Russel Neiss, created a Twitter account (@iceHRgov) that tweeted out info about each of those employees from that database. Notice that none of those are linked. That's because all three companies took them down (though you can still find archives of the Medium post). There was also an archive of the Github repository, but it has since been memory-holed as well.

Again... this raises a lot of questions. Github claimed that it removed the page for "violating community guidelines" -- specifically around "doxxing and harassment, and violating a third party's privacy." Medium claimed that the post violated rules against "doxxing" and specifically the "aggregation of publicly available information to target, shame, blackmail, harass, intimidate, threaten or endanger." Twitter, in Twitter's usual way, is not commenting. LinkedIn put out a statement saying: "We do not support or condone what immigration authorities are doing at the border, but we can’t allow the illegal use of our member data. We will take appropriate action to ensure our members’ data is protected and used properly."

Many people point out that all of this feels kind of ridiculous, seeing as this is all public info that the individuals chose to reveal about themselves on a public website. While Medium's expansive definition of doxxing makes things interesting by including an intent standard in releasing the info, even if it is publicly available, the whole thing, again, demonstrates how complex this is. I know that some people will claim that these are easy calls -- but, just for fun, try flipping the equation a bit. If you're anti-Trump, how would you feel if a prominent alt-right person compiled and posted your info -- even if publicly available -- on a site where alt-right folks gather, with the clear intent of having hordes of Trump trolls harass you? Be careful what precedent you set.

If it were up to me, I think I would have come down differently than Medium, Github and Twitter in this case. My rationale: (1) all of this info was public information, (2) those individuals chose to place it on a public website, knowing it was public, (3) they are all employed by the federal government, meaning they are public servants, and (4) while the compilation was done by someone who is clearly against the border policy, Lavigne never encouraged or suggested harassment of ICE agents. Instead, he wrote: "While I don’t have a precise idea of what should be done with this data set, I leave it here with the hope that researchers, journalists and activists will find it useful." He separately noted that he believed "it's important to document what's happening, and by whom." That actually makes a strong point in favor of leaving the data up, as there is value in documenting what's happening.

That said, reasonable people can disagree on the question of what is the appropriate way for different platforms to handle these situations (even if there should be no disagreement about how inhumane the policy at the border has been*) -- taking into account that this situation could play out with very different players in the future, and there is value in being consistent.

This is the very point that we were demonstrating with that game that we ran at COMO. Many people seem to think that content moderation decisions are easy: you just take down the content that is bad, and leave up the content that is good. But it's pretty rare that the content is easily classified in one of those categories. There is an enormous gray area -- and much of it involves nuance and context, which is not always easy to come by -- and which may look incredibly different depending on where you sit and what kind of world you think we live in. I still think there are strong arguments that the platforms should have left much of the content discussed in this post up, but I'm not the one making that call.

When we ran that game in DC last month, it was notable that on every single example we used -- even the ones we thought were "easy calls" -- there were some audience members who selected every option in the game. That is, there was not a single situation in our examples in which everyone agreed what should be done. Indeed, since there were four options, and all four were chosen by at least one person in every single example, it shows just how difficult it really is to make these calls. They are subjective. And what plays into that subjective decision making includes your own views, your own perspective, your own reading of the content and the rules -- and sometimes third party factors, including how people are reacting and what public pressure you're getting (in both directions). It is an impossible situation.

This is also why the various calls to mandate that platforms do this or face legal liability are even more ridiculous and dangerous. There are no "right" answers to these decisions. There are solutions that seem better to lots of people, but plenty of others will disagree. If you think you know the "right" way that all of these questions should be handled, I guarantee you're wrong, and if you were in charge of these platforms, you'd end up feeling just as conflicted.

This is why it's really time to start thinking about and talking about better solutions. Simply calling on platforms to be the final arbiters of what goes online and what stays offline is not a workable solution.

* Just a side note: if you are among the small minority of ethically-challenged individuals who gets upset that I describe the policy as inhumane: fuck off. The policy is inhumane and if you're defending it, you should seriously take time to re-evaluate your ethics and your life choices. On a separate note, if you are among the people who are then going to try to justify this policy as "but Obama/others did it too," the same applies. Whataboutism is no argument here. The policy is inhumane no matter who did it, and pointing out that others did it too doesn't change that. And, as inhumane as it may have been in the past, it has been severely ramped up. There is no defense for it. Attempting to defend it only serves to out yourself as a horrible person who has issues. Seriously: get help.

** This doesn't even fit anywhere in this story, but scraping LinkedIn is (stupidly) incredibly dangerous. LinkedIn has a history of suing people for scraping public info off of its site. And even though it's lost some of those cases, the company appears to take a pretty aggressive stance towards scrapers. We can argue about how ridiculous this is but, dammit, this post is already too long talking about other stuff, so we'll discuss it separately.


Posted on Free Speech - 20 June 2018 @ 10:43am

EU Parliamentary Committee Votes To Put American Internet Giants In Charge Of What Speech Is Allowed Online

from the bad-news dept

As we've been writing over the past few weeks, the EU Parliament's Legal Affairs Committee (JURI) voted earlier today on the EU's new Copyright Directive. Within that directive were two absolutely horrible ideas that are dangerous to an open internet -- a link tax and a mandatory copyright filtering requirement (i.e., the "censorship machines" proposal). While there was a big fight about it, and we heard that some in the EU Parliament were getting nervous about it, this morning they still voted in favor of both proposals and to move the entire Copyright Directive forward. The vote was close, but still went the wrong way.

Somewhat incredibly, no official roll call tally was kept. MEP Julia Reda, however, has posted an unofficial roll call of who voted against internet freedom, showing (graphically) whether they voted for the link tax and/or censorship machines.

Here's who voted, according to Reda's list -- most voted for both of the bad proposals; for the few who didn't vote for the link tax, I've noted that separately. These politicians deserve (1) to be called out for trying to destroy an open internet and giving in to legacy industries who want to censor it, and (2) to be voted out of office next election:

  • Axel Voss, Germany (who was in charge of this entire thing and who has regularly played dumb whenever people point out just how bad these proposals are. He appears completely beholden to legacy industry interests). Voss's name should become synonymous with the destruction of a free and open internet.
  • Pavel Svoboda, Czech Republic (voted for censorship machines, but not the link tax)
  • Rosa Estaras Ferragut, Spain
  • Tadeusz Zwiefka, Poland
  • Jozsef Szajer, Hungary
  • Francis Zammit Dimech, Malta
  • Luis de Grandes Pascual, Spain
  • Enrico Gasbarra, Italy
  • Mary Honeyball, UK
  • Jean-Marie Cavada, France
  • Marinho e Pinto, Portugal
  • Sajjad Karim, UK (voted for censorship machines, but not the link tax)
  • Joelle Bergeron, France
  • Marie-Christine Boutonnet, France
  • Gilles Lebreton, France
Note those last two votes from France, as Lebreton and Boutonnet are both members of the French National Front party, the same party whose leader, Marine Le Pen, has been out and about screaming about how unfair it is that the party's YouTube channel was deleted by automatic copyright filters -- the same filters that her own party just voted to make mandatory for all platforms. Incredible.

This is a hugely unfortunate series of events. Having the proposal approved by the JURI Committee makes it much, much harder to stop this Directive from becoming official. But it is not the end of the road. Reda will be forcing a vote from the entire EU Parliament on the issue:

This is an unacceptable outcome that I will challenge in the next plenary session, asking all 750 MEPs to vote on whether to accept the Committee’s result or open it up for debate in that larger forum, which would then give us a final chance to make changes.

This vote will likely happen on July 4. Let’s make this the independence day of the internet, the day we #SaveYourInternet from censorship machines and a link tax. Are you in?

The digital freedom group EDRi has also detailed the next steps in this process and created an infographic showing what still needs to happen.

It will be difficult to stop this freight train after this morning's vote, but not impossible. If you want to see the internet remain viable as a communications platform, rather than seeing it locked down as the new broadcast television, in which giant American companies have the final say in what you're allowed to say online, you should probably let the EU Parliament know sooner, rather than later.


Posted on Techdirt - 20 June 2018 @ 9:20am

Net Neutrality And The Broken Windows Fallacy

from the ajit-pai,-read-your-bastiat dept

I've mentioned the idea of the broken windows fallacy -- not to be confused with the long-debunked broken windows theory of policing -- twice in the past in reference to net neutrality, including in my recent post about what Ajit Pai should have said about repealing net neutrality. But both times I talked about it, it was kind of buried in much longer articles, and the more I think about it, the more important I think it is in understanding why Pai and his supporters are so far off in their thinking on net neutrality. What I find most perplexing about this is that people who often position themselves as doing away with overly burdensome regulations -- a stance Pai has staked out pretty clearly -- are usually the kind of folks who talk frequently about the broken windows fallacy. And yet, here, those same folks seem to be missing it.

As background, the broken windows fallacy comes from Frederic Bastiat, the French economist often associated with free market and libertarian thought. It's his clever and highly evocative way of explaining why destructive behavior -- while it may generate economic activity -- is not good for the economy: it misses all of the other (often hidden) costs, including the opportunity cost of investing that money in more productive activity. Bastiat's version went as follows:

Have you ever witnessed the anger of the good shopkeeper, James Goodfellow, when his careless son has happened to break a pane of glass? If you have been present at such a scene, you will most assuredly bear witness to the fact that every one of the spectators, were there even thirty of them, by common consent apparently, offered the unfortunate owner this invariable consolation – "It is an ill wind that blows nobody good. Everybody must live, and what would become of the glaziers if panes of glass were never broken?"

Now, this form of condolence contains an entire theory, which it will be well to show up in this simple case, seeing that it is precisely the same as that which, unhappily, regulates the greater part of our economical institutions.

Suppose it cost six francs to repair the damage, and you say that the accident brings six francs to the glazier's trade – that it encourages that trade to the amount of six francs – I grant it; I have not a word to say against it; you reason justly. The glazier comes, performs his task, receives his six francs, rubs his hands, and, in his heart, blesses the careless child. All this is that which is seen.

But if, on the other hand, you come to the conclusion, as is too often the case, that it is a good thing to break windows, that it causes money to circulate, and that the encouragement of industry in general will be the result of it, you will oblige me to call out, "Stop there! Your theory is confined to that which is seen; it takes no account of that which is not seen."

It is not seen that as our shopkeeper has spent six francs upon one thing, he cannot spend them upon another. It is not seen that if he had not had a window to replace, he would, perhaps, have replaced his old shoes, or added another book to his library. In short, he would have employed his six francs in some way, which this accident has prevented.

Let us take a view of industry in general, as affected by this circumstance. The window being broken, the glazier's trade is encouraged to the amount of six francs; this is that which is seen. If the window had not been broken, the shoemaker's trade (or some other) would have been encouraged to the amount of six francs; this is that which is not seen.

And if that which is not seen is taken into consideration, because it is a negative fact, as well as that which is seen, because it is a positive fact, it will be understood that neither industry in general, nor the sum total of national labour, is affected, whether windows are broken or not.

Now let us consider James B. himself. In the former supposition, that of the window being broken, he spends six francs, and has neither more nor less than he had before, the enjoyment of a window.

In the second, where we suppose the window not to have been broken, he would have spent six francs on shoes, and would have had at the same time the enjoyment of a pair of shoes and of a window.

Now, as James B. forms a part of society, we must come to the conclusion, that, taking it altogether, and making an estimate of its enjoyments and its labours, it has lost the value of the broken window.

When we arrive at this unexpected conclusion: "Society loses the value of things which are uselessly destroyed;" and we must assent to a maxim which will make the hair of protectionists stand on end — To break, to spoil, to waste, is not to encourage national labour; or, more briefly, "destruction is not profit."

In short, breaking windows may generate economic activity for the glazier, but that doesn't count the economic cost to whoever had his window broken, or the opportunity cost of what the money spent on fixing the window could have bought instead.

So how does this apply to net neutrality? Well, Ajit Pai and nearly all of the rather vocal supporters of taking away net neutrality rules continually go back to the claim that the rules harmed broadband infrastructure investment. We'll leave aside the (rather important) point that this claim is not even remotely close to true -- but even assuming it is, it's still a broken windows fallacy.

That's because broadband infrastructure investment is not the entire market, and focusing just on that is the same as just focusing on the economic activity for the glazier created by a broken window. To take this to the extreme case: if we want to stimulate broadband infrastructure investment, just rip up the current internet -- and then we'd need to spend a ton on rebuilding the internet. Yes, that would be the best way to "stimulate" a massive internet infrastructure investment, but the costs to everyone else would be dire.

In the same way, when the FCC focuses just on broadband infrastructure, it is ignoring the costs to everyone else who uses the internet. Or, as per Bastiat's story, the FCC is ignoring the costs to the guy whose window is broken, as well as all of the opportunity costs from the money he spends on the glazier that doesn't go towards more productive pursuits.

In the net neutrality world, those costs are massive. They include the costs to nearly all internet platforms and services, which now face massive levels of uncertainty about whether or not ISPs will end up abusing their power to limit access (or, more likely, charge for preferred access). They include the uncertainty of the big broadband companies favoring their own content and service partners to effectively shut out independent services. They include the costs to the public, who have less choice and fewer services that they can use, and who are more locked in to a dwindling number of giant broadband companies.

In short, Ajit Pai's FCC has fallen completely for the broken windows fallacy, by focusing just on one narrow area of economic activity, without even being willing to acknowledge that it will negatively impact a much wider swath of the economy. This is especially disappointing to see, considering that Pai and his supporters keep claiming that they are the ones to "bring economics back" to the FCC, and they are the ones who argued that the Tom Wheeler FCC ignored economics. Yet, when you look at the details, it's Pai and his supporters who seem to be the ones sticking their heads in the sand here and, as Bastiat noted, confining their theory to "that which is seen" and taking "no account of that which is not seen."

In economics this is a pretty 101-level mistake. That the FCC is making it in dismantling a key concept that makes the internet function competitively is particularly disappointing.


Posted on Techdirt - 19 June 2018 @ 10:44am

Boston Globe Posts Hilarious Fact-Challenged Interview About Regulating Google, Without Any Acknowledgement Of Errors

from the and-we-wonder-why-news-is-failing dept

Warning: this article will discuss a bunch of nonsense being said in a major American newspaper about Google. I fully expect that the usual people will claim that I am writing this because I always support Google -- which would be an interesting point if it were true -- but of course it is not. I regularly criticize Google for a variety of sketchy practices. However, what this story is really about is why the Boston Globe would publish, without fact checking, a bunch of complete and utter nonsense.

The Boston Globe recently put together an entire issue about "Big Tech" and what to do about it. I'd link to it, but for some reason when I click on it, the Boston Globe is now telling me it no longer exists -- which, maybe, suggests that the Boston Globe should do a little more "tech" work itself. However, a few folks sent in this fun interview with noted Google/Facebook hater Jonathan Taplin. Now, we've had our run-ins with Taplin in the past -- almost always to correct a whole bunch of factual errors that he makes in attacking internet companies. And, it appears that we need to do this again.

Of course, you would think that the Boston Globe might have done this for us, seeing as they're a "newspaper" and all. Rather than just printing the words verbatim of someone who is going to say things that are both false and ridiculous, why not fact check your own damn interview? Instead, it appears that the Globe decided "let's find someone to say mean things about Google" and turned up Taplin... and then no one at the esteemed Globe decided "gee, maybe we should check to see if he actually knows what he's talking about or if he's full of shit." Instead, they just ran the interview, and people who read it without knowing that Taplin is laughably wrong won't find out about it unless they come here. But... let's dig in.

What would smart regulation look like?

You start with fairly rigorous privacy regulations where you have the ability to opt out of data collection from Google. Then you look at something like a modification of the part of the Digital Millennium Copyright Act, which is what is known as safe harbor. Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from, which is that no one can sue them for doing anything wrong.

Ability to opt out of data collection -- fair enough. To some extent that's already possible if you know what you're doing, but it would be good if Google/Facebook made it easier. Honestly, though, that's not going to have much of an impact. I still think the real solution to the dominance of Google/Facebook is to enable more competition that can provide better services and help limit the power of those guys. But Taplin's suggestion really seems to go in the other direction, seeking to lock in their power, while complaining about them.

The "modification" of the DMCA, for example, would almost certainly lock in Google and Facebook and make it nearly impossible for competitors to step up. Also, the DMCA is not "known as safe harbor." The DMCA -- a law that was almost universally pushed by the record labels -- is a law that updated copyright law in a number of ways, including giving copyright holders the power to censor on the internet, without any due process or judicial review of whether or not infringement had taken place. There is a small part of it, within Section 512, that includes a very limited safe harbor, that says that while actual infringers are still liable for infringement, the internet platforms they use are not liable if they follow a bunch of rules, including removing the content expeditiously and kicking people off their platform for repeat infringement.

The idea that "Google and Facebook and Twitter operate under a very unique set of legal regimes that no other company gets to benefit from" is complete and utter nonsense, and the Boston Globe's Alex Kingsbury should have pushed back on it. The Copyright Office's database of DMCA registered agents includes nearly 9,000 companies (including ours!), because the DMCA's 512 safe harbors apply to any internet platform who registers. Google, Facebook and Twitter don't get special treatment.

Furthermore, as a new report recently showed, taking away such safe harbors would do more to entrench the power of Google, Facebook and Twitter since all three companies can deal with such liability, while lots of smaller companies and upstarts cannot. It boggles the mind that the Boston Globe let Taplin say something so obviously false without challenging him.

And we haven't even gotten to the second half of that sentence, which is the bizarre and simply false claim that the DMCA's Section 512 means that "no one can sue them for doing anything wrong." Again, this is just factually incorrect, and a good journalist would challenge someone making such a blatantly false claim. The DMCA's 512 does not, in any way, stop anyone from suing anyone "for doing anything wrong." That's ridiculous. The DMCA's 512 says that a copyright holder is barred from suing a platform for copyright infringement if a user (not the platform) infringes on copyright and, when notified of that alleged infringement, the platform expeditiously removes that content. In addition, thanks to various court rulings, the DMCA's safe harbors are limited in other ways, including that platforms cannot encourage their use for infringement and must have implemented repeat infringer policies. Nowhere in any of that does it say that platforms can't be sued for doing anything wrong.

If the platform does something wrong, they absolutely can be sued. It's simply a fantasy interpretation of the DMCA to pretend otherwise. Why didn't the Boston Globe point out these errors? I have no idea, but they let the interview and its nonsense continue.

In other words, they have complete liability protection from being sued for any of the content that is on their services. That is totally unique. Obviously newspapers doesn’t get that protection. And of course also [tech giants] have other advantages over all other corporations; all of the labor that users put in is basically free. Most of us work an hour a day for Google or Facebook improving their services, and we don’t get anything for that other than just services.

Again, they do not have "complete liability protection from being sued for any content that is on their services." Anything they post themselves, they are still liable for. Anything that a user posts on its platform, if the platform does not comply with DMCA 512, the platform can still be liable for. All DMCA 512 is saying is that they can be liable for a small sliver of content if they fail to follow the rules set out in the law that was pushed for heavily by the recording industry.

Next up, the claim that "obviously newspapers don't get that protection" is preposterous. Of course they do. A quick search of the Copyright Office database shows registrations by tons of newspaper companies, including the Chicago Tribune, the Daily News, USA Today, the Las Vegas Review-Journal, the LA Times, the Baltimore Sun, the Chicago Sun-Times, the Albany Times Union, the NY Times, the Times Herald, the Times Picayune, the Washington Times, the Post Standard, the Palm Beach Post, the Cincinnati Post, the Kentucky Post, the Seattle Post-Intelligencer, the NY Post, the St. Louis Post-Dispatch, the Washington Post, Ann Arbor News, the Albany Business News, Reno News & Review, the Dayton Daily News, Springfield News Sun, the Des Moines Register, the Cincinnati Enquirer, the Branson News Leader, the Bergen News, the Pennysaver News, the News-Times, the New Canaan News, Orange County News, San Antonio News-Express, the National Law Journal, the Williamsburg Journal Tribune, the Wall Street Journal, the Jacksonville Journal-Courier, the Lafayette Journal-Courier, the Oregon Statesman Journal, the Daily Journal and on and on and on. Literally, I just got tired of writing down names. There are a lot more.

Notably missing? As far as I can tell, the Boston Globe has not registered a DMCA agent. Odd that.

But, back to the point: yes, newspapers get the same damn protection. There is nothing special about Google, Facebook and Twitter. And by now Taplin must know this. So should the Boston Globe.

Ah, but perhaps -- you'll argue -- he means that the paper versions don't get the same protection, while the internet sites do. And, you'd still be wrong. All the DMCA 512 says is that you don't get to put liability on a third party who had no say in the content posted. With your normal print newspaper that's not an issue because a newspaper is not a user-generated content thing. It has an editor who is choosing what's in there. That's not true of online websites. And that's why we need a safe harbor like the DMCA's, otherwise people stupidly blame a platform for actions of their users.

And let's not forget -- because this is important -- anything a website does to directly encourage infringement would take away those safe harbors, a la the Grokster ruling in the Supreme Court, which said you lose those protections if you're inducing infringement. In other words, basically every claim made by Taplin here is wrong. Why does the Boston Globe challenge none of them? What kind of interview is this?

And we're just on the first question. Let's move on.

What would eliminating the “safe harbor” provision in the Digital Millennium Copyright Act mean?

YouTube wouldn’t be able to post 44,000 ISIS videos and sell ads for them.

Wait, what? Once again, there's so much wrong in just this one sentence that it's almost criminal that the Boston Globe's reporter doesn't say something. Let's start with the first problem: changing copyright law to get rid of a safe harbor will stop YouTube from posting ISIS videos? What about copyright law has any impact on ISIS videos one way or the other? Absolutely nothing. Even assuming that ISIS is somehow violating someone's copyright with its videos (which seems unlikely), what does that have to do with anything?

Second, YouTube is not posting any ISIS videos. YouTube is not posting any videos. Users of YouTube are posting videos. That's the whole point of the safe harbors. That it's users doing the uploading and not the platform. And the point of the DMCA safe harbor is to clarify the common sense point that you don't blame the tool for the user's actions. You don't blame Ford because someone drove a Ford as a getaway car in a bank robbery. You don't blame AT&T when someone calls in a bomb threat.

Third, YouTube has banned ISIS videos (and any terrorist propaganda videos) going back a decade. Literally back to 2008. That's when YouTube stopped allowing videos from terrorist organizations. How could Taplin not know this? How could the Boston Globe not know this? Over the years, YouTube has even built new algorithms designed to automatically spot "extremist" content and block it (how well that works is another question). Indeed, YouTube is so aggressive in taking down such videos that it's been known to also take down the videos of humanitarian groups documenting war crimes by terrorists.

Finally, YouTube has long refused to put ads on anything deemed controversial content. Also, it won't put ads on videos of channels without lots and lots of followers.

So basically this one short sentence -- 14 words long -- has four major factual errors in it. Wow. And he's not done yet.

Or they wouldn’t be able to put up any musician’s work, whether they wanted it on the service or not, without having to bear some consequences. That would really change things.

Again, YouTube is not the one putting up works. Users of YouTube are. And if and when those people upload a video that is not covered by fair use or other user rights -- i.e., it is infringing -- then the copyright holder has every right, under the very DMCA that Taplin misstates above, to force the video down. And if YouTube doesn't take it down, then it faces all the consequences of being an infringer.

So what would "really change" if we removed the DMCA's safe harbors? Well, YouTube has already negotiated licenses with basically every record label and publisher at this point. So, basically nothing would change on YouTube. But, you know, for every other platform, they'd be screwed. So, Taplin's plan to "break up" Google... is to lock the company in as the only platform. Great.

And this leaves aside the fact (whether we like it or not) that YouTube's ContentID system, which allows copyright holders to "monetize" infringing works, has actually opened up a (somewhat strange) new revenue stream for artists, who are now profiting greatly from letting people use their works without going through the hassle of negotiating a full license.

I also think it would change the whole fake news conversation completely, because, once Facebook or YouTube or Google had to take responsibility for what’s on their services, they would have to be a lot more careful to monitor what goes on there.

Again... what? What in the "whole fake news conversation" has anything to do with copyright? This is just utter nonsense.

Second, if platforms are suddenly "responsible" for what's on their service, then... Taplin is saying that the very companies he hates, that he thinks are the ruination of culture and society, should be the final arbiters of what speech is okay online. Is that really what he wants? He wants Google and Facebook and YouTube -- three platforms he's spent years attacking -- determining if his own speech is fake news?

Really?

Because, let's face it, as much as I hate the term, this interview is quintessential fake news. Nearly every sentence Taplin says includes some false statement -- often multiple false statements. And the Boston Globe published it. Should the Boston Globe now be liable for Taplin's deranged understanding of the law? Should we be able to sue the Boston Globe because it published utter nonsense uttered by Jonathan Taplin? Because that's what he's arguing for. Oh, but, I forgot, according to Taplin, the Boston Globe -- as a newspaper -- has no such safe harbor, so it's already fair game. Sue away, people...

Wouldn’t that approach subject these services to death by a thousand copyright-infringement lawsuits?

It would depend on how it was put into practice. When someone tries to upload pornography to YouTube, an artificial intelligence agent sees a bare breast and shunts it into a separate queue. Then a human looks at it and says, “Well, is this National Geographic, or is this porn?” If it’s National Geographic it probably gets on the service, and if it’s porn it goes in the trash. So, it’s not like they’re not doing this already. It’s just they’ve chosen to filter porn off of Facebook and Google and YouTube but they haven’t chosen to filter ISIS, hate speech, copyrighted material, fake news, that kind of stuff.

This is just a business decision on their part. They know every piece of content that’s being uploaded because they used the ID to decide who gets the advertising. So they could do all of this very easily. It’s just they don’t want to do it.

First off, finally, the Boston Globe reporter pushes back slightly. Not by correcting any of the many, many false claims that Taplin has made so far, but by highlighting a broader point: that Taplin's solution is completely idiotic and unworkable, because we already see the abuse that the DMCA takedown process gets. But... Taplin goes into spin mode and suggests there's some magic way that this system wouldn't be abused for censorship (even though the existing system is).

Then he offers a fantasy-land explanation of how YouTube moderation actually works. He's wrong. This is not how it works. Most content is never viewed by a human. But let's delve in deeper again. Taplin and some of his friends like to point to the automated filtering of porn. But porn is much easier to teach a computer to spot: a naked breast is a visual pattern you can train a classifier on pretty well. Fake news is not. Hate speech is not. Separately, notice that Taplin never once mentions ContentID in this entire interview? Even though it does the very thing he insists YouTube refuses to do? ContentID does exactly what he claims this porn filter is doing. But he pretends it doesn't exist and hasn't existed for years.

And the Boston Globe just lets it slide.
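To make that porn-versus-fake-news distinction concrete, here's a minimal sketch of the review-queue flow Taplin describes -- hypothetical code, not YouTube's actual pipeline, and `nudity_model` is an assumed pretrained classifier. It works for porn because nudity is a visual pattern a model can be trained to score; there is no equivalent one-number score a model can compute for "fake news," which hinges on context and facts about the world, not pixels.

```python
# A minimal sketch of classifier-plus-human-review routing, assuming a
# hypothetical pretrained nudity classifier (`nudity_model`). This is an
# illustration of the concept, not YouTube's real system.

def route_upload(frames, nudity_model, threshold=0.8):
    # Score sampled frames; flag the upload based on the worst one.
    score = max(nudity_model.predict(frame) for frame in frames)
    if score >= threshold:
        # A human now answers the question the model can't:
        # "is this National Geographic, or is this porn?"
        return "human_review_queue"
    return "publish"

# There is no analogous scorer for "fake news" or "hate speech" --
# those judgments depend on context, not visual patterns.
```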

Also, again, Taplin insists that YouTube and Facebook "haven't chosen to filter ISIS" even though both companies have done so for years. How does Taplin not know this? How does the Boston Globe reporter not know this? How does the Boston Globe think that its ignorant reporter should interview this ignorant person? Why did they then decide to publish any of this? Does the Boston Globe employ fact checkers at all? The mind boggles.

Meanwhile, we really shouldn't let it slide that Taplin -- when asked specifically about copyright infringement -- seems to argue that if copyright law was changed, it would somehow magically lead Google to stop ISIS videos, hate speech and fake news among other things. None of those things has anything to do with copyright law. Shouldn't he know this? Shouldn't the Boston Globe?

As for the second paragraph, it's also utter nonsense. YouTube "knows every piece of content that's being uploaded because they used the ID to decide who gets the advertising." What does that even mean? What is "the ID"? And, even in the cases where YouTube does decide to place ads on videos (which, again, is greatly restricted, and is not for all content), the fact that Google's algorithms can try to insert relevant ads does not mean that Google "knows" what's in the content. It just means that an algorithm does some matching. And, sure, Taplin might point out that if they can do that, why can't they also do it for copyright and ISIS? And the answer is that THEY DO. That's the whole fucking point.
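To see the gap between "matching" and "knowing," consider a deliberately naive sketch of keyword-based ad placement -- entirely hypothetical, and far simpler than anything Google runs. The point is that the algorithm pairs strings with strings; nothing in it ever "knows" what the video is about.

```python
# A deliberately naive, hypothetical sketch of keyword-based ad matching.
# It counts overlapping strings between video metadata and ad targeting
# terms -- at no point does anything "understand" the content.

def match_ad(video_tags, ads):
    def overlap(ad):
        return len(set(ad["keywords"]) & set(video_tags))
    best = max(ads, key=overlap)
    return best["name"] if overlap(best) > 0 else None

ads = [
    {"name": "guitar_lessons", "keywords": ["music", "guitar", "cover"]},
    {"name": "running_shoes", "keywords": ["fitness", "marathon"]},
]
print(match_ad(["music", "cover", "acoustic"], ads))  # -> guitar_lessons
```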

Again, why is the Boston Globe publishing utter nonsense?

Is Google trying to forestall this kind of regulation?

Ultimately YouTube is already moving towards being a service that pays content providers. They announced last month that they’re going to put up a YouTube music channel. And that will look much more like Spotify than it looks like YouTube. In other words, they will license content from providers, they will charge $10 a month for the service, and you will then get curated lists of music. From the point of view of the artists and the record company, it’ll be a lot better than the system that exists now — where essentially YouTube says to you, your content is going to be on YouTube whether you want it to or not, so check this box if you want us to give you a little bit of the advertising.

YouTube has been paying content providers for years. I mean, it's been years since the company announced that, in one year alone, it had paid musicians, labels and publishers over a billion dollars. And Taplin claims they're "moving" to such a model? Is he stuck in 2005? And they already license content from providers. The $10/month thing, again, is not new (it's been available for years), but that's a separate service, which is not the same as regular YouTube. And it has nothing to do with any of this. If the DMCA changed, then... that wouldn't have any impact at all on any of this.

Still, let's recap the logic here: So YouTube offering a music service, which it set up to compete with Spotify and Apple Music, and which has nothing to do with the regular YouTube platform, will somehow "forestall" taking away the DMCA's safe harbors? How exactly does that work? I mean, wouldn't the logic work the other way?

The whole interview is completely laughable. Taplin repeatedly makes claims that don't pass the laugh test for anyone with even the slightest knowledge of the space. And nowhere does the Boston Globe address the multiple outright factual errors. Sure, I can maybe (maybe?) understand not pushing back on Taplin in the moment of the interview. But why let this go to print without having someone (anyone?!?) with even the slightest understanding of the law or how YouTube actually works, check to see if Taplin's claims were based in reality? Is that really so hard?

Apparently it is for the Boston Globe and its "deputy editor" Alex Kingsbury.


Posted on Techdirt - 19 June 2018 @ 3:19am

Dear EU Parliament: Why Are You About To Allow US Internet Companies To Decide What EU Citizens Can Say Online?

from the such-a-bizarre-thing dept

We've pointed this out over and over again with regards to all of the various attempts to "regulate" the internet giants of Google and Facebook: nearly every proposal put forth to date creates a regulatory regime that Google and Facebook can totally handle. Sure, they might find it to be a nuisance, but it's well within the resources of both companies to handle whatever is thrown their way. However, most other companies are then totally fucked, because they simply cannot comply in any reasonable manner. And, yet, these proposals keep coming -- and people keep celebrating them in the false belief that they will somehow "contain" the two internet giants, when the reality is that they will lock them in as the de facto dominant internet players, making it nearly impossible for upstarts and competitors to enter the market.

This seems particularly bizarre when we're talking about the EU's approach to copyright. As we've been discussing over the past few weeks, the EU Parliament's Legal Affairs Committee is about to vote on the EU Copyright Directive, which has some truly awful provisions in it -- including Article 11's link tax and Article 13's mandatory filters. The rhetoric around both of these tends to focus on just how unfair it is that Google and Facebook have so much power, and are making so much money, while legacy companies (news publishers for Article 11 and recording companies for Article 13) aren't making as much as they used to.

But, as more and more people are starting to point out, if the Copyright Directive moves forward as is, it will only serve to lock in those two companies as the controllers of the internet. So why is it that the European Parliament seems hellbent on handing the internet over to American internet companies? In the link above, Cory Doctorow tries to parse out what the hell they're thinking:

These proposals will make starting new internet companies effectively impossible -- Google, Facebook, Twitter, Apple, and the other US giants will be able to negotiate favourable rates and build out the infrastructure to comply with these proposals, but no one else will. The EU's regional tech success stories -- say Seznam.cz, a successful Czech search competitor to Google -- don't have $60-100,000,000 lying around to build out their filters, and lack the leverage to extract favorable linking licenses from news sites.

If Articles 11 and 13 pass, American companies will be in charge of Europe's conversations, deciding which photos and tweets and videos can be seen by the public, and who may speak.

In a (possibly paywalled) article over at Wired looking at the Copyright Directive, Doctorow is also quoted explaining just how massively this system will be abused for censorship of EU citizens:

"Because the directive does not provide penalties for abuse – and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public display – it will be trivial to claim copyright over key works at key moments or use bots to claim copyrights on whole corpuses.

The nature of automated systems, particularly if powerful rightsholders insist that they default to initially blocking potentially copyrighted material and then releasing it if a complaint is made, would make it easy for griefers to use copyright claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum or, more generally, public domain content such as the entirety of Wikipedia or the complete works of Shakespeare.

"Making these claims will be MUCH easier than sorting them out – bots can use cloud providers all over the world to file claims, while companies like Automattic (Wordpress) or Twitter, or even projects like Wikipedia, would have to marshall vast armies to sort through the claims and remove the bad ones – and if they get it wrong and remove a legit copyright claim, they face unbelievable copyright liability."

As we noted yesterday in highlighting a new paper looking at what happened when similar laws were implemented, the increase in censorship is not an idle threat or crying wolf. It happens. Frequently.

And, yet, we still have EU politicians and supporters of the Copyright Directive -- even as they complain about Google and Facebook's power over the internet -- turning around and pushing for plans that will not only lock in both of those companies as the dominant internet players, but also hand them the sole power to censor the speech of EU citizens. And they're about to vote on this in just hours, and don't seem to have the first clue about what a dumb idea all of this is.


Posted on Techdirt - 18 June 2018 @ 3:30pm

UK Lawmaker Who Quizzed Facebook On Its Privacy Practices Doesn't Seem To Care Much About His Own Website's Privacy Practices

from the just-sayin' dept

Jason Smith, over at Indivigital, has been doing quite a job of late in highlighting the hypocrisy of European lawmakers screaming at internet companies over their privacy practices, while doing little on their own websites to live up to what they're demanding of those companies. He pointed out that the EU Commission itself appeared to be violating the GDPR, leading it to claim that it was exempt. And now he's got a new story up, pointing out that the website of UK Parliament member Damian Collins, chair of the Digital, Culture, Media and Sport Committee... does not appear to have a privacy policy in place, even though he took the lead in quizzing Facebook about its own privacy practices and its lack of transparency on how it treats user data.

Now, there are those of us who believe that privacy policies are a dumb idea that don't do anything to protect people's privacy -- but if you're going to be grandstanding about how Facebook is not transparent enough about how it handles user data, it seems like you should be a bit transparent yourself. Smith's article details how many other members of the Digital, Culture, Media and Sport Committee don't seem to be living up to their own standards. They may have been attacking social media sites... but were happy to include tracking widgets from those very same social media sites on their own sites.

Julie4Sunderland.co.uk is maintained on behalf of Julie Elliott MP, a fellow member of the Digital, Culture, Media and Sport Committee. It serves third-party content from Facebook and upwards of 18 cookies on visitor’s computers.

Likewise, websites of fellow members Jo Stevens, Simon Hart, Julian Knight, Ian Lucas, Rebecca Pow and Giles Watling are also collecting data on behalf of the social networking giant from their visitors.

The websites of Julian Knight, Ian Lucas, Giles Watling and Rebecca Pow also collect data on visitors for Twitter. Meanwhile, Rebecca Pow’s website sets third-party cookies from YouTube.com.

Damian Collins’s website features a cookie message however the link in the message takes the user to a contact page that contains a form that requests the user’s name and email address.

The page on which the form resides contains a link that activates a modal window and encourages the user to sign-up for Damian Collins’s email newsletter.

Moreover, the Parliamentary page for the Digital, Culture, Media and Sport committee is also setting and serving third-party cookies and content from Twitter.

Now, you can reasonably argue that the websites of politicians aren't the same as a social media giant used by like half of the entire world. And there is a point there. But it's also worth noting how accusatory politicians and others get towards social media sites when they don't seem to live up to the same standards on their own websites. Maybe Facebook should do better -- but the actions of these UK Parliament members, at the very least, suggest that even they recognize what they're demanding of Facebook is more cosmetic "privacy theater" than anything serious.


Posted on Techdirt - 18 June 2018 @ 11:57am

French Political Party Voting For Mandatory Copyright Filters Is Furious That Its YouTube Channel Was Deleted By Filter

from the but-we-didn't-mean-for-US dept

It's been a long tradition here on Techdirt to show how politicians and political parties pushing for stricter, more draconian copyright laws are often found violating those same laws. But the French Rassemblement National (National Rally) party is taking this to new levels -- whining about the enforcement of internet filters just as it's about to vote in favor of making such filters mandatory. Leaving aside that Rassemblement National, which is the party headed by Marine Le Pen, is highly controversial, and was formerly known as Front National, it is still an extremely popular political party in France. And, boy, is it ever pissed off that YouTube took down its YouTube channel over automatically generated copyright strikes. Le Pen is particularly angry that YouTube's automatic filters were unable to recognize that the party was just quoting other works:

Marine Le Pen was quoted as saying, “This measure is completely false; we can easily assert a right of quotation [to illustrate why the material was well within the law to broadcast]”.

Yes, but that's the nature of automated filters. They cannot tell what is "fair use" or what kinds of use are acceptable for commentary or criticism. They can only tell "was this work used?" and, if so, "take it down."
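A minimal sketch makes the limitation obvious. Below is a hypothetical ContentID-style lookup (real systems use perceptual fingerprints robust to re-encoding; a plain hash stands in just to keep the sketch runnable). Note what the function doesn't -- and can't -- take as input: whether the use is a quotation, criticism, parody or news report.

```python
import hashlib

# A hypothetical sketch of a ContentID-style lookup. Real filters use
# perceptual audio/video fingerprints; a plain hash stands in here.

registry = {}  # fingerprint -> action chosen by the rightsholder

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_work(work: bytes, action: str) -> None:
    registry[fingerprint(work)] = action  # e.g. "block" or "monetize"

def check_upload(upload: bytes) -> str:
    # There is no parameter for quotation, commentary or criticism.
    # The lookup answers "was this work used?" -- never "was this
    # use lawful?"
    return registry.get(fingerprint(upload), "allow")
```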

Given all that, and the fact that Le Pen complained that this was "arbitrary, political and unilateral," you have to think that her party is against the EU Copyright Directive proposal, which includes Article 13, which would make such algorithmic filters mandatory. Except... no. Within the EU Parliament, Rassemblement National is in a coalition with a bunch of other anti-EU parties known as Europe of Nations and Freedom, or ENF. And how does ENF feel about Article 13? MEP Julia Reda has a handy dandy chart showing that ENF is very much in favor of Article 13 (and the Article 11 link tax).

So... we have a major political party in the EU whose own YouTube channel has been shut down thanks to automated copyright filters, in the form of YouTube's ContentID. And that party is complaining that ContentID -- the most expensive and most sophisticated of all the copyright filters out there -- was unable to recognize that it was legally "quoting" another work... and its response is to vote to order every other internet platform to install its own filters. Really?


Posted on Techdirt - 18 June 2018 @ 10:44am

Lessons From Making Internet Companies Liable For Users' Speech: You Get Less Speech, Less Security And Less Innovation

from the not-good dept

Stanford's Daphne Keller is one of the world's foremost experts on intermediary liability protections and someone we've mentioned on the website many times in the past (and have had her on the podcast a few times as well). She's just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all -- a silly position that no one with any understanding of these issues actually holds. The concern is that as many people (including regulators) keep pushing to pin liability on internet companies for the activities of their users, it creates some pretty damaging side effects. Specifically, the paper details how it harms speech, makes us less safe, and harms the innovation economy. It's actually kind of hard to see what the benefit side is on this particular cost-benefit equation.

As the paper notes, it's quite striking how the demands people make of platforms keep changing. Some keep demanding that certain content be removed, while others freak out that too much content is being removed. And sometimes it's the same people (they want the "bad" stuff -- i.e., stuff they don't like -- removed, but get really angry when the stuff they do like is removed). Perhaps even more importantly, the questions behind why certain content may get taken down are the same ones that courts spend long, complex cases working through, with lots of nuance and detailed arguments going back and forth. And yet, many people seem to think that private companies are somehow equipped to credibly replicate that entire judicial process, without the time, knowledge or resources to do so:

As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people’s standards.

Keller then looked at a variety of examples involving intermediary liability to see what the evidence says would happen if we legally delegate private internet platforms into the role of speech police. It doesn't look good. Free speech will suffer greatly:

The first cost of strict platform removal obligations is to internet users’ free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms’ incentives to err on the side of taking things down. Germany’s new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove “obviously” unlawful content within twenty-four hours’ notice. This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician, and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign. We cannot know what other unnecessary deletions have passed unnoticed.

From there, the paper explores the issue of security. Attempts to stifle terrorists' use of online services by pressuring platforms to remove terrorist content may seem like a good idea (assuming we agree that terrorism is bad), but the actual impact goes way beyond just having certain content removed. And the paper looks at what the real-world impact of these programs has been in the realm of trying to "counter violent extremism."

The second cost I will discuss is to security. Online content removal is only one of many tools experts have identified for fighting terrorism. Singular focus on the internet, and overreliance on content purges as tools against real-world violence, may miss out on or even undermine other interventions and policing efforts.

The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside—preventing terrorist attacks—is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go “faster and further” in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence—not just online speech—need to factor in these unintended consequences if they are to set wise policies.

Finally, the paper looks at the impact on innovation and the economy and, again, notes that putting liability on platforms for user speech can have profound negative impacts.

The third cost is to the economy. There is a reason why the technology-driven economic boom of recent decades happened in the United States. As publications with titles like “How Law Made Silicon Valley” point out, our platform liability laws had a lot to do with it. These laws also affect the economic health of ordinary businesses that find customers through internet platforms—which, in the age of Yelp, Grubhub, and eBay, could be almost any business. Small commercial operations are especially vulnerable when intermediary liability laws encourage over-removal, because unscrupulous rivals routinely misuse notice and takedown to target their competitors.

The entire paper weighs in at a neat 44 pages, and it's chock full of useful information and analysis on this very important question. It should be required reading for anyone who thinks that there are easy answers to the question of what to do about "bad" content online. It highlights that we actually have a lot of data and evidence to answer these questions -- yet many legislators seem to be regulating based on how they "think" the world works, rather than how it actually works.

Current attitudes toward intermediary liability, particularly in Europe, verge on “regulate first, ask questions later.” I have suggested here that some of the most important questions that should inform policy in this area already have answers. We have twenty years of experience to tell us how intermediary liability laws affect, not just platforms themselves, but the general public that relies on them. We also have valuable analysis and sources of law from pre-internet sources, like the Supreme Court bookstore cases. The internet raises new issues in many areas—from competition to privacy to free expression—but none are as novel as we are sometimes told. Lawmakers and courts are not drafting on a blank slate for any of them.

Demands for platforms to get rid of all content in a particular category, such as “extremism,” do not translate to meaningful policy making—unless the policy is a shotgun approach to online speech, taking down the good with the bad. To “go further and faster” in eliminating prohibited material, platforms can only adopt actual standards (more or less clear, and more or less speech-protective) about the content they will allow, and establish procedures (more or less fair to users, and more or less cumbersome for companies) for enforcing them.

On internet speech platforms, just like anywhere else, only implementable things happen. To make sound policy, we must take account of what real-world implementation will look like. This includes being realistic about the capabilities of technical filters and about the motivations and likely choices of platforms that review user content under threat of liability.

This is an important contribution to the discussion, and highly recommended. Go check it out.


Posted on Techdirt - 18 June 2018 @ 3:23am

Norwegian Court Orders Website Of Public Domain Court Decisions Shut Down With No Due Process

from the this-is-messed-up dept

What's up, Europe? We've been talking a lot about the insanity around the new copyright directive, but the EU already has some pretty messed up copyright and related rights laws on the books that are creating absurd situations. The following is one of them. One area where US and EU laws differ is on the concept of the "database right." The US does not grant a separate copyright on a collection of facts. The EU does. Studies have shown what a horrible idea this is, and if you compare certain database-driven industries in the US and the EU, you discover how much damage database rights do to innovation, competition and the public. But, alas, they still exist. And they continue to be used in positively insane ways.

Enter Hakon Wium Lie. You might know him as basically the father of Cascading Style Sheets (CSS). Or the former CTO of the Opera browser. Or maybe even as the founder of the Pirate Party in Norway. However you know him, he's been around a while in this space, and knows what he's talking about. Via Boing Boing we learn that: (1) Wium Lie has been sued for the completely absurd reason of (2) helping a site publish public domain court rulings that (3) are not even protected by a database right, and that (4) the judge ruled in favor of the plaintiff (5) in 24 hours (6) before Wium Lie could respond and (7) ordered him to pay the legal fees of the other side.

I've numbered these because I had to break out each absurd part separately just to start to try to comprehend just how ridiculous the whole thing is. And now, let's go through how each part is absurd in turn:

1. Wium Lie is being sued as an accomplice to the site rettspraksis.no by an operation called Lovdata. Wium Lie tells the entire history in his post, but way back in the early days of the web, while he was helping to create CSS, Wium Lie also helped put Norway's (public domain) laws online. At the time, that same company, Lovdata, was charging people $1 per minute to access the laws. Really. Eventually, Lovdata dropped the fees and is now the official free publisher of the laws in Norway. Of course, statutory law is just one part of "the law." Case law is also quite important, and (thankfully) court orders (which make up the bulk of case law) are also in the public domain in Norway. However, Lovdata charges an absurd $1,500 per year to access those decisions. And it claims a database right* on the collection it makes available online.

2. And yet, Wium Lie is still being sued. Why? When he saw that the website rettspraksis.no was trying to collect and publish these decisions, he borrowed Lovdata CD-ROMs from the National Library in Oslo -- specifically, the 2002 version of the CD-ROM. This date is important, because the EU's database rights last for... 15 years. So the right on a 2002 database (and, yes, Wium Lie points out that it's odd to call a stack of documents a database...) expired back in 2017, and the material is no longer protected.

3. So, yeah, the data is clearly in the public domain, and Wium Lie didn't violate anyone's copyright or database rights. Wium Lie notes that Lovdata didn't even try to contact him or rettspraksis.no before suing, but just told the court that they must be scraping the expensive online database:

I'm very surprised that Lovdata didn't contact us to ask us where we had copied the court decisions from. In the lawsuit, they speculate that we have siphoned their servers by using automated «crawlers». And, since their surveillance systems for detecting siphoning were not triggered, our crawlers must have been running for a very long time, in breach of the database directive. The correct answer is that we copied the court decisions from the old discs I found in the National Library. We would have told them this immediately if they had simply asked.

4. This is the most perplexing part of all of this to me. I can't read the Norwegian verdict (which, for Lovdata's lawyers, I did not get from scraping your site!), and I don't know enough about Norwegian law, but this seems positively bizarre. It appears to go against fundamental concepts of basic due process: how could a judge come out with a verdict like this?

5. ?!?>#$@!%#!%!@!%!#%!!

6. Again: is this how due process works in Norway? In the US, of course, there are things like preliminary injunctions that might be granted pretty quickly, but even then -- especially when it comes to gagging speech -- there is supposed to be at least some element of due process. Here there appears to have been something close to none. Furthermore, in the US, this kind of thing would only be allowed if one side could show irreparable harm from leaving the site up. It is difficult to see how anyone could legitimately argue irreparable harm from the publication of the country's own (public domain) court rulings.

I find it shocking that the judge ordered the take down of our website, rettspraksis.no, within 24 hours of the lawsuit being filed and WITHOUT HEARING ARGUMENTS FROM US. (Sorry for switching to CAPS, but this is really important.) We were ready and available to bring forth our arguments but were never given the chance. Furthermore, upon learning of the lawsuit, we, as a precaution, had voluntarily removed our site. If the judge had bothered to check he would have seen that what he was ordering was already done. There should be a much higher threshold for judges to close websites at just the request of some organization.

7. And, even if this was the equivalent of an injunction, to also tell Wium Lie and rettspraksis.no that they need to pay Lovdata's legal fees is just perplexing.

the two of us, the volunteers, were slapped with a $12,000 fee to cover the fees of Lovdata's own lawyer, Jon Wessel-Aas. So, the judge actually ordered that we had to pay the lawyer from the opposite side, WITHOUT HAVING BEEN GIVEN A CHANCE TO ARGUE OUR CASE.

This whole situation is infuriating. Being sued is a horrible experience in the first place. But the details here pile absurd upon preposterous upon infuriating. The whole database rights concept is already a troublesome thing, but this application of it is positively monstrous. Wium Lie now has some good lawyers working for him, and hopefully this whole travesty will get overturned, but what a clusterfuck.

* A separate tangent that I'll just note here rather than cluttering up all of the above. I was a bit confused to read references to the EU's database directive/database rights, because Norway is not part of the EU. However, since it is a part of the European Economic Area (yes -- this can all get confusing), it has apparently agreed to enact legislation that complies with certain EU Directives, including the Copyright and Database Directives.


Posted on Free Speech - 15 June 2018 @ 3:38am

UN Free Speech Expert: EU's Copyright Directive Would Be An Attack On Free Speech, Violate Human Rights

from the don't-let-it-happen dept

We've been writing a lot about the EU's dreadful copyright directive, but that's because it's so important to a variety of issues on how the internet works, and because it's about to go up for a vote in the EU Parliament's Legal Affairs Committee next week. David Kaye, the UN's Special Rapporteur on freedom of expression, has now chimed in with a very thorough report, highlighting how Article 13 of the Directive -- the part about mandatory copyright filters -- would be a disaster for free speech and would violate the UN's Declaration on Human Rights, and in particular Article 19, which (in case you don't know) says:

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

As Kaye's report notes, the upload filters of Article 13 of the Copyright Directive would almost certainly violate this principle.

Article 13 of the proposed Directive appears likely to incentivize content-sharing providers to restrict at the point of upload user-generated content that is perfectly legitimate and lawful. Although the latest proposed versions of Article 13 do not explicitly refer to upload filters and other content recognition technologies, it couches the obligation to prevent the availability of copyright protected works in vague terms, such as demonstrating “best efforts” and taking “effective and proportionate measures.” Article 13(5) indicates that the assessment of effectiveness and proportionality will take into account factors such as the volume and type of works and the cost and availability of measures, but these still leave considerable leeway for interpretation.

The significant legal uncertainty such language creates does not only raise concern that it is inconsistent with the Article 19(3) requirement that restrictions on freedom of expression should be “provided by law.” Such uncertainty would also raise pressure on content sharing providers to err on the side of caution and implement intrusive content recognition technologies that monitor and filter user-generated content at the point of upload. I am concerned that the restriction of user-generated content before its publication subjects users to restrictions on freedom of expression without prior judicial review of the legality, necessity and proportionality of such restrictions. Exacerbating these concerns is the reality that content filtering technologies are not equipped to perform context-sensitive interpretations of the valid scope of limitations and exceptions to copyright, such as fair comment or reporting, teaching, criticism, satire and parody.

Kaye further notes that copyright is not the kind of thing that an algorithm can readily determine, and the fact-specific and context-specific nature of copyright requires much more than just throwing algorithms at the problem -- especially when a website may face legal liability for getting it wrong. And even if the Copyright Directive calls for platforms to have remediation processes, that takes the question away from actual due process on these complex issues.

The designation of such mechanisms as the main avenue to address users’ complaints effectively delegates content blocking decisions under copyright law to extrajudicial mechanisms, potentially in violation of minimum due process guarantees under international human rights law. The blocking of content – particularly in the context of fair use and other fact-sensitive exceptions to copyright – may raise complex legal questions that require adjudication by an independent and impartial judicial authority. Even in exceptional circumstances where expedited action is required, notice-and-notice regimes and expedited judicial process are available as less invasive means for protecting the aims of copyright law.

In the event that content blocking decisions are deemed invalid and reversed, the complaint and redress mechanism established by private entities effectively assumes the role of providing access to remedies for violations of human rights law. I am concerned that such delegation would violate the State’s obligation to provide access to an “effective remedy” for violations of rights specified under the Covenant. Given that most of the content sharing providers covered under Article 13 are profit-motivated and act primarily in the interests of their shareholders, they lack the qualities of independence and impartiality required to adjudicate and administer remedies for human rights violations. Since they also have no incentive to designate the blocking as being on the basis of the proposed Directive or other relevant law, they may opt for the legally safer route of claiming that the upload was a terms of service violation – this outcome may deprive users of even the remedy envisioned under Article 13(7). Finally, I wish to emphasize that unblocking, the most common remedy available for invalid content restrictions, may often fail to address financial and other harms associated with the blocking of time-sensitive content.

He goes on to point out -- as we have -- that while large platforms may be able to deal with all of this, smaller ones are going to be in serious trouble:

I am concerned that the proposed Directive will impose undue restrictions on nonprofits and small private intermediaries. The definition of an “online content sharing provider” under Article 2(5) is based on ambiguous and highly subjective criteria such as the volume of copyright protected works it handles, and it does not provide a clear exemption for nonprofits. Since nonprofits and small content sharing providers may not have the financial resources to establish licensing agreements with media companies and other right holders, they may be subject to onerous and legally ambiguous obligations to monitor and restrict the availability of copyright protected works on their platforms. Although Article 13(5)’s criteria for “effective and proportionate” measures take into account the size of the provider concerned and the types of services it offers, it is unclear how these factors will be assessed, further compounding the legal uncertainty that nonprofits and small providers face. It would also prevent a diversity of nonprofit and small content-sharing providers from potentially reaching a larger size, and result in strengthening the monopoly of the currently established providers, which could be an impediment to the right to science and culture as framed in Article 15 of the ICESCR.

It's well worth reading the whole thing. I don't know if this will have more resonance with the members of the EU Parliament's Legal Affairs Committee, but seeing as they keep brushing off or ignoring most people pointing out these very same points, one hopes that someone in Kaye's position will at least get them to think twice about continuing to support such a terrible proposal.


Posted on Techdirt - 14 June 2018 @ 1:27pm

Once Again Congress Votes Proactively To Keep Itself Ignorant On Technology

from the a-series-of-tubes dept

Four years ago, we wrote about the House voting to keep itself ignorant on technology, and unfortunately, I can now basically just rerun that post again, with a few small tweaks, so here we go:

The Office of Technology Assessment existed in Congress from 1972 until 1995, when it was defunded by the Newt Gingrich-led "Contract with America" team. The purpose was to actually spend time analyzing technology issues and to provide Congress with objective analysis of the impact of technology and the policies that Congress was proposing. Remember back when there was the big SOPA debate and folks in Congress kept talking about how they weren't nerds and needed to hear from the nerds? Right: the OTA was supposed to be those nerds, but it hasn't existed in over two decades -- even though it still exists in law. It just isn't funded.

Rep. Mark Takano (in 2014 it was Rush Holt) thought that maybe we should finally provide at least a little bit of money to test bringing back the OTA and to help better advise Congress. While some would complain about Congress spending any money, this money would better inform Congress so that it stops making bad regulations related to technology -- bad regulations that cost a hell of a lot more than the $2.5 million Takano's amendment proposed. Also, without the OTA, Congress is much more reliant on very biased lobbyists, rather than a truly independent government organization.

The fact that we're seeing this kind of nonsense in Congress should show why we need it:

A quartet of tech experts arrived at a little-noticed hearing at the U.S. Capitol in May with a message: Quantum computing is a bleeding-edge technology with the potential to speed up drug research, financial transactions and more.

To Rep. Adam Kinzinger, though, their highly technical testimony might as well have been delivered in a foreign language. “I can understand about 50 percent of the things you say,” the Illinois Republican confessed.

But, alas, like so many things in Congress these days, the issue of merely informing themselves has become -- you guessed it -- partisan. The amendment failed 195 to 217 on mostly partisan lines (15 Republicans voted for it vs. 211 against, while 180 Democrats voted for it and only 6 against). If there's any silver lining, that's slightly better than in 2014, when a similar vote failed 164 to 248. So... progress?

Either way, when Congress is ignorant, we all suffer. That so many in Congress are voting to keep themselves and their colleagues ignorant should be seen as a problem.


Posted on Techdirt - 14 June 2018 @ 3:21am

European Citizens: You Stopped ACTA, But The New Copyright Directive Is Much, Much Worse: Speak Up

from the protect-the-internet dept

It's understandable that people are getting fatigued from all the various attacks on the internet, but as I've noted recently, one of the biggest threats to our open internet is the incredibly bad Copyright Directive that is on the verge of being voted on by the EU Parliament's Legal Affairs Committee. The Directive is horrible on many fronts, and we've been highlighting two key ones. First, the dangerous link tax and, second, the mandatory upload censorship filters. Each of these could have major ramifications for how the internet will function.

Incredibly, both are driven mainly by animus towards Google from legacy industries that feel left behind. The link tax is the brainchild of various news publishers, while the upload filters are mainly driven by the recording industry. But, of course, what should be quite obvious at this point is that both of these ideas will only make Google stronger while severely limiting smaller competitors. Google can pay the link tax. Google has already built perhaps the most sophisticated content filtering system (which still sucks). Nearly everyone else cannot. So these moves don't hurt Google. They hurt all of Google's possible competitors (including many European companies).

Six years ago, there was another horrible copyright threat in the EU: ACTA, the "anti-counterfeiting trade agreement" being pushed (note a pattern here) by legacy copyright industries looking to expand copyright law in a misguided attack on Google. Like this time, the horrible plan was mainly pushed by the EU Commission. But with ACTA, the EU Parliament stepped up and rejected it. However, that only happened after citizens hit the streets all over Europe to protest ACTA.

It is not realistic to expect everyone to take to the streets every time politicians are about to do something bad to the internet or to copyright law. That's not going to happen. But the new Copyright Directive is significantly worse than anything that was in ACTA, and if the EU Parliament doesn't realize that by next week, the internet we know and love may be fundamentally changed in a way that we will all come to regret. I mentioned these already, but check out SaveYourInternet.eu, ChangeCopyright.org and SaveTheLink.org.

You can (and should) also follow MEP Julia Reda who has been leading the charge against these awful proposals and who has been posting how to help stop it on her website and on her Twitter feed. You can also listen to Reda discuss all of this on our podcast.


Posted on Techdirt - 13 June 2018 @ 3:40pm

'Transparent' FCC Doesn't Want To Reveal Any Details About Ajit Pai's Stupid Reese's Mug

from the bringing-transparency-back dept

One of FCC Chair Ajit Pai's claims about how he's changed the FCC is that he's made it more transparent. And, to be fair, he did make one key change that his predecessors failed to make: releasing the details of rulemakings before they're voted on. That was good. But in so many other ways, Pai has been significantly less than transparent. And this goes all the way down to incredibly stupid things, like his silly giant Reese's coffee mug. That mug is so famous that even John Oliver mocked it in his story on net neutrality:

Taylor Amarel had some questions about the mug, and made a FOIA request using Muckrock that might shed some light on the mug (and, perhaps, a few other things):

I would like to obtain all emails sent to, from, or copied to Deanne Erwin, Executive Assistant, containing any of the following non-case-sensitive key-strings: “reeses”, “ethics”, “mug”, “liberals”, or “Reese’s” from January 1, 2017 to present day.

But the wonderfully "transparent" Ajit Pai... apparently didn't want that. The FCC's General Counsel sent back an oddly accusatory email to Amarel, demanding a ridiculous amount of completely unnecessary information -- claiming it needed that info to assess fees to respond to the FOIA request:

In our attempts to discern your fee categorization, we became aware that the name you provided, Taylor Amarel, is likely a pseudonym. In order to proceed with your request, please provide us with your name, your personal mailing address, and a phone number where you can be reached.... We ask that you provide this information by May 29, 2018. If we do not hear from you by then, we will assume you are unwilling to provide this information and will close your requests accordingly.

As Muckrock noted, there is no reason why anyone should need to prove that they are using their real name or to provide all this personal info to the FCC, and it feels like an intimidation technique. Muckrock does note that such info might be useful in determining if Amarel should be granted media status, which might help waive fees, but Amarel did not request to be covered under such status.

Amarel handed over the info... and was then told that it would cost $233 to get the emails related to Pai's Reese's mug. Using Muckrock's own crowdfunding platform, users chipped in to cover the fee, so hopefully at some point the FCC will live up to its legally required transparency and tell us about that stupid mug.


Posted on Techdirt - 13 June 2018 @ 10:44am

Hey Google: Stop Trying To Patent A Compression Technique An Inventor Released To The Public Domain

from the being-evil dept

For the most part, Google has actually been one of the good guys on patent issues. Unlike some other Silicon Valley companies, Google has long resisted using its patents to go after others, instead only using the patents defensively. It has also fought for patent reform and experimented with new models to keep its own patents out of the hands of patent trolls. But it's been involved in an ongoing fight to patent something that an earlier inventor deliberately released into the public domain, and it reflects incredibly poorly on Google to keep fighting for this.

A Polish professor, Jarek Duda, came up with a new compression technique known as asymmetric numeral systems (ANS) years back, and decided to release it to the public domain, rather than lock it up. ANS has turned out to be rather important, and lots of companies have made use of it. Last summer, Duda noticed that Google appeared to be trying to patent the idea both in the US and around the globe.
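For the technically curious, here's a minimal sketch of the core rANS (range ANS) transform at the heart of Duda's idea, assuming symbol frequencies that sum to a power of two and using Python's big integers in place of the renormalized state a real codec would keep. It's an illustration of the public-domain concept, not a reading of Google's patent claims.

```python
# A minimal sketch of rANS coding, the core of Duda's asymmetric numeral
# systems. `freqs` holds symbol frequencies, `cum` their cumulative
# starts, and `total` their sum (a power of two here).

def rans_encode(symbols, freqs, cum, total):
    state = total
    for s in reversed(symbols):  # the decoder pops symbols in reverse
        f = freqs[s]
        state = (state // f) * total + cum[s] + (state % f)
    return state

def rans_decode(state, n, freqs, cum, total):
    out = []
    for _ in range(n):
        slot = state % total  # which symbol's range does the state land in?
        s = next(x for x in freqs if cum[x] <= slot < cum[x] + freqs[x])
        state = freqs[s] * (state // total) + slot - cum[s]
        out.append(s)
    return out

freqs = {"a": 3, "b": 1}  # "a" is three times as likely as "b"
cum, total = {"a": 0, "b": 3}, 4
state = rans_encode(list("abaab"), freqs, cum, total)
assert rans_decode(state, 5, freqs, cum, total) == list("abaab")
```

The trick -- spreading symbols across the integers in proportion to their probability -- is what makes ANS both fast and close to entropy-optimal, and it's this core idea that Duda deliberately placed in the public domain, and that Google's applications build on.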

Tragically, this happened just weeks after Duda had called out other attempts to patent parts of ANS, and specifically said he hoped that companies "like Google" would stand up and fight against such attempts. Three weeks later he became aware of Google's initial patent attempt and noted "now I understand why there was no feedback" on his request to have companies like Google fight back against attempts to patent ANS. In that same thread, he details how there is nothing new in that patent, and calls it "completely ridiculous." Despite noting that he can't afford to hire a patent lawyer, he's been trying to get patent offices to reject this patent, wasting a bunch of time and effort.

While a preliminary ruling in Europe appeared to side with Duda, accepting his evidence of prior art, Google is still fighting against that ruling and is continuing its efforts to patent the same thing in the US. This is getting new attention now after Tim Lee at Ars Technica wrote about the story, but it's been covered elsewhere in the past, including getting lots of attention on Reddit a year ago and Hacker News soon after that.

Google's responses to Lee at Ars Technica are simply ridiculous. First, it claimed that Duda's invention was merely "a theoretical concept" while it is trying to patent "a specific application of that theory that reflects additional work by Google's engineers." But if you read through the analysis by many people who understand the space, that doesn't appear to be the case. There's very little in the Google patent that appears "new," or non-obvious based on what Duda and others had already disclosed.

Google's second response is even more nonsensical:

"Google has a long-term and continuing commitment to royalty-free, open source codecs (e.g., VP8, VP9, and AV1) all of which are licensed on permissive royalty-free terms, and this patent would be similarly licensed."

While that's true, that's no excuse for locking up what's in the public domain and promising to treat it nicely.

The thing is, there is simply no reason for Google to continue down this path. Again, the company has almost never been an aggressor on patents, preferring to use them defensively. And it can still do that here -- by just pointing to the public domain to invalidate anyone else's attempt to patent this. The fact that Google is being slammed in various forums over this (and has been since a year ago) should have clued the company in to the fact that (1) this isn't necessary and (2) harming its own reputation with engineers just to secure a patent it doesn't need is not a good idea.

Google has tons of patents. It doesn't need this one. If it really thinks that its own invention here goes beyond what Duda did -- and Ars Technica notes that Google ignored multiple requests to explain what is different in its patent application -- then the company needs to be much more transparent and upfront about what is different from Duda's work. It can just as easily release that information to the public domain as well. Yes, that would be giving up on one patent, but Google can survive donating a patentable idea to the public domain if it actually has one.


Posted on Free Speech - 12 June 2018 @ 12:03pm

High School Student's Speech About Campus Sexual Assault Gets Widespread Attention After School Cuts Her Mic

from the streisand-high dept

It's that time of year when kids are graduating from high school, and the age-old tradition of the valedictorian speech is happening all around the country. While exciting for the kids, families and other students, these kinds of speeches are generally pretty quickly forgotten, and they certainly tend not to make the national news. However, in nearby Petaluma, California, something different happened, all because a bunch of spineless school administration officials freaked out that the valedictorian, Lulabel Seitz, wanted to discuss sexual assault. During her speech, the school cut her mic when she started talking about that issue (right after she talked about how the whole community had worked together and fought through lots of adversity, including the local fires that ravaged the area a few months back). Seitz has since posted video of both her mic being cut off and of her delivering the entire speech directly to a camera.

And, of course, now that speech -- and the spineless jackasses who cut the mic -- are getting national news coverage. The story of her speech and the mic being cut has been on NPR, CBS, ABC, CNN, Time, the NY Post, the Washington Post and many, many more.

In the ABC story, she explains that they told her she wasn't allowed to "go off script" (even pulling her out of a final exam to tell her they had heard rumors she was going to go off script, and that she wasn't allowed to say anything negative about the school) and that's why the mic was cut, even though the school didn't know what she was going to say. She also notes -- correctly -- that it was a pretty scary thing for her to continue to go through with the speech she wanted to give, despite being warned (for what it's worth, decades ago, when I was in high school, I ended up in two slightly similar situations, with the administration demanding I edit things I was presenting -- in one case I caved and in one I didn't -- and to this day I regret caving). Indeed, she deserves incredible kudos for still going through with her speech, and it's great to see the Streisand Effect make so many more people aware of (1) her speech and (2) what a bunch of awful people the administrators at her school are for shutting her speech down.

As for the various administrators, their defense of this action is ridiculous. They're quoted in a few places, but let's take the one from the Washington Post:

“In Lulabel’s case, her approved speech didn’t include any reference to an assault,” [Principal David Stirrat] said. “We certainly would have considered such an addition, provided no individuals were named or defamed.”

As Seitz notes, she never intended to name names, and the school had told her so many times not to talk about these things that it was obvious to her she wouldn't have been allowed to give the speech if she had submitted the full version. In the ABC interview she explained that, rather than just letting the valedictorian speak as usual, the school had actually made her "apply" to speak.

Dave Rose, an assistant superintendent, told the Press Democrat that he could remember only one other time that administrators had disconnected a microphone during a student’s graduation speech in the past seven years, but said he believed it was legal.

“If the school is providing the forum, then the school has the ability to have some control over the message,” Rose said.

Actually, that's not how the First Amendment works. Schools can impose some limits on student speech, but not limits based on the content of the message, which appears to be exactly what happened here. Of course, I doubt Seitz will go to court over this, as it's not worth it -- but thanks to the Streisand Effect, she doesn't need to. The world has learned about her speech... and about how ridiculous the administrators in her school district are.


Posted on Techdirt - 12 June 2018 @ 9:29am

Ending The Memes: EU Copyright Directive Is No Laughing Matter

from the it's-bad dept

On Friday, I wrote about the many problems with the link tax part of the proposed EU copyright directive -- but that's only part of the problem. The other major concern is the mandatory upload filters. As we discussed with Julia Reda during last week's podcast, the upload filters may be even more nefarious. Even the BBC has stepped up with an article about how the directive could put an end to memes. That's an exaggeration, but only a slight one. Despite the fact that the E-Commerce Directive already makes it clear that platforms should not be liable for user-posted content absent notice, and that there can be no "general monitoring" obligation, the proposed Article 13 would require all sites to have a way to block copyright-covered content from being uploaded without the copyright holder's permission.

As per usual, this appears to have been written by people with little understanding of how the internet works, or of how the rule would impact a wide variety of services. Indeed, almost nothing about it makes any sense. Even if you argue that it's designed to target the big platforms -- the Googles and Facebooks of the world -- it fails: both Google and Facebook already run expensive filtering systems because they decided it was good for business at their scale. And even if you argue that filters make sense for platforms like YouTube, the proposal doesn't account for how much copyright actually covers, or for the sheer impossibility of building filters that work across everything.

How would a site like Instagram create a working filter? Could it catch direct 100% copies? Sure, probably. But what if you post a photo to Instagram of someone standing in a room that has a copyright-covered photograph or painting on the wall? Does that need to be blocked? What about a platform like GitHub, where tons of code is posted? Is GitHub responsible for cataloging every bit of copyright-covered code and making sure no one copies any of it? What about sites that aren't directly about the content but still involve copyright-covered content, such as Tinder? Many of the photos of people on Tinder are covered by copyright, often held by a photographer rather than the uploader. Will Tinder need a filter that blocks all of those uploads? Who would that help, exactly? How about a blog like ours? Are we responsible for making sure no one posts a copyright-covered quote in the comments? How are we supposed to design and build a database of all copyright-covered content to block such uploads (and wouldn't such a database potentially create an even larger copyright question in the first place)? What about a site like Airbnb, if a photo of a home includes copyright-covered content in the background? Kickstarter? Patreon? I'm not sure how either service (which, we should remind you, both help artists get paid) could really function if this becomes law. Would they need a filter to block creators from uploading their own works?

And that leaves out the even more fundamental question of how filters handle things like fair use or parody. To date, they don't. Making such filters mandatory, even for smaller sites, would be a complete and total disaster for how the internet works.
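To see why "catching 100% copies" is the trivial part and everything else is the impossible part, here's a minimal sketch of the naive filter described above -- purely illustrative, not any platform's actual system:

    import hashlib

    # Hypothetical registry of digests for known copyrighted files.
    # (The entry below is just a placeholder, not a real work.)
    KNOWN_COPYRIGHTED = {
        "placeholder-digest-of-a-protected-file",
    }

    def is_exact_copy(upload: bytes) -> bool:
        """Flag an upload only if it's a byte-for-byte copy of a known work."""
        return hashlib.sha256(upload).hexdigest() in KNOWN_COPYRIGHTED

    # A crop, a re-encode, or a meme caption changes every byte, so this
    # misses nearly all real-world reuse. And even a perfect match answers
    # the wrong question: it identifies the work, but cannot tell parody,
    # quotation, criticism, or a licensed upload apart from infringement.
    print(is_exact_copy(b"someone's vacation photo"))  # False

Production systems like YouTube's Content ID use perceptual fingerprints rather than plain hashes, so they survive re-encoding -- but the fundamental gap is the same: matching can tell you what was uploaded, never whether the use is lawful.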

This is why it is not hyperbolic at all to suggest that this change to how the EU handles copyright could have massive consequences for how the internet functions. At the very least, it is likely to shrink the number of places where users can participate, because compliance costs will price out tons of services. It takes the internet far, far away from its core as a communications platform and pushes it toward being a broadcast-only medium. Perhaps that's what the EU really wants, but at least the discussion should be honest on that point. So far, it is not. The debate covers the usual ground, claiming that copyright holders are somehow being ripped off by the internet -- a claim stated without evidence. If the EU wants to fundamentally change how the internet works, it should at least justify those changes with something real and be willing to explain why they are acceptable. To date, that has not happened.

Internet companies are trying to speak out about this, but many are so busy fighting other fires -- such as the net neutrality repeal here in the US -- that it's difficult to run over to Europe to point out just how moronic this is. Automattic (the people behind WordPress) has put out a substantial statement about the problems with this plan that is well worth reading:

We’re against the proposed change to Article 13 because we have seen, first-hand, the dangers of relying on automated tools to police nuanced speech and copyright issues. Bots or algorithms simply cannot determine whether a blog post, photo in a news article, or video posted to a website is copyright infringement or legitimate use. This is especially true on a platform like wordpress.com, where copyrighted materials are legitimately posted in the context of news articles, commentary, criticism, remixing, memes — thousands of times per day.

We’ve also seen how copyright enforcement, without adequate procedures and safeguards to protect free expression, skews the system in favor of large, well-funded players, and against those who need protection the most: individual website owners, bloggers, and small publishers who don’t have the resources or legal wherewithal to defend their legitimate speech.

Based on our experience, the changes to Article 13, while well-intentioned will almost certainly lead to a flood of unintended, but very real, censorship and chilling of legitimate, important, online speech.

Reddit has also put out a statement:

Article 13 would force internet platforms to install automatic upload filters to scan (and potentially censor) every single piece of content for potential copyright-infringing material. This law does not anticipate the difficult practical questions of how companies can know what is an infringement of copyright. As a result of this big flaw, the law’s most likely result would be the effective shutdown of user-generated content platforms in Europe, since unless companies know what is infringing, we would need to review and remove all sorts of potentially legitimate content if we believe the company may have liability.

Finally, a long list of internet luminaries, including Tim Berners-Lee, Vint Cerf, Brewster Kahle, Katherine Maher, Bruce Schneier, Dave Farber, Pam Samuelson, Mitch Kapor, Tim O'Reilly, Guido van Rossum, Mitchell Baker, Jimmy Wales, and many more have put out quite a statement on how bad this is:

In particular, far from only affecting large American Internet platforms (who can well afford the costs of compliance), the burden of Article 13 will fall most heavily on their competitors, including European startups and SMEs. The cost of putting in place the necessary automatic filtering technologies will be expensive and burdensome, and yet those technologies have still not developed to a point where their reliability can be guaranteed. Indeed, if Article 13 had been in place when the Internet's core protocols and applications were developed, it is unlikely that it would exist today as we know it.

The impact of Article 13 would also fall heavily on ordinary users of Internet platforms—not only those who upload music or video (frequently in reliance upon copyright limitations and exceptions, that Article 13 ignores), but even those who contribute photos, text, or computer code to open collaboration platforms such as Wikipedia and GitHub.

Scholars also doubt the legality of Article 13; for example, the Max Planck Institute for Innovation and Competition has written that “obliging certain platforms to apply technology that identifies and filters all the data of each of its users before the upload on the publicly available services is contrary to Article 15 of the InfoSoc Directive as well as the European Charter of Fundamental Rights.”

It doesn't have to be this way. There are campaign pages for those in Europe to contact their MEPs at SaveYourInternet.eu and ChangeCopyright.org. As it stands, the EU's Legal Affairs Committee will vote on this proposal next week. If it passes intact, it's very likely that it will become official, and all EU member countries will need to change their laws to implement this ridiculous and counterproductive plan.

There are, of course, all sorts of threats to the internet. In the past, SOPA/PIPA and ACTA would have changed fundamental concepts. Here in the US, we've just dumped net neutrality in the garbage. How the internet is shaped post-GDPR is still being figured out. But I can't think of a greater threat to the basic functioning of the internet than the current EU proposal. And yet it seems not to be getting nearly as much attention as those other fights. Perhaps we're all fatigued from the other threats to the internet, but we need to wake up and speak out, because this one is worse. It will fundamentally change massive parts of how the internet works -- and almost all of it is designed to make it incredibly difficult to run an internet site that allows for any public participation at all.

If you're not in the EU, you can still speak up, and hopefully some Members of the European Parliament will pay attention. The world is watching what the EU Parliament does to the internet next week. If it goes along with the plan, it will stamp out innovation and free speech, and hand a huge gift to a small group of large media players who never liked the disruptive nature of the internet in the first place -- and who are now gleeful that EU regulators have more or less adopted their plan to kill off what makes the internet so wonderful. We've heard that some EU Parliament members are getting at least a little concerned because of the noise people are making about this, but it's time to make them very concerned. They are trying to fundamentally change the internet, and they don't seem to care about or understand what that actually means.


Posted on Techdirt - 12 June 2018 @ 3:41am

EU Explores Making GDPR Apply To EU Government Bodies... But With Much Lower Fines

from the good-for-the-goose,-not-so-good-for-the-gander dept

We recently wrote about how various parts of the EU's governing bodies were in violation of the GDPR, to which they responded that the GDPR doesn't actually apply to them for "legal reasons." In most of the articles about this, however, EU officials were quick to explain that similar new regulations would apply to EU governing bodies. Jason Smith at the site Indivigital, who kicked off much of this discussion by discovering loads of personal information hosted on EU servers, has a new post up looking at the proposals to apply GDPR-like regulations to the EU governing bodies themselves.

There are two interesting points here. First, when this was initially proposed last year, the plan was for it to take effect on the very same day as the GDPR -- May 25, 2018 -- because it was deemed "essential" that the public see the EU complying with the same rules as everyone else.

Essential however, from the perspective of the individual, is that the common principles throughout the EU data protection framework be applied consistently irrespective of who happens to be the data controller. It is also essential that the whole framework applies at the same time, that is, in May 2018, deadline for GDPR to be fully applicable.

Guess what didn't happen? Everything in the paragraph above. The EU forced everyone else to comply by May of this year, but gave itself extra time -- time in which it is not complying with the rules and is brushing that off as no big deal, while simultaneously telling everyone else that compliance is easy.

Also, while the GDPR imposes staggering fines on those who fail to comply, the fines for when the EU itself doesn't comply (if this rule ever actually goes into effect) are far more limited. Under the GDPR, companies can be fined €20 million or 4% of revenue, whichever is higher -- enough to put any smaller company out of business -- but the plan for the EU itself is for fines to top out at €50,000 per violation, with a cap of €500,000 per year.
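To put rough numbers on that double standard (the fine parameters come from the paragraph above; the revenue figures are made-up examples):

    # Maximum GDPR fine for a private company: the greater of EUR 20M
    # or 4% of annual revenue.
    def company_max_fine(annual_revenue_eur: float) -> float:
        return max(20_000_000, 0.04 * annual_revenue_eur)

    # Proposed caps for EU institutions.
    EU_BODY_PER_VIOLATION_CAP = 50_000
    EU_BODY_ANNUAL_CAP = 500_000

    print(company_max_fine(100_000_000))    # 20,000,000 -- the EUR 20M floor applies
    print(company_max_fine(1_000_000_000))  # 40,000,000 -- 4% exceeds the floor
    print(20_000_000 / EU_BODY_ANNUAL_CAP)  # 40.0 -- even the floor is 40x the EU's yearly cap

In other words, the smallest possible maximum exposure for a private company is forty times the EU bodies' entire cap for a whole year of violations.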

Must be nice when you're the government and can make different rules for yourself, while mocking anyone who thinks that the rules for everyone else are a bit too aggressive and onerous.

