Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick



Posted on Techdirt - 19 September 2018 @ 1:37pm

Hollywood Chamber Of Commerce Trademark Bullies Kevin Smith's Podcast Over Hollywood Sign

from the likelihood-of-confusion? dept

The Hollywood Chamber of Commerce is somewhat infamous for its constant trademark bullying over the famed Hollywood sign (you know the one). Its latest target is apparently the Hollywood Babble-On podcast that is done as a live show each week by radio/podcast guy Ralph Garman and filmmaker/entertainer Kevin Smith. Before the show this past weekend, Garman had tweeted out that it might be the last Hollywood Babble-On ever. In the opening minutes of their latest episode, Garman explains that they've received a cease and desist letter from the Hollywood Chamber of Commerce "re: unauthorized use of Hollywood stylized mark and Hollywood Walk of Fame mark."

While I haven't seen the full cease-and-desist letter, from what Garman said on the podcast, the issue is so ridiculous that the Hollywood Chamber of Commerce should be called out for blatant trademark bullying. You see, while this is the normal logo/image promoting the podcast:

At times, they've used other images, such as this one:

It's that image that is apparently part of the problem (even though it's not clear how often it was used). The Chamber of Commerce is pointing to the Hollywood-style lettering, an approximation of the famous Hollywood sign, and the star behind their heads (which it apparently believes is an implied reference to the stars on Hollywood's Walk of Fame) to argue that this is unauthorized use of its marks. Some trademark lawyers will likely disagree, but this seems like classic trademark bullying.

If you're unfamiliar with the podcast (I'll confess to being a loyal listener from Episode 1 through the latest, and I got to see the show live once at Kevin's invitation after he was on our podcast a few years ago), it's a fun (frequently not-safe-for-work) look at highlights from the week's entertainment news, mixed with a series of recurring bits, frequently involving Garman's rotating cast of impressions. In short, it's two funny guys, both of whom have been in show business for many years, goofing off while talking about show business and frequently mocking some of the crazier news stories coming out of that business.

In other words, there's no way in hell that anyone in their right mind thinks that this podcast is officially sanctioned by "Hollywood" as some sort of official Hollywood product. The whole thing is kind of gently mocking some of Hollywood's sillier foibles. Indeed, this seems like a perfect use case for the old standby in trademark law: the "moron in a hurry" test. And, to make it more relevant to the hobbies of choice of Ralph and Kevin, I think it could be argued that neither a drunk, nor a stoned "moron in a hurry" would ever face even the slightest "likelihood of confusion" that Hollywood somehow had endorsed the podcast, just because it briefly had images showing slightly askew letters and a star.

It remains one of the more frustrating aspects of trademark law that so many people believe it grants total control over the marks in question. That's not how it's supposed to work. Trademark only comes into play where there is a likelihood of confusion -- where people would plausibly believe that the mark holder is behind (or otherwise endorses) the products and services in question. And here, that seems pretty difficult to believe. Of course, rather than fight these kinds of things out, it's frequently much easier to just pay up, which may be what the lawyers for the Hollywood Chamber of Commerce are banking on.


Posted on Techdirt - 19 September 2018 @ 3:23am

Compromise Music Modernization Act Will Bring Old Sound Recordings into The Public Domain, Tiptoe Towards Orphan Works Solution

from the some-good-things dept

Earlier this year we wrote about the significant concerns we had with the CLASSICS Act, which sought to create a brand new performance right for pre-1972 sound recordings, requiring various internet platforms to pay for that additional right to stream such music. As we've discussed for years, pre-1972 sound recordings are kind of a mess in the copyright world. That's because they weren't covered by federal copyright law, but rather by a patchwork of state laws (some statutes, some common law). Historically, none of those included a performance right, but some courts have recently interpreted one to exist (while others have said it doesn't). On top of that, some of those state laws mean that certain works will remain covered by copyright for many decades after they would have gone into the public domain under federal copyright law.

Many people have advocated for "full federalization" of those pre-1972 works, taking them away from those state copyright laws and putting them on a level playing field with all other copyright-covered works. There is an argument against this: doing so also creates brand new rights for works that are decades old, which clearly cuts against the purpose and intent of copyright law (incentivizing the creation of new works for the public). But given what a mess having two very different systems entailed, full federalization seemed like the most sensible way forward.

Of course, rather than pursue that path, the RIAA pushed through something much worse and totally one-sided. The CLASSICS Act created a new performance right for pre-1972 sound recordings, but left out the federalization part. In other words, the copyright holders would get all of the benefits of this new law, and the public would still be unable to have these recordings go into the public domain for many, many decades. Senator Wyden introduced an alternative bill, the ACCESS Act, which pushed for full federalization.

Over in the House, the CLASSICS Act was unfortunately merged with a separate bill, the Music Modernization Act (which is mostly uncontroversial) and voted through unanimously. However, it hit a stumbling block in the Senate -- leading to negotiations to create a compromise between Wyden's ACCESS Act and the original CLASSICS Act. That compromise has now been released and... it's actually fairly decent. To be clear, this is not how anyone would draw up copyright law from scratch, and there are still bits and pieces that concern me in the bill. But compared to where we were with the CLASSICS Act, this is a pretty big improvement. It does still create this brand new performance right for pre-1972 works, which seems to totally undermine the point of copyright law, but seeing as that was going to happen no matter what under the original CLASSICS Act, the compromise here seems much better -- as it makes sure that even as those works get this new right, they also will move into the public domain much faster than they otherwise would.

The key elements of this compromise bill include full federalization of pre-1972 sound recordings, putting all copyrighted works under the same system. There is a slightly odd tiered system for gradually moving pre-1972 sound recordings into the public domain where they belong. The new rules set a copyright term of 95 years from the date of publication -- bringing works into the public domain much sooner than if they had remained under state law (where the term could have run up to 190 years or so). And then there's a set of "transition" periods to get works into the public domain:

PRE-1923 RECORDINGS.--In the case of a sound recording first published before January 1, 1923, the transition period described in subparagraph (A)(i)(II) shall end on December 31 of the year that is 3 years after the date of enactment of this section.

1923-1946 RECORDINGS.--In the case of sound recordings first published during the period beginning on January 1, 1923, and ending on December 31, 1946, the transition period described in subparagraph (A)(i)(II) shall end on the date that is 5 years after the last day of the period described in subparagraph (A)(i)(I).

1947-1956 RECORDINGS.--In the case of sound recordings first published during the period beginning on January 1, 1947, and ending December 31, 1956, the transition period described in subparagraph (A)(i)(II) shall end on the date that is 15 years after the last day of the period described in subparagraph (A)(i)(I).

The really key part here is that first batch: works that should already be in the public domain under US law, since pre-1923 works are deemed public domain under federal law. But because state laws have run much longer, we've locked up tons of important early US sound recordings, especially a ton of early jazz recordings that almost no one can hear. Under this law, those works will come into the public domain three years after the law is in place. Some will argue (reasonably!) that three years is still too long -- and that it's weird to give those very old works a new right just for that brief period -- but it's better than having to wait until 2067 for them to be freed up entirely.
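To make the tiered schedule concrete, here's a minimal sketch of the transition periods as a function. This is an illustration only: it works in whole years, ignores the exact December 31 / February 15 cutoffs in the bill's text, and treats the 2067 date for post-1956 recordings (the existing backstop mentioned above) as a flat year.

```python
def public_domain_year(pub_year, enactment_year=2018):
    """Simplified sketch of when a pre-1972 sound recording would enter the
    public domain under the compromise bill's tiered transition periods.
    Whole years only; exact statutory cutoff dates are ignored."""
    if pub_year < 1923:
        # First batch: three years after enactment, regardless of age
        return enactment_year + 3
    if pub_year <= 1946:
        # 95-year federal-style term plus a 5-year transition
        return pub_year + 95 + 5
    if pub_year <= 1956:
        # 95-year term plus a 15-year transition
        return pub_year + 95 + 15
    # 1957-1972 recordings stay locked up until the 2067 backstop
    return 2067

public_domain_year(1920)  # -> 2021: early jazz era, freed 3 years after enactment
public_domain_year(1930)  # -> 2030
```

Note how the tiers phase works in: a 1950 recording waits until 2060, while anything from 1957 onward sits behind the 2067 wall regardless of its actual age.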

Now there's a second important idea in this new bill -- a very, very, very light-touch "orphan works" proposal. For decades, plenty of people in the copyright space have fretted over what to do about the orphan works issue. This is a problem created by our own stupid copyright policies: because the law no longer requires registration, there are billions of works for which it is unclear who holds the copyright, or whether there's any copyright at all. It's been a problem for many years, one that can seriously impact our ability to preserve historical culture, among other things.

Of course, every time Congress (and the Copyright Office) suggested proposals to deal with this issue (even bad suggestions and really bad suggestions), some copyright holders (mainly photographers) would freak out, and misleadingly claim that orphan works laws were designed to strip them of their copyright.

So, this new amended bill takes a very minor tiptoe towards an orphan works concept, just for sound recordings and only for "certain noncommercial uses of sound recordings that are not being commercially exploited." This is way, way, way too limited, but it's a start. Under the rules, someone engaged in non-commercial use (and boy, I can't wait to see the litigation fights over what counts as commercial v. non-commercial use...) has to make a "good faith, reasonable search" to see if a work is being commercially exploited. Following that, they have to file a notice with the Copyright Office announcing their intention to use the sound recording, allowing a 90-day period for someone to object. If there are no objections, the work may be used in such non-commercial projects. This is extremely limited (way too much so), but hopefully it will be useful to sites like the Internet Archive and various libraries. It would be nice if it went much further, but considering that no attempt to deal with orphan works has ever gone anywhere, this seems like at least a tiny step in the right direction. At the very least, hopefully it can be used to show that the world doesn't collapse when there is a way to make use of orphan works whose copyright holders cannot be found.
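The objection window is simple date arithmetic. A minimal sketch follows; the 90-day period comes from the bill, but the filing date is a made-up example:

```python
from datetime import date, timedelta

# Hypothetical example: a library files its notice with the Copyright Office
# on March 1, 2019. The 90-day objection window then runs from that filing.
filing = date(2019, 3, 1)
objection_deadline = filing + timedelta(days=90)

# If no rightsholder objects by this date, the noncommercial use may proceed.
print(objection_deadline)  # 2019-05-30
```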


Posted on Techdirt - 18 September 2018 @ 3:44pm

Congressional Research Service Reports Now Officially Publicly Available

from the huge-news dept

For many, many years we've been writing about the ridiculousness of the Congressional Research Service's reports being kept secret. If you don't know, CRS is a sort of in-house think tank for Congress that does careful, thoughtful, non-partisan research on a variety of topics (sometimes at the request of members of Congress, sometimes of its own volition). The reports are usually quite thorough and free of political nonsense. Since the reports are created by the federal government, they are technically in the public domain, but many in Congress (including many who work at CRS itself) have long resisted requests to make them public. Instead, we were left relying on members of Congress to occasionally (and selectively) share reports with the public, rather than giving everyone access to the reports.

Every year or so, there were efforts made to make all of that research available to the public, and it kept getting rejected. Two years ago, two members of Congress agreed to share all of the reports they had access to with a private site put together by some activists and think tanks, creating EveryCRSReport.com, which was a useful step forward. At the very least, we've now had two years to show that, when these reports are made public, the world does not collapse (many people within CRS feared that making the reports public would lead to more political pressure).

Earlier this year, in the Consolidated Appropriations Act of 2018, there was a nice little line item to officially make CRS reports publicly available.

And, this week, it has come to pass. As announced by Librarian of Congress Carla Hayden, there is now an official site for CRS reports at crsreports.congress.gov. The available catalog appears to still be limited -- a few quick test searches turn up only fairly recent reports -- but the Library hopes to expand backwards and add older reports to the system. All new reports, meanwhile, will be added to the database.

The result is a new public website for CRS reports based on the same search functionality that Congress uses – designed to be as user friendly as possible – that allows reports to be found by common keywords. We believe the site will be intuitive for the public to use and will also be easily updated with enhancements made to the congressional site in the future.

Moving forward, all new or updated reports will be added to the website as they are made available to Congress. The Library is also working to make available the back catalog of previously published reports as expeditiously as possible.

This is a big deal. The public pays over $100 million every year to have this research done, and all of it is in the public domain. Starting now, we can actually read most of it, and don't need to rely on leaks to find this useful, credible research.


Posted on Techdirt - 18 September 2018 @ 9:34am

CIA Game Now In Production: Last Chance To Order

from the and-off-we-go dept

Pre-order your copy of CIA: Collect It All today »

It's been a while since we've mentioned our version of the CIA's internal training card game, which we Kickstarted back in April. Those who backed it have been receiving all the various updates on it -- including one allowing everyone to download the rulebook. And now the game is officially in production -- which also means this may be your last chance to purchase a physical copy. While we're printing some extra copies beyond what's already been ordered, at this point we're only doing this one printing of the game, and that'll be it. You can still pre-order the game here, and as we get closer to selling out the initial run, we'll turn off orders.

If you don't recall, this project grew out of the CIA telling the world about some of the internal training card and board games it had developed, leading to FOIA requests that revealed the heavily redacted details of some of those games. We picked one of them -- which the CIA calls Collection Deck -- and turned it into our own game, entitled CIA: Collect It All. It's a fast-paced card game in which players take on the role of CIA analysts, trying to collect enough information, using a variety of spycraft techniques, to deal with various crises. Of course, as in the real world, other forces seek to get in the way and block the analysts from collecting all of the information they need.

For our version of the game, we had to fill in a whole bunch of redacted cards, completely redo the design, and add in some other fun aspects to the game -- including an entirely different "storytelling" variant which allows you to use the same cards for a very different kind of game (still based on the CIA).

After completing the redesign, filling in all of the redacted bits, adding new rules, rewriting the entire rulebook from scratch and more, we finally received our first prototype, and have approved it going into full production. We've initially ordered more than we sold via Kickstarter, but not a huge amount, so if you want a physical copy, it makes sense to put your order in soon.


Posted on Techdirt - 17 September 2018 @ 1:29pm

Surprise: Bill Introduced To Finally Make PACER Free To All

from the nice! dept

So this is somewhat unexpected, but Rep. Doug Collins has introduced HR 6714, a bill to make federal court records free to the public.

H.R. 6714, the Electronic Court Records Reform Act, would guarantee free public access to federal court records through the Public Access to Court Electronic Records (PACER) system, which currently charges the public a fee to access documents. The bill would also require updates to the PACER system, including adding a function to enable all users to search its catalog of court documents easily. Currently, litigants are handicapped because they cannot conduct research through the system.

The bill would further support legal professionals and the general public by consolidating the Case Management/Electronic Case Files (CM/ECF) system. The CM/ECF system was designed to increase efficiency for all stakeholders within the judicial system, but it is compartmentalized among different courts. This makes locating records and filing documents difficult and inefficient. The Electronic Court Records Reform Act would unify these disconnected systems under the Administrative Office of the U.S. Courts in order to ensure uniform access to all federal litigants.

This would be... amazing. We've spent years highlighting the massive problems with PACER, the federal court records system that charges insane amounts for basically everything you do, just to access public records -- and which functions very much like it was designed around 1995. There are a few court cases arguing that PACER fees are illegal, and a recent ruling in one of those cases agreed. As we noted at the time, that was hardly the final word on the matter. A bill like the one Collins introduced would be an amazing leap forward in giving the public access to court documents.

Unfortunately, it's unclear if the bill has any support beyond Collins, but this is the kind of thing you would hope that Congress could get behind.


Posted on Techdirt - 17 September 2018 @ 10:42am

Apple Didn't Delete That Guy's iTunes Movies, But What Happened Still Shows The Insanity Of Copyright

from the different-but-still-bad dept

Last week we, like many others, wrote about the story of Anders G da Silva, who had complained on Twitter about how Apple had disappeared three movies he had purchased, and its customer service seemed to do little more than offer him some rental credits. There was lots of discussion about the ridiculousness -- and potential deceptive practices -- of offering a "buy" button if you couldn't actually back up the "purchase" promise.

Some more details are coming out about the situation with da Silva, and some are arguing that everyone got the original story wrong and it was incorrect to blame Apple here. However, looking over the details, what actually happened may be slightly different, but it's still totally messed up. Apple didn't just stop offering the films. What happened was that da Silva moved from Australia to Canada, and apparently then wished to redownload the movies he had purchased. It was that region change that evidently caused the problem. Because copyright holders get ridiculously overprotective of regional licenses, Apple can only offer some content in some regions -- and it warns you that if you move you may not be able to re-download films that you "purchased" in another region (even though it promises you can hang onto anything you've already downloaded).

And, here the situation is slightly more confusing because Apple actually does offer the same three movies -- Cars, Cars 2 and The Grand Budapest Hotel -- in both Australia and Canada, but apparently they may not be the identical "versions" of the film, as they may be slightly altered depending on the region.

And while this may be marginally better than completely removing his "purchased" films, it's still absolutely ridiculous. The CNET article linked above is sympathetic to the idea that Apple has to go to extreme lengths such as these to prevent "region hopping," and says that da Silva is just an "edge case" that "fell into a licensing crack." But, again, that's nonsense. This is digital content that he "purchased" using a "buy" button. It shouldn't matter where he is at some later date. He should still get access to those original files. That's what a purchase means. The fact that this might possibly in some cases mean that (OH MY GOSH!) someone in Canada can access a movie released in Australia when they're actually in Canada, well, uh, that seems like an "edge case" that a movie studio and Apple should deal with, rather than screwing over legitimate purchasers.

But, alas, we're left with yet another example of the insanity driven by excessive copyright, in which copyright holders get so overly focused on the notion of "control" that they feel the need to control absolutely everything -- including making sure that no wayward Canadians might (GASP!) purchase and download a movie meant for Australians. It's this overwhelming, obsessive desire to "control" each and every use that messes with so many people's lives -- including da Silva's -- and makes sure that the public has almost no respect at all for copyright. Give up a little control, and let the edge cases go, and maybe people wouldn't be so quick to condemn copyright for removing their own rights so frequently.


Posted on Techdirt - 14 September 2018 @ 11:53am

Guy In Charge Of EU Copyright Directive Claims He Didn't Know What He Voted On, Needs To Fix Things

from the well,-that-builds-confidence dept

Following the decision earlier this week by the EU Parliament to vote for the destruction of the open web by putting in place some pretty awful copyright proposals, people began highlighting more and more problems with the bill. Most of the focus before the vote had been on two particular articles, Article 11 and Article 13. But there are many other problems in the Directive as well -- it was just getting to be overwhelming to get into the weeds on all of them. One area of concern was Article 12, which included a special new form of copyright for sporting events. Specifically, with no debate or discussion, the legal affairs committee of the EU Parliament added text saying that sporting event organizers would gain absolute control over recording, sharing and presenting any film clips -- even those that would otherwise be deemed legal in other copyright contexts. And yes, the law implies that if you're at a sports event, you can't even film anything from your own seat, as that right is reserved solely for the event organizers.

Incredibly, after the vote approving the directive, reporter Emanuel Karlsten of the Swedish publication Breakit asked Axel Voss, the MEP in charge of the Directive, about this -- and Voss gave a fairly astounding answer, stating that "this was kind of a mistake" and that "no one had been aware of this." Later he states that he didn't know it was in there and that he'll have to fix it:

Voss: This was kind of mistake I think by the JURI committee. Someone amended this. No one had been aware of this.

Reporter: But it was passed...

... discussion by someone with Voss saying that it's really about gambling/betting operations before Voss jumps back in ...

Voss: I didn’t know that this was in the proposal so far, so of course I have to deal with it now.... I do not consider that the commission and council will have this inside the proposal.

Later he says "because of the time and pressure" they concentrated on other areas of the bill. Which... does not seem like a good excuse.

You can listen to the exchange here:

Meanwhile, MEP Julia Reda is calling bullshit on the claim that Voss was "unaware" that this was in the proposal, noting not only that she had written about the issue prior to the vote, but that she had raised it directly with Voss and his colleagues:

There are a few possibilities here, none of which make Voss look any good. He either voted for an amendment he hadn't read and/or didn't understand, or he's lying to this reporter. It also suggests that rather than taking the concerns of critics like Reda seriously, Voss just tuned them out and happily voted away for such horrible proposals.

We've raised questions before about Voss's views on all of this, as he seems almost hysterically uninformed about how actual copyright policy works, even as he drives forward such a horrible policy. This seems to be yet more evidence that a few special interests made it clear to Voss what they wanted to do, and he just agreed to do that, no matter what concerns anyone else had.


Posted on Free Speech - 14 September 2018 @ 10:43am

White House Potentially Exploring Executive Order On 'Social Media Bias'

from the the-first-amendment-would-like-a-word dept

The White House may be preparing an executive order for the President, pushing for investigations of "bias" at social media companies. Nothing is definite, but someone has leaked us a draft two-page executive order. We're not releasing the draft because, despite it coming directly from someone in the White House, others have insisted it's not an accurate document, even as its approach to some extent mirrors the DOJ's announced plans to investigate bias. Another reason we're not releasing the document itself: we're quite aware of reports that there are attempts to find "leakers" in the White House, and one common method is to plant small unique indicators in individual copies of documents. We cannot guarantee that this document is not such a copy, and thus will be reporting on the basic concept of what's in the draft without revealing the full document.

But, to be clear, if this document is accurate, it would almost certainly lead to a huge First Amendment fight, which it seems likely the companies would win.

Obviously the issue of social media and supposed political bias has been a big topic in DC lately -- including with the President -- despite the near total lack of actual evidence to support these claims. Yes, there is evidence of people being kicked off these platforms... but there is no evidence that the reasons have anything to do with political bias (people of all political stripes have been removed from these platforms). And, yes, there is also evidence that employees at many internet companies may lean one way politically, but that too is overstated and says nothing about how the platforms actually work.

Recently, we noted that the DOJ and various state Attorneys General were talking about using antitrust law against social media companies over bias, and explained in fairly great detail why that would almost certainly run afoul of the First Amendment and a whole long list of Supreme Court cases detailing how the government cannot compel speech of this nature.

And that's where this executive order, as leaked, would almost certainly run into huge First Amendment issues. It tries to hide these behind antitrust claims, saying that it's about ensuring competition and preventing the exercise of market power that "harms consumers, including through the exercise of bias." The Executive Order itself doesn't hide the intent, as "bias in online platforms" is specifically in the title. Basically, the order would task the White House with "investigating" social media platforms for bias and then seek to use antitrust actions (or pass it off to the DOJ or FTC) to punish companies that show loosely defined "bias." The document takes as default that any kind of "bias" on major internet platforms should be taken as anti-competitive (which seems incredibly questionable) and then also requires that various agencies give the President a report on how to "address" social media bias.

I have trouble seeing how this could possibly be constitutional under the First Amendment, as it is, quite explicitly, the government trying to regulate speech, and clearly does not fall into any of the exceptions to the First Amendment. It's possible this executive order will never actually become anything -- perhaps someone in the White House will prevent it from moving forward (it's also clear that the draft I've seen is not complete, as there are still notes about what's being worked on). But the fact that this is even being considered is certainly notable.

I asked Ken White -- well known around here as a former Assistant US Attorney and current First Amendment lawyer -- what he thought of the draft. He noted that the document seemed so weakly put together that he had a hard time believing anyone was seriously considering it, though, he added, "with this administration it's very difficult to tell." He also noted that it appeared to be "more posturing than substance," designed to "preach to the choir" rather than anything serious. As for the substance, he noted that while the draft asserts that "bias" is a violation of antitrust law, that's not at all accurate:

That’s a distortion and exaggeration. Nothing in the document elaborates or supports it. To the extent antitrust is concerned with bias it’s not the "kick the Nazis off the platform" kind. It's more like a concern about, for instance, Google altering search results to prefer products and services it owns.

Indeed, the general point of antitrust is to deal with when a dominant player is abusing its position to favor its own offerings, not on how it handles general moderation duties. So, while I wouldn't put it past this administration to mock up this kind of executive order as an exercise in thinking through what it can "do" about the exaggerated and misleading claims of "political bias" in search results, I have a hard time believing the administration would bother pushing forward with this idea, and if it actually did get that far that it would have any luck in convincing anyone (who matters) that this was constitutional. That said, if the point is just, as Ken suggested, preaching to the choir, I also wouldn't put it past this administration to push out this executive order just to rile up Trump's most ardent supporters, who continue to scream to the heavens about political bias in search results, despite a near total lack of evidence to support such claims.


Posted on Free Speech - 14 September 2018 @ 9:33am

EU Continues To Kill The Open Web: Massive Fines For Sites That Don't Censor Within An Hour

from the what-the-hell-is-going-on-in-brussels? dept

The EU really seems quite hellbent on absolutely destroying the open internet. Just as the EU Parliament was voting to approve the EU Copyright Directive, requiring that much of the internet be licensed and curated rather than open for anyone, the EU Commission decided to move forward with an awful idea it had first proposed earlier this year: that social media companies must disappear "terrorist content" within one hour.

Back when this was proposed, we pointed out how this was holding companies to an absolutely impossible standard... and it appears that the EU really just doesn't give a fuck, because they're super excited about putting this into practice:

The European Commission proposed new rules on Wednesday that would require internet platforms to remove illegal terror content within an hour of it being flagged by national authorities. Firms could be fined up to 4% of global annual revenue if they repeatedly fail to comply.

Got that? 4% of global revenue. As the article notes, that means if Google repeatedly fails on takedowns, it could owe $4.4 billion to the EU. Facebook could owe $1.6 billion.
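As a sanity check on those numbers, here's the arithmetic, assuming the companies' approximate 2017 global revenues (the revenue figures below are illustrative estimates, not official filings):

```python
# Back-of-the-envelope check of the fines mentioned above.
# Revenue figures are approximate 2017 numbers, used only for illustration.
ANNUAL_REVENUE = {
    "Google (Alphabet)": 110.9e9,  # ~$110.9B
    "Facebook": 40.7e9,            # ~$40.7B
}

FINE_RATE = 0.04  # up to 4% of global annual revenue

for company, revenue in ANNUAL_REVENUE.items():
    fine = revenue * FINE_RATE
    print(f"{company}: up to ${fine / 1e9:.1f} billion")
```

Either figure dwarfs any plausible cost of simply over-removing flagged content, which is exactly the incentive problem.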

"You wouldn't get away with handing out fliers inciting terrorism on the streets of our cities — and it shouldn't be possible to do it on the internet, either," EU security commissioner Julian King said in a statement.

First of all, leaving aside the (very important!) broad free speech concerns around what counts as "terrorist" content, as opposed to just dissenting content, the statement by Julian King is idiotic. If you want to use that analogy, what the Commission is proposing here is the equivalent of if someone was handing out fliers inciting terrorism on the streets of a city, that the city would then get to seize all the buildings on that street. That's almost exactly what this proposal is stating. If you want to go after people distributing terrorist content, go after the people distributing terrorist content. Don't go after the tools they use to post it. That makes no sense.

And, of course, we already know how this is going to lead to massive and widespread censorship. No company is going to want to risk a fine that massive, and with merely 1 hour to respond, no company will have the capability or context to carefully adjudicate the takedown demands to make sure they are proper and aboveboard. Obviously, they'll just start pulling down content incredibly quickly. Indeed, we've already seen what a mess this kind of rule can create. We've talked about the German law that gives sites 24 hours to takedown "hate speech," and how that's already leading to censorship of political speech and satire. Now switch that to just one hour, with even more drastic consequences.
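To see why over-removal is the rational response, consider a toy model (all numbers hypothetical): once flags arrive faster than human reviewers can clear them inside the one-hour window, everything unreviewed gets taken down rather than risk the fine.

```python
def takedowns_without_review(flags_per_hour: int, reviews_per_hour: int) -> float:
    """Fraction of flagged items removed with no human review, assuming a
    platform removes anything it cannot review within the one-hour deadline
    rather than risk a 4%-of-revenue fine. Purely illustrative."""
    reviewed = min(flags_per_hour, reviews_per_hour)
    return (flags_per_hour - reviewed) / flags_per_hour

# Hypothetical load: 10,000 flags/hour against staff able to review 500/hour.
print(f"{takedowns_without_review(10_000, 500):.0%} removed unreviewed")  # 95%
```

The deadline, not the content, ends up deciding what disappears.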

It is literally insane that anyone could possibly think this is a good idea.

Activists are already pointing out that this proposal has simply ignored its obligation to review how such a law would impact human rights, because apparently if you just wave your hands in the air screaming "terrorists" the EU will toss basic human rights out the window.

At some point you have to wonder if the EU really just wants the internet shut off completely.


Posted on Free Speech - 13 September 2018 @ 1:29pm

Google Fights In EU Court Against Ability Of One Country To Censor The Global Internet

from the this-is-important dept

For quite some time now we've been talking about French regulators and their ridiculous assertion that Google must apply its "Right to be Forgotten" rules globally rather than just in France. Earlier this week, the company presented its arguments to the EU Court of Justice, which will eventually rule on this issue in a way that will have serious ramifications for the global internet.

In a hearing at the EU Court of Justice, Google said extending the scope of the right all over the world was “completely unenvisagable.” Such a step would “unreasonably interfere” with people’s freedom of expression and information and lead to “endless conflicts” with countries that don’t recognize the right to be forgotten.

“The French CNIL’s global delisting approach seems to be very much out on a limb,” Patrice Spinosi, a French lawyer who represents Google, told a 15-judge panel at the court in Luxembourg on Tuesday. It is in “utter variance” with recent judgments.

Even if you absolutely despise everything about Google, the argument of French regulators should be of massive concern to you. France's argument is that if a French regulator determines that some content should be disappeared from the internet, it is necessary for it to be memory holed entirely and permanently, literally calling such deleting of history "a breath of fresh air."

“For the person concerned, the right to delisting is a breath of fresh air,” Jean Lessi, who represents France’s data protection authority CNIL, told the court. Google’s policy “doesn’t stop the infringement of this fundamental right which has been identified, it simply reduces the accessibility. But that is not satisfactory.”

If one can be at least marginally sympathetic to any part of the French regulator's argument, it is on the issue of circumvention. If Google is only required to suppress information in France, then if someone really wants to, they can still find that information by presenting themselves as surfing from somewhere else. Which is true. But that limited risk -- which would likely only occur in the very narrowest of circumstances, in which someone already knew that some information was being hidden and then went on a quest to search it out -- is a minimal "risk" compared to the very, very real risk of lots of truthful, historical information completely being disappeared into nothingness. And that is dangerous.
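A toy sketch makes the circumvention point concrete (the URLs and country codes below are hypothetical): a delisting that applies only to French users vanishes the moment a searcher presents as browsing from anywhere else.

```python
# Toy model of geo-limited delisting. URLs and country codes are hypothetical.
DELISTED = {
    "example.com/old-story": {"FR"},  # delisted only for users in France
}

def search(results: list[str], country: str) -> list[str]:
    """Return results, dropping any URL delisted for the requesting country."""
    return [url for url in results if country not in DELISTED.get(url, set())]

results = ["example.com/old-story", "example.com/other"]
print(search(results, "FR"))  # French user: the story is suppressed
print(search(results, "US"))  # same user via a US vantage point: story visible
```

That gap is what the CNIL calls unacceptable, and what Google calls the price of not letting one country edit the whole internet.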

The broader impact of such global censorship demands can easily be understood if you just recognize that it won't just be the French looking to memory hole content they don't like. Other governments -- such as Russia, China, Turkey, and Iran -- certainly wouldn't mind making some information disappear. And if you think that various internet platforms will be able to say "well, we abide by French demands to disappear content, but ignore Russian ones," well, how does that work in actual practice? Not only that, but such rules could clearly violate the US First Amendment. Ordering companies to take down content that is perfectly legal in the US would have significant ramifications.

But, it also means that we're likely moving to a more fragmented internet -- in which the very nature of the global communications network is less and less global, because to allow that to happen means allowing the most aggressive censor and the most sensitive dictator to make the rules concerning which content is allowed. And, as much as people rightfully worry about Mark Zuckerberg or Jack Dorsey deciding whose speech should be allowed online, we should be much, much, much more concerned when it's people like Vladimir Putin or Recep Erdogan.


Posted on Techdirt - 13 September 2018 @ 10:43am

European Court Of Human Rights: UK Surveillance Revealed By Snowden Violates Human Rights

from the well-of-course-it-does dept

Yet another vindication of Ed Snowden. Soon after some of the documents he leaked as a whistleblower revealed that the UK's GCHQ was conducting mass surveillance, a variety of human rights groups filed complaints with the European Court of Human Rights. It's taken quite some time, but earlier today the court ruled that the surveillance violated human rights, though perhaps in a more limited way than many people had hoped.

At issue were three specific types of surveillance: bulk interception of communications, sharing what was collected with foreign intelligence agencies, and obtaining communications data (metadata) from telcos. The key part of the ruling was the finding that the bulk interception of communications violated Article 8 of the European Convention on Human Rights (roughly, but not exactly, analogous to the US 4th Amendment). It was not a complete victory, as the court didn't say that bulk interception by itself violated human rights, but rather that the lack of oversight over how it was done rendered the surveillance regime inadequate. The court also rejected any claims around GCHQ sharing the data with foreign intelligence agencies.

In short, the court found that bulk interception could fit within a human rights framework if there was better oversight, and that obtaining data from telcos could be acceptable if there were safeguards to protect certain information, such as journalist sources. But the lack of such oversight and safeguards doomed the surveillance activity that Snowden revealed.

Operating a bulk interception scheme was not per se in violation of the Convention and Governments had wide discretion (“a wide margin of appreciation”) in deciding what kind of surveillance scheme was necessary to protect national security. However, the operation of such systems had to meet six basic requirements, as set out in Weber and Saravia v. Germany. The Court rejected a request by the applicants to update the Weber requirements, which they had said was necessary owing to advances in technology.

The Court then noted that there were four stages of an operation under section 8(4): the interception of communications being transmitted across selected Internet bearers; the using of selectors to filter and discard – in near real time – those intercepted communications that had little or no intelligence value; the application of searches to the remaining intercepted communications; and the examination of some or all of the retained material by an analyst.

While the Court was satisfied that the intelligence services of the United Kingdom take their Convention obligations seriously and are not abusing their powers, it found that there was inadequate independent oversight of the selection and search processes involved in the operation, in particular when it came to selecting the Internet bearers for interception and choosing the selectors and search criteria used to filter and select intercepted communications for examination. Furthermore, there were no real safeguards applicable to the selection of related communications data for examination, even though this data could reveal a great deal about a person’s habits and contacts.

Such failings meant section 8(4) did not meet the “quality of law” requirement of the Convention and could not keep any interference to that which was “necessary in a democratic society”. There had therefore been a violation of Article 8 of the Convention.

The court also found that acquiring data from telcos violated Article 8 as well, for similar reasons.

It first rejected a Government argument that the applicants’ application was inadmissible, finding that as investigative journalists their communications could have been targeted by the procedures in question. It then went on to focus on the Convention concept that any interference with rights had to be “in accordance with the law”.

It noted that European Union law required that any regime allowing access to data held by communications service providers had to be limited to the purpose of combating “serious crime”, and that access be subject to prior review by a court or independent administrative body. As the EU legal order is integrated into that of the UK and has primacy where there is a conflict with domestic law, the Government had conceded in a recent domestic case that a very similar scheme introduced by the Investigatory Powers Act 2016 was incompatible with fundamental rights in EU law because it did not include these safeguards. Following this concession, the High Court ordered the Government to amend the relevant provisions of the Act. The Court therefore found that as the Chapter II regime also lacked these safeguards, it was not in accordance with domestic law as interpreted by the domestic authorities in light of EU law. As such, there had been a violation of Article 8.

Both of those elements also ran afoul of Article 10's protection of free expression because journalists' communications had been swept up in the bulk data collection:

In respect of the bulk interception regime, the Court expressed particular concern about the absence of any published safeguards relating both to the circumstances in which confidential journalistic material could be selected intentionally for examination, and to the protection of confidentiality where it had been selected, either intentionally or otherwise, for examination. In view of the potential chilling effect that any perceived interference with the confidentiality of journalists’ communications and, in particular, their sources might have on the freedom of the press, the Court found that the bulk interception regime was also in violation of Article 10.

When it came to requests for data from communications service providers under Chapter II, the Court noted that the relevant safeguards only applied when the purpose of such a request was to uncover the identity of a journalist’s source. They did not apply in every case where there was a request for a journalist’s communications data, or where collateral intrusion was likely. In addition, there were no special provisions restricting access to the purpose of combating “serious crime”. As a consequence, the Court also found a violation of Article 10 in respect of the Chapter II regime.

On the final issue of passing on the info to foreign intelligence agencies, the court didn't find any human rights issues there:

The Court found that the procedure for requesting either the interception or the conveyance of intercept material from foreign intelligence agencies was set out with sufficient clarity in the domestic law and relevant code of practice. In particular, material from foreign agencies could only be searched if all the requirements for searching material obtained by the UK security services were fulfilled. The Court further observed that there was no evidence of any significant shortcomings in the application and operation of the regime, or indeed evidence of any abuse.

It would have been nice if there had been more of a blanket recognition of the problems of bulk interception and mass surveillance. Unfortunately the court didn't go that far. But at the very least this has to be seen as a pretty massive vindication of Snowden's whistleblowing on the lack of oversight to protect privacy and the lack of safeguards to prevent telcos from sharing information with the government that should have been protected.


Posted on Techdirt - 13 September 2018 @ 9:32am

Actual Research On Political Bias In Search Results Would Be Useful, But So Far It Doesn't Show Anything

from the not-how-it-works dept

A few weeks back, we explained why claims of political bias in moderation by tech companies were not accurate at all. I recognize this has upset people who seem to have staked their personal identity on the idea that big internet companies are clearly out to get them, but we like to deal in facts around here. Of course, soon after that post went up, PJ Media editor Paula Bolyard put out an article -- using what she admits isn't anything close to a scientific study -- to make dubious claims of bias in Google searches for Trump news.

There were all sorts of problems with her methodology (including using Google search, rather than Google News, and using an extraordinarily sketchy ranking of how liberal or conservative certain publications were). But the bigger issue, as we noted in another post this week, was that it showed a fundamental misunderstanding of how search engines work. It was not -- as some commenters who clearly did not read the article claimed -- that algorithms are perfect and show no bias (because they obviously do). It's that the search algorithm boosts sites that are more popular, and the sites that appeared in Bolyard's test results were... larger sites. And those included typically "conservative" news sites such as the Wall Street Journal and Fox News. In other words, Google wasn't biasing based on political viewpoint, but on popularity of the news site itself. Which... is how Google has worked since basically the beginning.

Unfortunately, our President did what our President does, and took Bolyard's confusing mess (as amplified by Lou Dobbs on Fox News) and claimed that it was now proven that Google biases its search results against conservatives. He's since posted a video claiming that Google didn't link to a live stream of his state of the union address -- a claim that has already been proven to be 100% false.

Of course, that is leading people to (as they should!) start to do research on whether or not there really is some political bias in search results -- which is a good thing. We should investigate that, but it should be done rigorously. Digital Third Coast is pushing some research claiming to show no bias based on Google search autocomplete, but that methodology strikes me as every bit as dubious as Bolyard's.

A much more interesting and scientifically rigorous study, however, can be found at ScienceDirect: a study by Efrat Nechustai and Seth Lewis entitled What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Lewis recently discussed the results on Twitter. The results don't fully get at whether or not the algorithm is biased, but they do throw a lot of cold water on the idea that the Google News (separate from Google search) algorithm creates filter bubbles that drive people deeper and deeper into their own echo chambers.

But what the study does seem to suggest is that Google tends to recommend big traditional news organizations most of the time. That... shouldn't be a surprise. I am confident that Trump's loudest supporters will argue that this is a sign of political bias on its own, because they believe whatever nonsense he spews about the NY Times and CNN being "fake news," but that's silly. Those publications may not be great, and I have serious concerns about the way they cover news, but the issue is not one of political bias. And the evidence again just seems to suggest that these news organizations are large and extremely popular, which is why Google recommends them.

Some others are attempting to research this topic as well, and they all seem to be coming up empty when it comes to any evidence of actual political bias on Google. The site Indivigital tried to look more closely at the sites that were analyzed in the study that Trump tweeted about and found... that the sites that were dubbed "left wing" tended to get a lot more inbound links. And, as you hopefully know, much of Google's ranking algorithm is based on inbound links. So if there's "bias" it's the "bias" of basically everyone else on the internet to more frequently link to those sites. It also found that the supposedly "left wing" sites published a lot more -- again, leading to more links and more attention.

In response, Google’s parent company Alphabet stated: “Search is not used to set a political agenda and we don’t bias our results toward any political ideology”.

To address these claims, we analyzed each of the top 25 right-wing and left-wing websites listed in PJ Media’s study. We looked at the quantity of unique links pointing into each news website as well as the total content output by each news website over a 24 hour period.

The results are listed in the tables below. Overall, from the websites analyzed we discovered:

  1. Left-wing news websites attract more links than right-wing news websites; and
  2. Left-wing news websites create more content than right-wing news websites.

So there may be "bias" in there, but it's not "political bias due to crazy liberal Google engineers."
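The link-driven ranking dynamic can be sketched with a simplified PageRank-style iteration. This illustrates only the general principle that more inbound links mean higher rank; it is in no way Google's actual production algorithm, and the site names below are hypothetical.

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    """Simplified PageRank: a page's score grows with the number and score
    of the pages linking to it. Illustrative only."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, outs in links.items():
            for dst in outs:
                new[dst] += damping * rank[src] / len(outs)
        rank = new
    return rank

# Hypothetical link graph: siteA attracts more inbound links than siteB.
graph = {
    "siteA": ["siteB"],
    "siteB": ["siteA"],
    "blog1": ["siteA"],
    "blog2": ["siteA"],
}
ranks = pagerank(graph)
print(ranks["siteA"] > ranks["siteB"])  # more inbound links -> higher rank
```

In this toy graph, "siteA" outranks "siteB" purely because more pages point at it -- no editorial judgment about either site is involved, which is the point Indivigital's numbers make.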

Then there's another study by sociologist Francesca Tripodi, who looked at whether or not political bias was showing up in Google's rankings and also found little to support the claims. Instead, she found that the wording of the search query mattered tremendously in what kinds of responses you got (which makes a lot of sense). So, some search queries might return you more "conservative" leaning stories, while others might return you more "liberal" ones.

My research demonstrates that Google can actually drive the public toward a silo of conservative thought. For example, users curious for more information on the connection between Nellie Ohr and the Department of Justice—a topic widely discussed both on the QAnon message board and Fox News—would have received predominantly conservative perspectives if they queried her name on August 6, 2018. The top result was a piece by conservative think tank the American Spectator, the second and third links are from Fox News, followed by two more links from conservative news sites.

Again, obviously there is "bias" involved in story selection on search. That's the entire point. You want a search engine to "bias" towards what will be most relevant. The question of whether or not there is "political bias" (especially as influenced by the political leanings of the employees at a company) is a different one -- and one that can and should be researched. But defaulting to the insistence that there must be such bias, without any actual evidence showing that political bias is happening in search results, makes it look silly to keep claiming it's a fact.

And, as multiple people have pointed out, if this really is somehow unfairly tilting things to one side of the political spectrum, there is nothing stopping anyone in this free market system from setting up their own news search that skews towards whatever it is Trump fans think is fair and balanced. Of course, it's still worth doing more research on this topic, but so far there is little to suggest any actual political bias in search results beyond the bias of "big, popular media sites get more links."


Posted on Techdirt - 13 September 2018 @ 3:16am

UK MP Thinks Secret Online Groups Are The Root Of All Evil Online, Promises To Regulate 'Large Online Groups'

from the good-luck-with-that dept

It's always fascinating to me when people try to condense the complex and varied reasons why people sometimes behave badly into a single factor for blame. This is especially true online. A commonly misdiagnosed "problem" is anonymity, despite the fact that studies show anonymous online users tend to be better behaved in online flame wars than those using their real names.

British Member of Parliament Lucy Powell has come up with her own simplistic and ridiculous explanation for why people are bad online and has a plan to do something about it. In her mind, the real problem is... "large secret online groups." She's written a whole Guardian opinion piece on it, as well as given a Parliamentary speech on it, not to mention making the media rounds. From the snippet of the actual proposal that has been made available (the full bill hasn't been placed online as far as I can tell as I type this), it appears that she wants to ban secret groups of over 500 members: for any online group with more than 500 members, the moderators and administrators would be legally required to publish public information about the group (she insists not about the members), but also "to remove certain content." What kind of content isn't explicitly stated, which should set off all sorts of censorship alarm bells.

In her speech to Parliament, she mentions racism, revenge porn, jokes about the holocaust, and conspiracy theories as the types of content she's concerned about. Also... um... bad advice for autistic parents? It seems kind of all over the map. Which is why most people find this all so ridiculous. First off, you can't stop people from saying stupid stuff. That's just asking for the impossible. But it's even more ridiculous to argue that non-public groups of over 500 individuals now suddenly are going to be legally liable for censorship of amorphous "bad content."

In both her speech and the op-ed, she insists that she's just trying to make these groups have the same responsibilities as news organizations:

Our newspapers, broadcasters and other publishers are held to high standards, yet online groups, some of which have more power and reach than newspapers, are not held to any standards, nor are they accountable. It is about time the law caught up. The Bill is an attempt to take one step towards putting that right. It would make those who run large online forums accountable for the material they publish, which I believe would prevent them from being used to spread hate and disinformation, and for criminal activity. It would also stop groups being completely secret.

But that doesn't make any sense at all. Newspapers, broadcasters and publishers are not open forums for members to post their own thoughts. They are top down organizations that have an editorial process. An open forum is an entirely different thing, and it makes no sense to regulate one like the other. I mean, she actually uses the following in her speech:

If 1,000-plus people met in a town hall to incite violence towards a political opponent, or to incite racism or hate, we would know about it and deal with it. The same cannot be said of the online world.

But, under her proposal, rather than blaming the people who actually incited violence in that situation, we should blame... the mayor who set up the town hall? This whole thing shows a rather astounding lack of understanding of both technology and human nature. There are already laws on the books against those who incite violence. Forcing private groups to become public, and making the organizers of those groups liable for "bad stuff," doesn't fix anything. It's also silly and impossible. Secret groups will continue to just remain secret and avoid this whole law. And, of course, in many cases, this would be impossible anyway. Who would be "responsible" for large IRC channels or Usenet groups?

I know that politicians see "bad stuff" online and feel the need to "do something" about it, but actually understanding technology would go a long way towards not making utter fools out of themselves.


Posted on Techdirt - 12 September 2018 @ 10:37am

You Don't Own What You've Bought: Apple Disappears Purchased Movies

from the bad-apple dept

Once again, copyright and the digitization of everything means you no longer "own" what you've "bought." I thought we'd covered all this a decade ago when Kindle owners discovered that, even though they'd "purchased" copies of the ebook of George Orwell's 1984, their books had been memory holed, thanks to Amazon losing a license. After there was an uproar, Amazon changed its system and promised such things would never happen again. You would think that other online stores selling digital items would remember this and design their systems not to do this -- especially some of the largest.

Enter Apple and its infamous iTunes store. On Twitter, Anders G da Silva has posted a thread detailing how three of the movies he "purchased" have now disappeared, and how little Apple seems to care about this.

My guess is that with this tweet getting lots and lots of attention, Apple will eventually back down and "fix" the situation. But it shouldn't take going viral for you to not have the stuff you bought disappear thanks to a change in licensing. Indeed, it does seem like Apple telling users that they are "buying" content that might later disappear due to changes in licensing agreements could potentially be a deceptive practice, one that could lead to FTC or possibly state consumer protection claims.

Last year we had a podcast about a new book by two copyright professors about the "end of ownership" due to excessive copyright usage, and this is just yet another unfortunate example of what has happened when we lock everything up. You don't own what you've bought.

And, yes, it is not endorsing or advocating for piracy to note that this is one of the reasons why people pirate. Content that people pirate doesn't magically disappear when licenses change and giant multinational companies decide to reach into your library and memory hole your purchases. Don't want people to pirate so much? Stop doing this kind of anti-consumer bullshit.


Posted on Techdirt - 12 September 2018 @ 9:35am

EU Gives Up On The Open Web Experiment, Decides It Will Be The Licensed Web Going Forward

from the this-is-bad dept

Well, this was not entirely unexpected at this point, but earlier today the EU Parliament voted to end the open web and move to a future of a licensed-only web. It is not final yet, as the version adopted by the EU Parliament is different from the (even worse) version that was agreed to by the EU Council. The two will now need to iron out the differences, and then there will be a final vote on whatever awful consolidated version they eventually come up with. There will be plenty to say on this in the coming weeks, months and years, but let's just summarize what has happened.

For nearly two decades, the legacy entertainment industries have always hated the nature of the open web. Their entire business models were based on being gatekeepers, and a "broadcast" world in which everything was licensed and curated was perfect for that. It allowed the gatekeepers to have ultimate control -- and with it the power to extract massive rents from actual creators (including taking control over their copyright). The open web changed much of that. By allowing anyone to publicize, distribute and sell works by themselves, directly to end users, the middlemen were no longer important.

The fundamental nature of the internet was that it was a communications medium rather than a broadcast medium, and as such it allowed for permissionless distribution of content and communication. This has always infuriated the legacy gatekeepers, as it completely undermined the control and leverage they had over the market. If you look back at nearly every legal move by these gatekeepers over the last twenty-five years concerning the internet, it has always been about trying to move the internet away from an open, permissionless system back towards one that is closed, licensed, broadcast, and curated. There's historical precedent for this as well. It's the same thing that happened to radio a century ago.

For the most part, the old gatekeepers have not been able to succeed, but that changed today. The proposal adopted by the EU Parliament makes a major move towards ending the open web in the EU and moving to a licensed, curated one, which will limit innovation, harm creators, and only serve to empower the largest internet platforms and some legacy gatekeepers. As Julia Reda notes:

Today’s decision is a severe blow to the free and open internet. By endorsing new legal and technical limits on what we can post and share online, the European Parliament is putting corporate profits over freedom of speech and abandoning long-standing principles that made the internet what it is today.

The Parliament’s version of Article 13 (366 for, 297 against) seeks to make all but the smallest internet platforms liable for any copyright infringements committed by their users. This law leaves sites and apps no choice but to install error-prone upload filters. Anything we want to publish will need to first be approved by these filters, and perfectly legal content like parodies and memes will be caught in the crosshairs.

The adopted version of Article 11 (393 for, 279 against) allows only “individual words” of news articles to be reproduced for free, including in hyperlinks – closely following an existing German law. Five years after the ‘link tax’ came into force in Germany, no journalist or publisher has made an extra penny, startups in the news sector have had to shut down and courts have yet to clear up the legal uncertainty on exactly where to draw the line. The same quagmire will now repeat at the EU level – no argument has been made why it wouldn’t, apart from wishful thinking.
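To illustrate why upload filters are error-prone in the way described above, here is a deliberately naive toy filter (the fingerprint list and uploads are entirely hypothetical, and real content-ID systems are far more sophisticated): it cannot distinguish infringement from a parody or meme quoting the same material.

```python
# Toy upload filter: blocks any upload containing a "fingerprinted" snippet.
# A crude stand-in for real content-matching systems; snippets are hypothetical.
FINGERPRINTS = {"never gonna give you up"}

def filter_upload(text: str) -> bool:
    """Return True if the upload is allowed, False if blocked."""
    lowered = text.lower()
    return not any(fp in lowered for fp in FINGERPRINTS)

print(filter_upload("my vacation video"))                        # allowed
print(filter_upload("parody: never gonna give you up (remix)"))  # blocked
```

The parody in the second call is perfectly legal content, but the filter has no concept of context, licensing, or exceptions -- only matching. That is the core objection to mandating filters under liability pressure.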

This is a dark day for the open internet in the EU... and around the world. Expect the same gatekeepers to use this move by the EU to put pressure on the US and lots of other countries around the world to "harmonize" and adopt similar standards in trade agreements.

I know that many authors, musicians, journalists and other content creators cheered this on, incorrectly thinking that it was a blow to Google and would magically benefit them. But they should recognize just what they've supported. It is not a bill designed to help creators. It is a bill designed to prevent innovation, lock up paths for content creators to have alternatives, and force them back into the greedy, open arms of giant gatekeepers.


Posted on Techdirt - 11 September 2018 @ 3:46pm

The Intellectual Dishonesty Of Those Supporting The Existing Text Of The EU Copyright Directive

from the ignorance-or-purposeful-misdirection dept

As the EU gets ready to vote (again) on various amendments to the EU Copyright Directive, there has been an incredibly dishonest push by supporters of the original directive (who often incorrectly claim to have creators' best interests at heart) to argue that the warnings of those who think these proposals are dangerous are misleading. What they are doing is unfortunate, but it deserves to be called out -- because of just how dishonest it is. Their arguments usually involve misrepresenting the law and its impact in order to distort what will actually happen.

There are numerous examples of this in practice, but I'll use this article in the German site FAZ as just one example of the kind of rhetoric being used, as it is an impressively intellectually bankrupt version of the argument I'm seeing quite a bit lately. It was written by a guy named Volker Rieck who has shown up in a bunch of places attacking critics of the EU Copyright Directive. He apparently runs some sort of anti-piracy organization, which perhaps shouldn't be surprising. But, that doesn't excuse the sheer dishonesty of his arguments.

Very early in the process, the only MEP from the Pirate Party, Julia Reda, began to fight the propositions. For her campaign, she made very strong use of distortion and simplification. The word "link tax"..., by way of which Reda wanted to stop Article 11 of the policy, may be catchy, but there is something unwittingly comical to the earnest suggestion that there is a tax, collected by the tax office, on using links to online pieces of writing.

This is... odd. The word "tax" is used in a variety of contexts to describe the excess costs of certain proposals. Nothing about it suggests a "tax office" will be involved. But the "link tax" is quite real. The whole point of Article 11 is to create a new form of license -- requiring certain sites to pay for nearly every use of media content. Let's be clear, because it often gets lost in the discussion: all of this content is already covered by copyright. At issue is whether or not one can link to it and include a short summary of the contents without having to pay a license above and beyond what one would have to pay to license the content itself. And this is not an ambiguous issue. In the latest draft of the proposal from MEP Axel Voss, it's pretty explicit that the link tax is about "obtaining fair and proportionate remuneration for such uses." The following is directly from the text of Voss's proposed amendment (which is more or less the "default" plan for the Copyright Directive, as he's the main MEP behind the Directive):

Online content sharing service providers perform an act of communication to the public and therefore are responsible for their content and should therefore conclude fair and appropriate licensing agreements with rightholders.

It is absolutely a tax to require a license for such uses. And while Voss has included an escape clause saying that this "does not extend to acts of hyperlinking with respect to press publications," it is left entirely vague how to distinguish when a link with some basic link text is allowed without a license and when it needs to be licensed. Indeed, Voss's only real limitation is that the rules "shall not extend to mere hyperlinks, which are accompanied by individual words." Individual words. What goes beyond "individual"? Considering that individual means "single" or "one," it seems clear that under Voss's definition, accompanying a link with two words may subject you to requiring a license to link. This is even worse than the awful German law, which only required licenses for something beyond "short" phrases (where even "short" was not clearly defined).

Back to the awful FAZ piece:

The polemical buzzword "upload filter", to oppose Article 13 of the policy, wasn't much better. Upload filters are not, and were never, part of the proposal, but the word works well in fueling fears. Indeed, Julia Reda managed to convince some of her supporters that if the policy on copyright law is passed, everything on the internet will be filtered, and memes – yes, those beloved memes – will be forbidden altogether.

The fact that the policy says something completely different was of no more than marginal interest. According to the actual proposal, web platforms – and only web platforms – would have been obliged to enter into license agreements with the individual right owners of user-uploaded content or the copyright collectives by which the content is maintained.

This is particularly galling in just how dishonest it is. Saying that this won't impact users, but merely platforms, is bullshit. How do most users communicate these days? On platforms. And saying that platforms then have to license all content ignores that the "cost" of that is passed along to the users. That "cost" isn't just in monetary terms, either. It will undoubtedly come in the form of perfectly non-infringing works being taken offline entirely, either because of accidental identification or malicious takedown efforts.

Sure, some people could try to post content on their own sites, but how long will it take until those who support Article 13 move down the stack and argue that hosting companies who allow users to host their own websites are in the same classification as the platforms who are required to obtain licenses under the law?

It gets worse:

In this scenario, it’s the platforms who are responsible for license payments; users have nothing to do with it.

I mean, come on. The platforms are the arbiters of end users' speech in this case. Of course users have everything to do with it. If it's too costly, the platforms will default to blocking the content rather than allowing it. And, again, any costs will be passed on from the platforms to the users in some form or another.

It would simply have meant a duty for the platforms to be transparent in order to comprehensively account for the licensing and to correctly forward the payments to the respective right owners. If a platform didn’t want to enter such a license agreement, the EU policy would at least hold that platform responsible to keep its own website clean. How it achieves that is up to the platform itself, as long as it prevents copyright infringements.

This is also particularly dishonest. If a platform doesn't want to enter into such a license... they would be responsible for keeping their website clean. And how would they possibly do that? They'd be required to pay for an incredibly expensive (and ineffective) upload filter. So to claim that this isn't a proposal for upload filters is utter nonsense.

Also, the whole "it's up to the platform, as long as it prevents copyright infringement" is fantasy land thinking, as if there's some solution that magically stops all copyright infringement. Whoever wrote this is incredibly dishonest or ignorant of how the world works. There is no solution that prevents all copyright infringement -- other than not existing at all.
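There is, in fact, no filtering technique that can "prevent" infringement without collateral damage. As a deliberately naive sketch (this is not any real filter's implementation -- the fingerprint database, function name, and sample "works" are all hypothetical), a fingerprint-matching upload filter fails in both directions at once:

```python
import hashlib

# Hypothetical database of fingerprints for claimed/licensed works.
# Real filters use perceptual fingerprints rather than exact hashes,
# but the failure modes sketched here are structurally the same.
CLAIMED_FINGERPRINTS = {
    hashlib.sha256(b"famous-song-audio-track").hexdigest(),
}

def upload_filter(content: bytes) -> str:
    """Block any upload whose fingerprint matches a claimed work."""
    fingerprint = hashlib.sha256(content).hexdigest()
    return "BLOCKED" if fingerprint in CLAIMED_FINGERPRINTS else "ALLOWED"

# Overblocking: a lawful parody that reuses the original audio matches
# the fingerprint and is blocked -- the filter cannot see the legal
# context (parody, quotation, license) that makes the use lawful.
parody = b"famous-song-audio-track"
print(upload_filter(parody))            # BLOCKED

# Underblocking: a plainly infringing copy that has been re-encoded
# no longer matches the fingerprint and sails straight through.
pirated_reencode = b"famous-song-audio-track-reencoded"
print(upload_filter(pirated_reencode))  # ALLOWED
```

Systems like ContentID use far more robust matching than this toy example, but the structural problem remains: a filter sees only whether content matches, never the legal context that determines whether a given use actually infringes.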

Unfortunately, though, many of those who have joined the discussion have refused to put in the intellectual effort to read the proposal in its updated form and understand its intention. This goes for everyone all the way from web associations of political parties to journalist Sascha Lobo, who wrote of "censorship machines"... in "der Spiegel". If only they had read what they publicly decry! Then maybe they would have realised that for the first time, users of platforms that don’t license content would have had substantial leverage, including a right to mediation in the case of the blocking of content. At that point, at the latest, it should have become clear that the term "censorship" misses the mark. Perhaps it was simply too complicated to get hold of and understand the current version of the document?

Leverage? What leverage? First, if the law requires you not to allow any infringement, you have no leverage at all. Second, the concern about censorship is not at all made up. We know it's real because we see it happen all the time under existing notice-and-takedown regimes, which are significantly less extreme and less draconian than what's required under Article 13. The censorship comes from platforms seeking to avoid significant liability (and costly trials). They are incentivized (heavily) to take down content to avoid the risk and liability. And thus, they will take down lots and lots of content rather than risk it -- especially when held to ridiculous standards like preventing all infringement from appearing on their platforms.

The dishonesty continues:

But let’s talk about the platforms, since they are the ones affected by this. More specifically, let’s talk about one of the most successful platforms: Youtube. It’s exclusively platforms like Youtube that the policy addresses. Not start-ups, not online shops, and not open source platforms.

This is blatantly untrue. As we noted back in July, those behind the EU Copyright Directive explicitly said the opposite. Here's what they said:

Any platform is covered by Article 13 if one of their main purposes is to give access to copyright protected content to the public.

It cannot make any difference if it is a “small thief” or a “big thief” as it should be illegal in the first place.

Small platforms, even a one-person business, can cause as much damage to right holders as big companies, if their content is spread (first on this platform and possibly within seconds throughout the whole internet) without their consent.

That's from the committee that voted on the Directive. So to say it only targets platforms like YouTube, when the crafters of the law itself say that it applies to small platforms and even one-person businesses, shows just how dishonest supporters are being about all of this. Separately, it's obvious that it doesn't just apply to YouTube, because YouTube already complies with Article 13 via things like ContentID. To argue that the law is targeting YouTube is ridiculous: why write an entirely new law just to say "that thing you're already doing, yeah, keep that up"? The author of the FAZ piece then goes on to spout all sorts of nonsense about Content ID.

For years, Youtube has used a system called Content ID, which allows right owners who have uploaded their content to the platform to decide what happens to it if and when it’s used. This ranges from monetarisation – if, for instance, a user uploads a video which includes music, the right owner of that music receives a portion of the video’s ad revenue – to the blocking of the video. Above all else, it’s meant to prevent third parties from making money using other people’s content.

But it gets better still. A system called Copyright Match, which Youtube developed for its channel owners, is just now ready to be put into practice. It is, as it were, a "Content ID" light, and is mainly intended to assist Youtubers in reacting to identical videos. The user who uploaded the video first automatically receives a message and gets to decide what happens to the duplicate, including the possibility to block it.

Is there anybody out there who’d brand this "censorship"? Apparently not – after all, there have been no demonstrations against Content ID and Copyright Match. We haven’t seen public outrage against Youtube’s "censorship machine".

Anyone claiming that there hasn't been outrage over ContentID taking down all sorts of legitimate content simply has no legitimate argument for being part of this debate. There has been massive and sustained outrage over ContentID and how it takes down all sorts of legitimate content. We've had probably over a dozen posts on Techdirt alone about bogus takedowns via ContentID, and people have been highlighting the problems of ContentID leading to inappropriate censorship for nearly a decade.

If someone is going to insist that (1) Article 13 only targets platforms like YouTube, even when the authors of the law insist that's not true, and (2) state that no one complains about ContentID takedowns, they have no business arguing that the attacks on the EU Copyright Directive are untruthful. They are ignorant or lying. Neither is a good look.

The rest of the article is out-and-out conspiracy theory talking, including (I kid you not) accusations of George Soros' involvement in fighting against the Copyright Directive. And yet, amazingly, some people are taking this shit seriously. It is not serious. It is blatantly dishonest and should be treated as such.


Posted on Techdirt - 11 September 2018 @ 10:53am

Facebook Responds To Blackberry's Silly 117 Page Patent Lawsuit With Its Own Silly 118 Page Lawsuit

from the really-guys? dept

Blackberry, the Canadian company that briefly made semi-popular devices for people at companies thanks to their physical keyboards, has increasingly become more of a patent troll than a device maker. While the company was on the losing end of one of the most famous pure patent troll cases of the past few decades, we have noted in the past that the very reason the trolling operation NTP sued Blackberry (then RIM) was RIM/Blackberry's own ridiculously aggressive patent shakedowns of other companies, which caught the attention of NTP's principals in the first place. Now that demand for actual devices from Blackberry has shrunk to "wait, those guys still exist?" levels, it's focused again on patent shakedowns.

Back in March, the company sued Facebook, claiming that Facebook was infringing on some fairly basic concepts related to mobile messaging. While there were a number of different patents and claims in the original 117-page complaint, many of them are clearly bonkers. There is no reason why this stuff should be patented at all. Take, for example, US Patent 8,209,634 for "Previewing a new event on a small screen device." Believe it or not, Blackberry has patented adding a little dot showing you how many unread messages you have. Really.

The Blackberry complaint goes on at length about just how amazing and unknown this kind of thing was before this patent (which is utter nonsense):

Given the state of the art at the time of the invention of the ’634 Patent, the inventive concepts of the ’634 Patent were not conventional, well-understood, or routine. The ’634 Patent discloses, among other things, an unconventional and technological solution to an issue arising specifically in the context of wireless communication devices and electronic messaging received within those devices. The solution implemented by the ’634 Patent provides a specific and substantial improvement over prior messaging notification systems, resulting in an improved user interface for electronic devices and communications applications on those devices, including by introducing novel elements directed to improving the function and working of communications devices such as, inter alia, the claimed “visually modifying at least one displayed icon relating to electronic messaging to include a numeric character representing a count of the plurality of different messaging correspondents for which one or more of the electronic messages have been received and remain unread” (claims 1, 7, and 13), “displaying on the graphical user interface an identifier of the correspondent from whom at least one of the plurality of messages was received” (claim 5), and “displaying on the graphical user interface at least one preview of content associated with at least one of the received electronic messages” (claim 6), “[executable / machine-readable] instructions which, when executed, cause the wireless communication device to visually modify the graphical user interface to include an identifier of the correspondent from whom at least one of the plurality of messages was received” (claims 11 and 17), “[executable / machine-readable] instructions which, when executed, cause the wireless communication device to visually modify the graphical user interface to include at least one preview of content associated with at least one of the received electronic messages” (claim 12 and 18).

That's a load of claptrap. The reason icons didn't historically show a number for unread messages had nothing to do with an "unconventional and technological solution," but with the fact that the resolution of small screens wasn't good enough to make this viable. Once the technology caught up, the very obvious way to display such information became standard fairly quickly. But, alas, Blackberry claims that Facebook is clearly infringing because of this:

What a load of nonsense. There's a lot more like this in the complaint, involving patents that clearly never should have been granted and are likely invalid post-Alice.

Anyway, that was all back in March. The reason I'm bringing it up again now is that Facebook has sued Blackberry for patent infringement in a strikingly similar lawsuit. Indeed, I'd almost think that Facebook's lawyers at Cooley were trolling Blackberry's lawyers by making their complaint 118 pages to Blackberry's 117-page complaint against Facebook. You may recall, back in 2012, that Facebook (with an assist from Microsoft) bought a bunch of patents from a struggling AOL, in an effort to keep them out of the hands of trolls. The new suit involves claims that are suspiciously just as stupid and ridiculous as the ones in Blackberry's lawsuit against Facebook.

Again, if there weren't potentially billions of dollars at stake, I'd really think that Facebook was trolling Blackberry with these claims. Take, for example, the claims around US Patent 8,429,231 for "voice instant messaging." The heart of the patent is having a button on a text instant messaging app that allows you to shift the conversation to voice. An image example from the patent:

And... the corresponding image of how Blackberry is supposedly infringing on this patent:

I honestly can't decide which of the two examples above is a stupider patent -- the unread messages bubble or the click-to-call button.

It is entirely possible that, as was done in the good old days of patent nuclear wars, the intention here is just to get the two sides to agree to some sort of cross licensing deal and to walk away from the courthouse -- but what a massive waste of time, money and resources this is, all over some fairly basic UI features that never should have been patented in the first place.


Posted on Techdirt - 11 September 2018 @ 3:31am

Creators Supporting Link Taxes And Mandatory Filters Are Handing The Internet Over To The Companies They Hate

from the be-careful-what-you-wish-for dept

On Wednesday, the EU Parliament will vote yet again on the EU Copyright Directive and a series of amendments that might fix some of the worst problems of the Directive. MEP Julia Reda has a detailed list of many of the proposals and what they would do to the current proposals on the table. While there are a few attempts to "improve" Articles 11 and 13, many of those improvements are, unfortunately, very limited in nature, and will still create massive problems for the way the internet works.

Unfortunately, as with the situation earlier this year, many groups claiming to represent content creators are arguing in support of the original proposals, and spreading pure FUD about the attempts to fix them. Author Cory Doctorow has a very thorough debunking of each of their talking points. Here's just a snippet:

Niall says that memes and other forms of parody will not be blocked by Article 13's filters, because they are exempted from European copyright. That's doubly wrong.

First, there are no EU-wide copyright exemptions. Under the 2001 Copyright Directive, European countries get to choose zero or more exemptions from a list of permissible ones.

Second, even in countries where parody is legal, Article 13's copyright filters won't be able to detect it. No one has ever written a software tool that can tell parody from mere reproduction, and such a thing is so far away from our current AI tools as to be science fiction (as both a science fiction writer and a Visiting Professor of Computer Science at the UK's Open University, I feel confident in saying this).

But there's an even larger point that makes it so incredibly frustrating that we've been seeing content creators claim to support the existing draft in order to get back at Google and Facebook: these rules will lock in the giant internet companies as the only major internet platforms and block out any new upstarts that might compete with them. Cory explains it this way:

Niall says Article 13 will not hurt small businesses, only make them pay their share. This is wrong. Article 13's copyright filters will cost hundreds of millions to build (existing versions of these filters, like Youtube's Content ID, cost $60,000,000 and only filter a tiny slice of the media Article 13 requires), which will simply destroy small competitors to the US-based multinationals.

What's more, these filters are notorious for underblocking (missing copyrighted works -- a frequent complaint made by the big entertainment companies...when they're not demanding more of these filters) and overblocking (blocking copyrighted works that have been uploaded by their own creators because they are similar to something claimed by a giant corporation).

Niall says Article 13 is good for creators' rights. This is wrong. Creators benefit when there is a competitive market for our works. When a few companies monopolise the channels of publication, payment, distribution and promotion, creators can't shop around for better deals, because those few companies will all converge on the same rotten policies that benefit them at our expense.

We've seen this already: once Youtube became the dominant force in online video, they launched a streaming music service and negotiated licenses from all the major labels. Then Youtube told the independent labels and indie musicians that they would have to agree to the terms set by the majors -- or be shut out of Youtube forever. In a market dominated by Youtube, they were forced to take the terms. Without competition, Youtube became just another kind of major label, with the same rotten deals for creators.

I'd argue that Cory's explanation even understates the problem here. The very design of these laws is to limit competition. What is often ignored in these discussions is that the record labels, movie studios and publishers pushing for these laws have always viewed the world in a particular way: they "negotiate" against other big companies over how best to split up the pie. They don't want to negotiate with smaller companies. They want just a few companies to negotiate with -- preferably with the law in their favor, so they can pressure that small list of companies to do their bidding. They certainly don't care what's in the best interests of actual creators, because their entire reason for being has been to take as much money out of actual creators' pockets as possible and keep it for themselves.

The idea that Article 11 and Article 13 will in any way help creators, rather than legacy gatekeepers, is laughable. The idea that they will somehow harm the internet giants is equally laughable. Those companies can deal with it. What these rules will do is take upstart competitors out of the equation entirely and significantly reduce negotiating leverage for creators. In the recent past, when creators didn't like the deals offered by the major labels, publishers and studios, internet platforms offered them an excellent alternative, giving them negotiating power. But, with the EU Copyright Directive, those third-party platforms will be limited, and thus actual creators will have much less negotiating leverage, many fewer options, and will be pushed back into exploitative contracts with the legacy gatekeepers. It's unfortunate, then, that at least some creators have been led to believe these rules are actually in their interest, when the rules will do significant harm to them instead.


Posted on Techdirt - 10 September 2018 @ 9:35am

DOJ And State Attorneys General Threatening Social Media Companies Over Moderation Practices Is A First Amendment Issue

from the that's-not-how-any-of-this-works dept

Earlier this month, President Trump made it explicitly clear that he expects Jeff Sessions' DOJ to use its power for political purposes, protecting his friends and going after his enemies:

And, while the DOJ hasn't done that concerning indictments of Trump's friends and cronies, it appears that Sessions may be moving in that direction with another of Trump's perceived "enemies." Over the last few weeks Trump has also made it clear that he (incorrectly) believes that the big internet companies are deliberately targeting conservatives, and has threatened to do something about it.

On Wednesday, just after Twitter and Facebook appeared before Congress, the DOJ released a statement saying that it was investigating whether or not actions by the big internet companies were "intentionally stifling the free exchange of ideas." The full statement was short and to the point:

We listened to today's Senate Select Committee on Intelligence hearing on Foreign Influence Operations' Use of Social Media Platforms closely. The Attorney General has convened a meeting with a number of state attorneys general this month to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms.

The competition question is one that the DOJ's antitrust division clearly has authority over, but alarms should be raised about the DOJ or state AGs arguing that these platforms are "stifling the free exchange of ideas on their platforms." Because while -- on its face -- that might sound like it's supporting free speech, it's actually an almost certain First Amendment violation by the DOJ and whatever state AGs are involved.

There are lots and lots of cases on the books about this: government entities aren't supposed to be in the business of telling private businesses what content they can or cannot host. Cases such as Near v. Minnesota and Bantam Books v. Sullivan have long made it clear that governments can't be in the business of regulating the speech of private organizations -- though those cases concerned regulations to suppress speech.

But there are related cases on compelled speech. Most famous, perhaps, is West Virginia State Board of Education v. Barnette, which held that schools can't make kids say the Pledge of Allegiance. In that case, the court ruled:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.

Forcing platforms to carry speech would clearly go against that.

Miami Herald v. Tornillo seems even more directly on point. It was a response to a Florida state law demanding "equal space" for political candidates, but the court ruled, pretty definitively, that as private publications, newspapers could not be compelled by the government to host speech they did not want to host. The ruling even discussed the issue of a lack of competition -- which Sessions' statement alludes to -- and concluded that's not an excuse for compelling speech. And in CBS v. Democratic National Committee, the Supreme Court clearly noted:

The power of a privately owned newspaper to advance its own political, social, and economic views is bounded by only two factors: first, the acceptance of a sufficient number of readers -- and hence advertisers -- to assure financial success; and, second, the journalistic integrity of its editors and publishers.

In other words, if a private speech hosting platform is too one-sided, that is for the market to decide, not the government.

So, yeah, there are concerns raised here about freedom of expression... but the threat to it comes from Attorney General Jeff Sessions and whichever State Attorneys General decide to participate in this clown show. Oh, and just to put a little more emphasis on why this is clearly a political move designed to suppress free speech rights? So far only Republican Attorneys General have been invited -- a point I'm sure any court would take note of.


Posted on Techdirt - 7 September 2018 @ 1:39pm

Vermont's Revenge Porn Law Ruled Constitutional... With An Incredibly Confused Ruling

from the not-how-it-works dept

Revenge porn -- or, more accurately, "non-consensual pornography" -- is unquestionably bad. We've spent plenty of time mocking the jackasses who have been involved in these awful sites, and have been happy to see them flail around as the stench of their association with these sites sticks.

However, we have not supported the attempts by a small group of legal academics to criminalize running such a site, for a variety of reasons. First, such an action would make plenty of protected speech illegal, causing massive collateral damage to speech and internet platforms. Second, as we've repeatedly documented, these revenge porn sites don't seem to last very long, and those involved with them end up with a fairly permanent stain on their reputations. Third, in many cases, the people running these sites often seem to have already violated other laws, which law enforcement can use to go after them.

In recent years, the Supreme Court has made it pretty clear that it has little interest in expanding the categories of speech that are exempted from the First Amendment. I've often pointed to lawyer Mark Bennett's 2014 blog post entitled First Amendment 101, in which he details the very short list of speech that is not protected by the First Amendment. That post is actually about attempts to outlaw revenge porn and claims that it's not protected by the First Amendment, but the list is a useful one to point to any time anyone suggests that this or that speech shouldn't be subject to the First Amendment.

Some people insist that revenge porn would clearly be exempt from the First Amendment because it's so bad. But they ignore that, in recent years, the Supreme Court has made it clear that content as awful as video depictions of cruelty to animals and picketing military funerals with truly hateful signs is protected under the First Amendment. The Supreme Court has its very short and narrow list of exceptions, and hasn't shown any indication that it's ready to expand that list.

Indeed, the very same Mark Bennett, earlier this year, helped get a Texas revenge porn law declared unconstitutional. The court there recognized that the law ran afoul of the First Amendment: it criminalized a new category of speech not currently exempted, and it could not survive strict scrutiny, the standard the Supreme Court requires for any legislation that includes content-based restrictions.

But Mark Bennett is now reasonably perturbed that the Supreme Court of Vermont has decided that that state's revenge porn law is constitutional. And part of the reason he's so perturbed is that the ruling is truly bizarre. It accurately notes that revenge porn does not fall into one of the delineated exceptions to the First Amendment... but then (surprisingly) holds that the law can still withstand strict scrutiny:

For the reasons set forth below, we conclude that “revenge porn” does not fall within an established categorical exception to full First Amendment protection, and we decline to predict that the U.S. Supreme Court would recognize a new category. However, we conclude that the Vermont statute survives strict scrutiny as the U.S. Supreme Court has applied that standard.

That's... very strange. Usually, once a court recognizes that speech does not fall into an exempted bucket, it finds a law restricting that speech to be unconstitutional. Here, Vermont is carving out new territory. Thankfully, the finding that revenge porn is not in an already exempted bucket is a good thing, as it wipes out the incorrect claim by some law professors that you could simply declare revenge porn obscene (which would be very problematic). The court correctly highlights the massive differences between what is obscene and what is revenge porn, and notes (correctly again) that the Supreme Court is loath to expand the definition of obscenity:

We recognize that some of the characteristics of obscenity that warrant its regulation also characterize nonconsensual pornography, but we take our cues from the Supreme Court’s reluctance to expand the scope of obscenity on the basis of a purpose-based analysis.

Next, the court (correctly!) says it's in no position to create a new category of exempted speech:

Although many of the State’s arguments support the proposition that the speech at issue in this case does not enjoy full First Amendment protection, we decline to identify a new categorical exclusion from the full protections of the First Amendment when the Supreme Court has not yet addressed the question.

Indeed, the Vermont Supreme Court highlights how frequently the US Supreme Court has been tossing out laws that try to create new categories of unprotected speech:

[W]e decline to predict that the Supreme Court will add nonconsensual pornography to the list of speech categorically excluded. We base our declination on two primary considerations: the Court’s recent emphatic rejection of attempts to name previously unrecognized categories, and the oft-repeated reluctance of the Supreme Court to adopt broad rules dealing with state regulations protecting individual privacy as they relate to free speech.

More than once in recent years, the Supreme Court has rebuffed efforts to name new categories of unprotected speech. In Stevens, the Court emphatically refused to add “depictions of animal cruelty” to the list, rejecting the notion that the court has “freewheeling authority to declare new categories of speech outside the scope of the First Amendment.” 559 U.S. at 472. The Court explained, “Maybe there are some categories of speech that have been historically unprotected, but have not yet been specifically identified or discussed as such in our case law. But if so, there is no evidence that ‘depictions of animal cruelty’ is among them.” Id. A year later, citing Stevens, the Court declined to except violent video games sold to minors from the full protections of the First Amendment. Brown, 564 U.S. at 790-93 (“[N]ew categories of unprotected speech may not be added to the list by a legislature that concludes certain speech is too harmful to be tolerated.”). And a year after that, the Court declined to add false statements to the list. Alvarez, 567 U.S. at 722 (affirming appeals court ruling striking conviction for false statements about military decorations).

More significantly, as set forth more extensively above... in case after case involving a potential clash between the government’s interest in protecting individual privacy and the First Amendment’s free speech protections, the Supreme Court has consistently avoided broad pronouncements, and has defined the issue at hand narrowly, generally reconciling the tension in favor of free speech in the context of speech about matters of public interest while expressly reserving judgment on the proper balance in cases where the speech involves purely private matters. The considerations that would support the Court’s articulation of a categorical exclusion in this case may carry great weight in the strict scrutiny analysis.... But we leave it to the Supreme Court in the first instance to designate nonconsensual pornography as a new category of speech that falls outside the First Amendment’s full protections.

So then why doesn't the court declare this law unconstitutional? Well, that has lawyers like Mark Bennett and Eric Goldman perplexed. To pass "strict scrutiny," a law must further a "compelling government interest" and be "narrowly tailored" to address just the issue for which the government has such a compelling reason.

Here, the court finds that there is a compelling government interest: revenge porn images are not a matter of public concern, and the serious harms revenge porn creates give the government a compelling interest in outlawing such content. Fair enough. But what about the "narrowly tailored" part? That seems like where such a law should fall down, but nope:

Section 2606 defines unlawful nonconsensual pornography narrowly, including limiting it to a confined class of content, a rigorous intent element that encompasses the nonconsent requirement, an objective requirement that the disclosure would cause a reasonable person harm, an express exclusion of images warranting greater constitutional protection, and a limitation to only those images that support the State’s compelling interest because their disclosure would violate a reasonable expectation of privacy. Our conclusion on this point is bolstered by a narrowing interpretation of one provision that we offer to ensure that the statute is duly narrowly tailored. The fact that the statute provides for criminal as well as civil liability does not render it inadequately tailored.

But, of course, the real problem is that all of these laws criminalize tons of content that should otherwise be protected. And here, the court more or less ignores that, saying that the potentially overbroad nature of the law wasn't raised by the defendant:

The Supreme Court has recognized that in a facial challenge to a regulation of speech based on overbreadth, a law may be invalidated if “a substantial number of its applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.” Id. at 473 (quotation omitted). Defendant here does not frame his challenge to the statute as an overbreadth challenge but instead argues that insofar as the speech restricted by the statute is content-based, the statute is presumptively invalid and fails strict scrutiny review.

But, as Mark Bennett highlights, the court completely misses that "overbreadth" is precisely what you check to determine whether a statute is "narrowly tailored." Because no one raised the "overbreadth" issue, the court says it doesn't need to bother with it, and instead declares the law narrowly tailored based on how it is written, with its "rigorous intent element." But that's not how the test works. As Bennett explains:

To pass strict scrutiny, a restriction must be narrowly tailored. It is logically impossible for a statute to be both overbroad and narrowly tailored. Strict scrutiny and overbreadth are not separate analyses. If a content-based restriction is substantially overbroad—if it restricts a real and substantial amount of constitutionally protected speech—it is ipso facto not narrowly tailored, and it fails strict scrutiny.

This is a confused mess of a ruling. As Eric Goldman notes, it's possible the case could be appealed to the US Supreme Court, though it's unlikely that such a petition would be granted. Still, it does seem likely that the Supreme Court will eventually need to take up this issue to clear up the confusion. But, in the meantime, the law in Vermont stands.
