Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick



Posted on Techdirt - 22 February 2018 @ 10:42am

Disney's Stupid Lawsuit Against Redbox Results In Judge Saying Disney Is Engaged In Copyright Misuse

from the blowback dept

Well, well. For the past few months I've been meaning to write about Disney's silly lawsuit against Redbox, but other stuff kept coming up, and now a judge has ruled against Disney and said that Disney appears to be engaged in copyright misuse. This is in a case that Disney brought -- and it appears to be backfiring badly. Redbox, as you probably know, has kiosks where you can rent DVDs relatively cheaply. It's managed to stay alive despite the traditional DVD rental business disappearing most everywhere else. About a decade ago, Hollywood fought vigorously against Redbox, but the company survived (though it was taken over by a private equity firm in 2016), relying heavily on first sale rights, which enable it to legally purchase DVDs and then rent them out.

Back in December, however, Disney sued Redbox for taking its business to the next level and offering download codes that could be purchased at a Redbox kiosk. Though it took them basically forever, Hollywood studios have finally realized that offering online access with the purchase of movies is a good idea, but they only want the end consumer who is buying a DVD to get access to those digital copies. So, Redbox would buy the Disney "Combo Packs" that included the DVD and a download code, and then offer the paper codes in its kiosks to let renters watch the movie online. Redbox wasn't just copying the code and letting anyone use it -- there was still a one-to-one limit tied to each purchase, in that Redbox would buy the DVD with a paper code on it, and then stuff that paper code into its kiosk delivery pods. Disney argued that this was contributory copyright infringement, even though the code pointed to a legitimate/authorized version of the movie and was legitimately purchased.

Redbox hit back by arguing that the First Sale doctrine protected it (as it did with the physical rentals) and that it is free to use the codes in this manner as the legal purchaser. Disney's response to that was that First Sale does not apply to the download code because it's not the copyright-covered work.

But Redbox also hit back with a separate punch against Disney, arguing that it was engaged in copyright misuse, a concept we've discussed in the past, but that rarely shows up in cases these days (even though we've argued it should be used more often). The basic argument was that Disney was over-claiming what copyright allowed it to exclude in order to stamp out competition. And, somewhat surprisingly, in the process of denying Disney's demand for a preliminary injunction, the court agrees that Disney is engaged in copyright misuse, because it is using its copyright in the movies to restrict what purchasers can do with the copies they've bought.

Combo Pack purchasers cannot access digital movie content, for which they have already paid, without exceeding the scope of the license agreement unless they forego their statutorily-guaranteed right to distribute their physical copies of that same movie as they see fit. This improper leveraging of Disney’s copyright in the digital content to restrict secondary transfers of physical copies directly implicates and conflicts with public policy enshrined in the Copyright Act, and constitutes copyright misuse.

Because of this, the court finds that Disney has little chance of prevailing on its contributory copyright infringement claims and denies the injunction request.

The court then notes that it doesn't even need to get into the First Sale issues, but suggests that Redbox would have difficulty winning on a pure first sale argument, mainly because of the ReDigi decision, which said you can't sell "used" MP3s. The court then concludes that First Sale doesn't really come into play anyway, since it's the code that's at issue rather than the copyright-covered content:

Notwithstanding ReDigi, the plain language of the statutes, and the important policy considerations described by the Copyright Office, Redbox urges this court to conclude that Disney’s sale of a download code is indistinguishable from the sale of a tangible, physical, particular copy of a copyrighted work that has simply not yet been delivered. Even assuming that the transfer is a sale and not a license, and putting aside what Disney’s representations on the box may suggest about whether or not a “copy” is being transferred, this court cannot agree that a “particular material object” can be said to exist, let alone be transferred, prior to the time that a download code is redeemed and the copyrighted work is fixed onto the downloader’s physical hard drive. Instead, Disney appears to have sold something akin to an option to create a physical copy at some point in the future. Because no particular, fixed copy of a copyrighted work yet existed at the time Redbox purchased, or sold, a digital download code, the first sale doctrine is inapplicable to this case.

There's a separate issue around whether or not Redbox's actions constituted a "breach of contract," and again the court is unimpressed. The key question is whether or not the text that Disney prints on its box about how "codes are not for sale or transfer" represents a contract. The court easily concludes that it does not:

The phrase “Codes are not for sale or transfer” cannot constitute a shrink wrap contract because, like the box at issue in Norcia, Disney’s Combo Pack box makes no suggestion that opening the box constitutes acceptance of any further license restrictions.... Although Disney seeks to analogize its Combo Pack packaging and language to the packaging and terms in Lexmark, the comparison is inapt. The thorough boxtop license language in Lexmark not only provided consumers with specific notice of the existence of a license and explicitly stated that opening the package would constitute acceptance, but also set forth the full terms of the agreement, including the nature of the consideration provided, and described a post-purchase mechanism for rejecting the license. Here, in contrast, Disney relies solely upon the phrase “Codes are not for sale or transfer” to carry all of that weight. Unlike the box-top language in Lexmark, Disney’s phrase does not identify the existence of a license offer in the first instance, let alone identify the nature of any consideration, specify any means of acceptance, or indicate that the consumer’s decision to open the box will constitute assent. In the absence of any such indications that an offer was being made, Redbox’s silence cannot reasonably be interpreted as assent to a restrictive license.

Of course, this almost certainly means that Disney is quickly reprinting the packaging on all its Combo Pack DVDs to make this language more legalistic to match the Lexmark standard.

Still, the court also notes that Disney makes other claims on the box that are clearly not true, which further undermine the claim that random sentences on the box represent a contract:

Indeed, the presence of other, similarly assertive but unquestionably non-binding language on the Combo Pack boxes casts further doubt upon the argument that the phrase “Not For Sale or Transfer” communicates the terms or existence of a valid offer. The packaging also states, for example, that “This product . . . cannot be resold or rented individually.”... This prescription is demonstrably false, at least insofar as it pertains to the Blu-ray disc and DVD portions of the Combo Pack. The Copyright Act explicitly provides that the owner of a particular copy “is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy.” ... Thus, the clearly unenforceable “cannot be resold individually” language conveys nothing so much as Disney’s preference about consumers’ future behavior, rather than the existence of a binding agreement. At this stage, it appears that the accompanying “Not For Sale or Transfer” language plays a similar role.

While it's a bit disappointing to see the court buy into the ReDigi reasoning on First Sale, it's good to see it reject the argument that the language on the box represents a contract, and to call out the company for copyright misuse in leveraging copyrights to stifle other lawful activity. This case is likely far from over, though, so we'll see how things progress.


Posted on Techdirt - 22 February 2018 @ 9:32am

House Prepared To Rush Vote On Terrible Frankenstein SESTA, Which Will Harm Trafficking Victims & The Internet

from the big-massive-mistake dept

Things had been mostly quiet on the SESTA/FOSTA front for the past few weeks, but apparently that's about to change, as the House leadership has agreed to a plan to rush the bill to a full floor vote next week, by creating a terrible Frankenstein of a bill that solves none of the existing concerns people had -- but creates new ones. If you don't recall, there are competing bills in the House (FOSTA) and the Senate (SESTA) which purportedly both attempt to deal with the problem of human traffickers using internet services to enable illegal trafficking. Both bills have serious flaws in how they attack the problem -- with the potential to actually make the problem of trafficking worse while also screwing up how the internet works (especially for smaller internet services) at the same time.

Things had been at a standstill for the past couple months as the House pushed its approach with FOSTA, while the Senate stood by its approach with SESTA. SESTA works by changing Section 230 of the Communications Decency Act to create a huge hole saying that CDA 230 doesn't apply if a site "knowingly facilitates" a violation of sex trafficking laws. If you don't have much experience with how similar laws work on the internet, this might sound reasonable, but in practice it's not. There's a similar "knowledge" standard in copyright law, and we've seen that abused repeatedly to censor all sorts of content over the years. You just need to allege that something violates the law, and a platform seeking to avoid potentially crippling liability is likely to remove that content. As I've noted, if the law passes, almost every internet company will be put at risk, from small blogs like ours to Wikipedia. The bill's backers seem to think this is a benefit rather than a problem -- which is quite incredible.

Until now, the House had been pushing an alternative proposal, called FOSTA, which tried to achieve similar results without punching a giant hole in CDA 230. Instead, it focused on creating a new crime for those with the intent to promote or facilitate prostitution. The intent standard is a much stronger one than the "knowledge" standard. There were still a couple of problems with FOSTA, though. Rather than focusing on sex trafficking, it covered all prostitution, which is too frequently lumped in with trafficking, and that worried many in the community of folks supporting the rights of sex workers. But a larger issue was that this would still open a huge hole for state and local prosecutors to go on massive fishing expeditions if any sort of prostitution-related content ended up on any website. Even if they couldn't show intent, they could still bog down almost any internet platform with charges and investigations for quite some time. I mean, even we get people trying to spam our comments all the time with what appear to be prostitution ads. We catch most of them, but what if a few get through and some law enforcement agency wants to make life difficult for us? Under FOSTA, that's a real possibility. Such laws can be abused.

Still, the approaches were so different that things appeared to mostly be at a standstill. However, as noted above, suddenly things are moving and moving fast... and in the worst possible way. House Leadership apparently decided that rather than convince the Senate to move to a FOSTA approach, they would just bolt SESTA onto FOSTA via an amendment. And then, suddenly the House bill has all the problems of both bills without fixing either.

That amendment was released yesterday and is being introduced by Rep. Mimi Walters of California. Her district includes Irvine, which houses a whole bunch of tech companies who should be absolutely furious that their own representative just made things much more difficult for them. Take, for example, JobzMall, an Irvine-based company for connecting workers and employers. It's not difficult to think of how some might try to abuse that tool for prostitution or trafficking -- and suddenly the site may face a ton of legal fights, fishing expeditions and criminal threats because of this. That seems like a huge, huge problem.

And, importantly, it cannot be stressed enough that nothing in either of these bills does anything at all to actually stop sex trafficking. Supporters of the bill keep insisting it's necessary to stop sex trafficking and that those opposed to the bills are somehow in favor of sex trafficking. That's just wrong. Those opposed to the bill know what happens when you have mis-targeted bills that hold platforms responsible for what users do with them -- and it's not that the "bad stuff" goes away. Instead, the bad stuff tends to continue, and lots of perfectly acceptable things get censored.

A recent paper by one of the world's foremost experts on "intermediary liability," Daphne Keller, explains why the bill won't work based on years and years of studying how these kinds of intermediary liability laws work in practice:

SESTA’s confusing language and poor policy choices, combined with platforms’ natural incentive to avoid legal risk, make its likely practical consequences all too clear. It will give platforms reason to err on the side of removing Internet users’ speech in response to any controversy – and in response to false or mistaken allegations, which are often levied against online speech. It will also make platforms that want to weed out bad user generated content think twice, since such efforts could increase their overall legal exposure.

And, again, NONE of that does anything to actually go after sex traffickers.

As Keller notes in her paper:

SESTA would fall short on both of intermediary liability law’s core goals: getting illegal content down from the Internet, and keeping legal speech up. It may not survive the inevitable First Amendment challenge if it becomes law. That’s a shame. Preventing online sex trafficking is an important goal, and one that any reasonable participant in the SESTA discussion shares. There is no perfect law for doing that, but there are laws that could do better than SESTA -- and with far less harm to ordinary Internet users. Twenty years of intermediary liability lawmaking, in the US and around the world, has provided valuable lessons that could guide Congress in creating a more viable law.

But instead of doing that, Congress is pushing through with something that doesn't even remotely attempt to fix the problems, but bolts together two totally separate problematic bills and washes its hands of the whole process. And, we won't even bother getting into the procedural insanity of this suddenly coming to the House floor for a vote early next week, despite the Judiciary Committee only voting for FOSTA, but not this SESTA-clone amendment.

This is a terrible idea, done in a terrible way and Congress seems to be doing it because it wants to "do something" about sex trafficking, without realizing what it's doing won't help stop sex trafficking, and could create massive harm for the wider internet. It's the perfectly dumb solution to the wrong problem. It's a very Congress-like approach to things, in which "doing something" is much more important than understanding the issues or doing the right thing.

And, once again, while some are incorrectly insisting that "the big tech companies" are against this, they are not. Their trade group, the Internet Association, came out in favor of SESTA's approach, and Facebook in particular has gone all in supporting the bill. From talking to people familiar with Facebook's thinking on this, it's clear the company recognizes that it can withstand whatever bullshit comes out of this, but it knows that smaller platforms cannot. And to Facebook, that is one of the benefits of SESTA. It weakens the competition and hurts smaller companies.

So, no, this is not "big tech" fighting SESTA. Basically every smaller internet platform I've spoken to is upset about this and trying to figure out how they're going to handle the inevitable mess this causes. But few are willing to speak out publicly, because they know that SESTA supporters will vocally attack them and falsely claim that they're "in favor of sex trafficking." Incredibly, most of those attacks will come on platforms that only exist because of CDA 230's strong protections.


Posted on Techdirt - 21 February 2018 @ 3:29pm

Court Realizes It Totally Screwed Up An Injunction Against Zazzle For Copyright Infringement

from the oops dept

Last year we wrote about a bizarre and troubling DMCA case involving the print-on-demand company Zazzle, in which the district court judge wrongly claimed that Zazzle lost its DMCA safe harbors because the allegedly infringing works were printed on a t-shirt, rather than remaining digital (even though it was the end user who supplied the infringing work, and Zazzle's system just processed it automatically). To add insult to injury, in November, the judge then issued a permanent injunction against Zazzle for this infringement.

However, it appears that no one is more troubled about this permanent injunction issued by Judge Stephen Wilson... than Judge Stephen Wilson.

In early February, Wilson released a new order reversing his earlier order and chastising himself for getting things wrong.

The Court finds that there is a “manifest showing” of the Court’s “failure to consider material facts presented to the Court” because the Court did not provide any justification for the permanent injunction.

Yes, that's the judge chastising himself for not providing the necessary justification for an injunction after finding Zazzle to not have safe harbors (some of the background here involves a question about which rules -- federal or local -- the court should be using to reconsider the earlier ruling, which isn't that interesting). Unfortunately, Wilson doesn't go back and revisit the DMCA safe harbors question -- this new ruling just focuses on why he was wrong to issue a permanent injunction after finding that the DMCA safe harbors didn't apply.

For that, the court notes (correctly, this time!) that under the Supreme Court's important MercExchange standard, courts should be careful about issuing injunctions for infringement, and that plaintiffs need to show irreparable harm, that other remedies aren't more appropriate, and that an injunction won't cause greater harm to the public. Here, the plaintiff failed to make those showings -- and somehow the court missed it.

In addition, the court does note that it never even considered the question about whether the artwork that is displayed on Zazzle's website, but not printed on t-shirts, still gives Zazzle safe harbor protections -- and the injunction would have applied to those works too, even though they might have been protected under the DMCA:

Plaintiff’s proposed permanent injunction was ambiguous and went beyond the issues at trial, facts which the Court did not consider when it granted the initial motion for a permanent injunction. Before trial, the Court never decided whether Zazzle had a viable DMCA defense as to images only displayed on Zazzle’s website and never physically manufactured. Dkt. 81. Plaintiff withdrew its claims as to such “display-only” artworks prior to trial, so the issue was not tried. Dkt. 110 at 2:11-25. As such, it is unclear whether the injunction applies to both the manufacture and distribution of physical goods, or also to display of images on the Zazzle website. It is also unclear if Zazzle must take “reasonable” steps to address the display of images on its website as well as its manufacture of products. The Court did not consider these material facts in determining the scope of the permanent injunction; upon reviewing these facts, the proposed injunctions go beyond the issues at trial.

It's good that the court has realized its own mistakes and fixed them -- though it would be nice if it went further to the point of recognizing the problems of saying that by printing an image on a physical good the DMCA protections disappear.

But, really, reading this new ruling, you almost (almost) feel bad for Judge Wilson as he complains about Judge Wilson's failings in this case:

The Court recognizes that it failed to consider material facts in granting the permanent injunction in October 2017. The Court also recognizes that it provided no rationale for the permanent injunction, manifestly showing the failure to consider such facts. Upon considering those facts, the Court finds no basis for a permanent injunction in this matter.

Don't be too hard on yourself, Judge. Admitting your mistakes is the first step.


Posted on Techdirt - 21 February 2018 @ 9:37am

Even If The Russian Troll Factory Abused Our Openness Against Us, That Doesn't Mean We Should Close Up

from the giving-them-what-they-wanted dept

Last week, we wrote about the Mueller indictment of 13 Russians and three Russian organizations for fraud in trying to sow discord among Americans and potentially influence the election by trolling them on social media. If you haven't read the indictment yet, I recommend doing so -- or at least reading Garrett Graff's impressive attempt at basically turning the indictment into one hell of a narrative story. The key point I raised in that article was that the efforts the Russians undertook to appear to be American show how difficult-to-impossible it would be to demand that the various internet platforms magically block such trolling attempts in the future.

But, there's a larger issue here that seems worth exploring as well. Among the various attacks aimed at social media companies (mainly Facebook), it seems that many are using this as yet another excuse to demand more regulation of these platforms or to poke more holes in Section 230 of the CDA.

We've already spent many posts explaining why undermining CDA 230 will do a lot more harm than good, but it seems worth especially highlighting how undermining it here in response to Russian attacks would only help the Russians accomplish what it is they've set out to do. CDA 230 is a key aspect of enabling free speech online. It's what allows platforms to host our speech without having to carefully review it before it's allowed, or take it down at the first sign of complaint (allowing a heckler's veto). This is tremendously important in making the internet a platform for everyone, as opposed to just the elite and connected. And, yes, with that comes serious challenges, because some people will inevitably seek to abuse that openness to try to turn us against each other (as appears to have happened here).

But it would be quite an "own goal" to turn around and dismantle the tools that enable free speech in response to foreign attacks.

As Julian Sanchez points out at the NY Times, the Russian government is annoyed by the US criticizing them for online censorship -- so pushing social media companies to censor more in the US would help the Russians point out what hypocrites the Americans are and continue to suppress opposing political points of view:

No less than our “meddling” in their internal elections, Russia has long resented United States criticism of the country’s repressive approach to online speech. Their use of online platforms to tamper with our presidential race reads not only as an attack, but as an implicit argument: “The freedoms you trumpet so loudly, your unwillingness to regulate political speech on the internet, your tolerance for anonymity — all these are weaknesses, which we’ll prove by exploiting them.”

Urgent as it is for the United States to take measures to prevent similar meddling in the next election, we should be careful that our response doesn’t constitute a tacit agreement.

I'm not one to believe that the Russians are such brilliant tacticians that they'd deliberately play the US into taking the exact response they want, but we should be quite careful about undermining our own freedoms and our own services just because some people were able to exploit them. Not only does it harm our own society in the long run, it also gives lots of repressive regimes (including, but in no way limited to, Russia) an explicit precedent to point to as they push much greater suppression of free speech.


Posted on Techdirt - 21 February 2018 @ 3:31am

German Court Says Facebook's Real Names Policy Violates Users' Privacy

from the really? dept

With more and more people attacking online trolls, one common refrain is that we should do away with anonymity online. There's this false belief that forcing everyone to use their "real name" online will somehow stop trolling and create better behavior. Of course, at the very same time, lots of people seem to be blaming online social media platforms for nefarious and trollish activity, including "fake news." And Facebook is a prime target -- which is a bit ironic, given that Facebook already has a "real names" policy. On Facebook you're not allowed to use a pseudonym, but are expected to use your real name. And yet, trolling still takes place. Indeed, as we've written for the better part of a decade, the focus on attacking anonymity online is misplaced. We think that platforms like Facebook and Google that use a real names policy are making a mistake, because enabling anonymous or pseudonymous speech is quite important in enabling people to speak freely on a variety of subjects. Separately, as studies have shown, forcing people to use real names doesn't stop anti-social behavior.

All that is background for an interesting, and possibly surprising, ruling in a local German court, finding that Facebook's real names policy violates local data protection rules. I can't read the original ruling since my understanding of German is quite limited -- but it appears to have found that requiring real names is "a covert way" of obtaining someone's name which raises questions for privacy and data protection. The case was brought by VZBZ, which is the Federation of German Consumer Organizations. Facebook says it will appeal the ruling, so it's hardly final.

On the flip side, VZBZ is also appealing a part of the ruling that it lost. It had also claimed that it was misleading for Facebook to say that its service was "free" since users "pay" with their "data." The court didn't find that convincing.

It will certainly be interesting to see where the courts come out on this after the appeals process runs its course. As stated above, I think the real names policy is silly and those insisting that it's necessary are confused both about the importance of anonymity and the impact of real names on trollish behavior. However, I also think that should be a choice that Facebook gets to make on its own concerning how it runs its platform. So I'm troubled by the idea that a government can come in and tell a company that it can't require a real name to use its service. If people don't want to supply Facebook with their real name... don't use Facebook.

But, honestly, what's really perplexing is that this is all coming down at the same time that Germany -- especially -- has been trying to crack down on any "bad content" appearing on Facebook, demanding that Facebook wave a magic wand and stop all bad behavior from appearing on its site. I'd imagine that's significantly harder if it has to allow people to use the site anonymously. This is not to say that anonymity leads to more "bad" content (see above), but it certainly can make moderating users much more difficult for a platform.

So, if you're Facebook, at this point you have to wonder what you need to do to keep the service running in Germany without upsetting officials. You can't let anything bad happen on the platform, and you can't get users' names. It increasingly seems that Germany wants Facebook to just magically "only allow good stuff" no matter how impossible that might be.


Posted on Techdirt - 20 February 2018 @ 10:48am

Wired's Big Cover Story On Facebook Gets Key Legal Point Totally Backwards, Demonstrating Why CDA 230 Is Actually Important

from the bad-reporting dept

If you haven't read it yet, I highly recommend reading the latest Wired cover story by Nicholas Thompson and Fred Vogelstein, detailing the past two years at Facebook and how the company has struggled to come to grips with the fact that its platform can be used by people to do great harm (such as sowing discontent and influencing elections). It's a good read that is deeply reported (by two excellent reporters), and it has some great anecdotes, including the belief that an investigation into Facebook a decade ago by then Connecticut Attorney General Richard Blumenthal was really an astroturfing campaign by MySpace:

Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.”

That's a pretty amazing story, which certainly could be true. After all, just a few years later there was the famous NY Times article about how companies were courting state Attorneys General to attack their competitors (which later came up again, when the MPAA -- after reading that NY Times article -- decided to use that strategy to go after Google). And Blumenthal had a long history as Attorney General of grandstanding about tech companies.

But, for all the fascinating reporting in the piece, what's troubling is that Thompson and Vogelstein get some very basic facts wrong -- and, unfortunately, one of those basic facts is a core peg used to hold up the story. Specifically, the article incorrectly points to Section 230 of the Communications Decency Act as being a major hindrance to Facebook improving its platform. Here's how the law is incorrectly described, in a longer paragraph explaining why Facebook "ignored" the "problem" of "fake news" (scare quotes on purpose):

And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

That's... wrong. I mean, it's not just wrong by degree, it's flat out, totally and completely wrong. It's wrong to the point that you have to wonder if Wired's fact checkers decided to just skip it, even though it's a fundamental claim in the story.

Indeed, the whole point of CDA 230 is exactly the opposite of what the article claims. As you can read yourself, if you look at the law, it specifically encourages platforms to moderate the content they host by saying that the moderation choices they make do not impact their liability. This is the very core point of CDA 230:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

This is the "good samaritan clause" of the CDA 230 and it's encouraging platforms like Facebook to "take responsibility for fake news" by saying that no matter what choices it makes, it won't make Facebook liable for looking at the content. Changing CDA 230 as many people are trying to do right now is what would create incentives for Facebook to put its head in the sand.

And yet, Thompson and Vogelstein repeat this false claim:

But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

This one is half right, but half misleading. It's true -- under the Roommates case -- that if Facebook creates content that breaks the law, then it remains liable for that content. But it does not lose its immunity merely for editing or moderating content on its platform, as that sentence implies.

Indeed, this is a big part of the problem we have with the ongoing debates around CDA 230. So many people insist that CDA 230 incentivizes platforms to "do nothing" or "look the other way" or, as Wired erroneously reports, to "put their head in the sand." But that's not true at all. CDA 230 not only enables, but encourages, platforms to be more active moderators by making it clear that the choices they make concerning moderating content (outside the context of copyright -- which uses a whole different set of rules), don't create new liability for them. That's why so many platforms are trying so many different things (as we recently explored in our series of stories on content moderation by internet platforms).

What's really troubling about this is that people are going to use the Wired cover story as yet another argument for doing away with (or at least punching giant holes in) CDA 230. They'll argue that we need to make changes to encourage companies like Facebook not to ignore the bad behavior on their platform. But the real lesson of the story -- which should have come out if the reporting were more carefully done -- is that CDA 230 is exactly what encourages that behavior. Facebook is able and willing to change and experiment in response to increasing public pressure only because CDA 230 gives the company the freedom to do so. Adding liability for wrong decisions is what would actually make the problem worse, and would encourage platforms like Facebook to do less.

It's tragic that in such a high profile, carefully reported story, a key part of it -- indeed, a part on which much of the story itself hinges -- is simply, factually, wrong.


Posted on Techdirt - 16 February 2018 @ 2:43pm

DOJ Russia Indictment Again Highlights Why Internet Companies Can't Just Wave A Magic Wand To Make Bad Stuff Go Away

from the troll-troll-troll-troll dept

As you've certainly heard by now, earlier today the Justice Department announced that it had indicted thirteen Russian individuals and three Russian organizations for various crimes related to trying to influence the US election. You should read the full indictment if you haven't already. Not surprisingly it focuses on the infamous Internet Research Agency (IRA), which was the giant Russian online trolling operation that we've discussed going back to 2015.

While many are trying to position the indictment as a "significant" bit of news, I have to admit to being a bit underwhelmed. It really does not reveal much that wasn't already widely known. It's been widely reported that the Russians had an interest in disrupting our democracy and sowing discord, including setting up and pushing competing rallies from different political sides, and generally stoking fires of distrust and anger in America. And... the indictment seems to repeat much of what has already been reported. Furthermore, this indictment actually reminds me quite a bit of a similar indictment four years ago against various Chinese officials for "hacking" crimes against the US. As we noted then, indicting the Chinese -- who the US would never be able to arrest anyway -- just seemed to be a publicity stunt that had the potential to come back to haunt the US. It kinda feels the same here.

What is interesting to me, however, is that the indictment also demonstrates why all the hand-wringing against Facebook, Twitter and Google seems kind of misplaced. For months we've been seeing big articles and Congressional hearings questioning why the platforms allowed the Russians to use their services as propaganda tools -- even getting the companies to recently send out (useless, confusing) announcements to people about whether or not they saw or reposted Russian troll propaganda. But what the indictment makes pretty clear is that the Russians made it nearly impossible for an internet service to ferret them out. The money used was spread out among many different banks and laundered through various means to make it more difficult to trace back. And it details just how far the trolls went to appear to be Americans, including traveling to the US and posing as Americans online to talk to actual US activists and push them in certain directions. And, of course, confusing the internet platforms into thinking they were Americans:

ORGANIZATION employees, referred to as "specialists," were tasked to create social media accounts that appeared to be operated by U.S. persons. The Specialists were divided into day-shift and night-shift hours and instructed to make posts in accordance with the appropriate U.S. time zone. The ORGANIZATION also circulated lists of U.S. holidays so that specialists could develop and post appropriate account activity. Specialists were instructed to write about topics germane to the United States such as U.S. foreign policy and U.S. economic issues. Specialists were directed to create "political intensity through supporting radical groups, users dissatisfied with [the] social and economic situation and oppositional social movements."

Defendants and their co-conspirators also created thematic group pages on social media sites, particularly on the social media platforms Facebook and Instagram. ORGANIZATION- controlled pages addressed a range of issues, including: immigration (with group names including "Secured Borders"); the Black Lives Matter movement (with group names including "Blacktivist"); religion (with group names including "United Muslims of America" and "Army of Jesus"); and certain geographic regions within the United States (with group names including "South United" and "Heart of Texas"). By 2016, the size of many groups had grown to hundreds of thousands of online followers.

Most of those groups (if not all?) had previously been revealed by the platforms or by news reports. But the extent to which the Russians went to cover their trails is more revealing.

To hide their Russian identities and ORGANIZATION affiliation, Defendants and their co-conspirators--particularly POLOZOV and the IT department--purchased space on computer servers located inside the United States in order to set up virtual private networks ("VPNs"). Defendants and their co-conspirators connected from Russia to the U.S.-based infrastructure by way of these VPNs and conducted activity inside the United States--including accessing online social media accounts, opening new accounts, and communicating with real U.S. persons--while masking the Russian origin and control of the activity.

Defendants and their co-conspirators also registered and controlled hundreds of web-based email accounts hosted by U.S. email providers under false names so as to appear to be U.S. persons and groups. From these accounts, Defendants and their co-conspirators registered or linked to online social media accounts in order to monitor them; posed as U.S. persons when requesting assistance from real U.S. persons; contacted media outlets in order to promote activities inside the United States; and conducted other operations, such as those set forth below.

Use of Stolen U.S. Identities

In or around 2016, Defendants and their co-conspirators also used, possessed, and transferred, without lawful authority, the social security numbers and dates of birth of real U.S. persons without those persons' knowledge or consent. Using these means of identification, Defendants and their co-conspirators opened accounts at PayPal, a digital payment service provider; created false means of identification, including fake driver's licenses; and posted on social media accounts using the identities of these U.S. victims. Defendants and their co-conspirators also obtained, and attempted to obtain, false identification documents to use as proof of identity in connection with maintaining accounts and purchasing advertisements on social media sites.

This was not just some run-of-the-mill "pretend to be Americans," this was a hugely involved process to make it very difficult to determine that they were not Americans.

I've seen some people online claiming that this shows why the platforms have to take more responsibility for who is using their platforms.

But my read on it is exactly the opposite. It shows just how ridiculous such a demand is. Would any of us be using these various services if we were all forced to go through a detailed background check just to use a social media platform? That seems excessive and silly. Part of the reason why these platforms are so useful and powerful in the first place is that they're available for nearly everyone to use with few hurdles in the way. That obviously has negative consequences -- in the form of trolling and scams and malicious behavior -- but there's also a ton of really good stuff that has come out of it.

We should be pretty cautious before we throw away all of the value of these platforms just because some people used them for nefarious purposes. People are always going to be able to hide their true intentions from the various platforms -- and the response to that shouldn't be "put more blame on the platforms" -- it should be a recognition of why it's so silly to blame the tools and services for the actions of the users.

Yes, we should be concerned about foreign attempts to influence our elections (while noting that the US, itself, has a long history of doing the same damn thing in other countries -- so this is a bit of blowback). But blaming the technology platforms the Russians used seems to be totally missing the point of what happened -- and risks making the internet much worse for everyone else.


Posted on Techdirt - 16 February 2018 @ 10:41am

Everyone Creates: New Empirical Data Shows Just How Much The Internet Has Enabled A New Creative Economy

from the it's-not-tech-v.-hollywood dept

Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »

Just last week we announced our new site EveryoneCreates.org, in which we showcase stories of people who rely on the open internet and various internet platforms to create artwork of all kinds -- from music to books to movies to photographs and more. It appears that we're not the only ones to be thinking about this. The Re:Create coalition has just now released some fantastic economic research about the large and growing population of people who use internet platforms to create and to make money from their creations. It fits right in with the point that we made: contrary to the claims of the RIAA, the MPAA and their front groups like "Creative Future," the internet is not harming creators; it's enabling them by the millions (and allowing them to make much more money as well).

Indeed, the report almost certainly significantly undercounts the number of content creators making money on the internet these days, as it only explores nine platforms: Amazon Publishing, eBay, Etsy, Instagram, Shapeways, Tumblr, Twitch, WordPress and YouTube. Those are all great, and probably cover a decent subset of creators and how they make money -- but that list leaves off tons of others, including Kickstarter, Patreon, IndieGogo, Wattpad, Bandcamp, Apple, Spotify and many other platforms that have increasingly become central to the way in which creators make their money. Still, even with this smaller subset of creative platforms, the study is impressive.

14.8 million people used those platforms to earn approximately $5.9 billion in 2016.

Let's repeat that. The internet -- which some legacy entertainment types keep insisting is "killing" content creators and making it "impossible" to make money -- enabled nearly 15 million people to earn nearly $6 billion in 2016. And, again, that doesn't even include things like Kickstarter or Patreon (in 2016 alone, Kickstarter had $580 million in pledges...). In short, just as we've been saying for years: for those who rely on the old legacy gatekeeper system -- waiting until you're "discovered" by a label/studio/publisher and then hoping they'll do all the work to make you rich and famous -- things may be a bit more difficult these days. But, for actual creators, today is an astounding, unprecedented period of opportunity.

This does not mean that everyone discussed here is making a full-time living. Indeed, the report notes clearly that many people are using these platforms to supplement their revenue. But they're still creating and they're still making money off of their creations -- something that would have been nearly impossible not too long ago. And, just as the report likely undercounts the size of this economy by missing some key platforms, it also misses additional revenue streams related to the platforms it did count:

It is impossible to determine an average income for members of the new creative economy, because earnings vary so widely for each platform. As previously stated, this analysis includes only a single source of income for each of the nine platforms. For instance, based on the current data, we include a YouTube star’s earnings from YouTube but not revenues as influencers or advertisements on other social media platforms.

Also interesting is how the report found that creators are spread all over the US. While California, New York and Texas have the most creators, even those with the "smallest" numbers of creators (Wyoming and the Dakotas) still had tens of thousands of people using these platforms to make money. And, yes, in case you're wondering, the study excluded big time stars like Kim Kardashian using platforms like Instagram to make money, focusing instead on truly independent creators.

This is especially important, as it's coming at a time when the RIAA, MPAA and their friends continue their nonsensical claims that these very same internet platforms are somehow "harming" content creators, and that laws need to change to make it harder for everyday people to use these platforms to express their artwork and to make money off of it. It's almost as if those legacy gatekeepers don't like the competition or the fact that people are realizing they don't need to work with a gatekeeper to create and to make money these days.

So, once again, it's time to dump the ridiculous myth of "tech v. content." That framing isn't true at all. As this report shows, these tech platforms have enabled many millions of people to earn billions of dollars -- something that's only possible because they're open platforms that get past the old gatekeeper system.

Share your story at EveryoneCreates.org to let policymakers know how important an open internet and fair use is to your own creativity.


Posted on Techdirt - 16 February 2018 @ 9:24am

Terrible Copyright Ruling Over An Embedded Tweet Undermines Key Concept Of How The Internet Works

from the this-is-bad dept

Just earlier this week we noted that a judge easily laughed Playboy's silly lawsuit out of court because merely linking to infringing content is not infringing itself. But a judge in New York, Judge Katherine Forrest, has ruled on a different case in a manner that is quite concerning, which goes against many other court rulings, and basically puts some fundamental concepts of how the internet works at risk. It's pretty bad. In short, she has ruled that merely embedding content from another site can be deemed infringing even if the new site is not hosting the content at all. This is wrong legally and technically, and hopefully this ruling will get overturned on appeal. But let's dig into the details.

The case involved a photographer, Justin Goldman, who took a photograph of quarterback Tom Brady and posted it to Snapchat. Somehow that image made its way from Snapchat to Reddit to Twitter. The photo went a bit viral, and a bunch of news organizations used Twitter's embed feature to show the tweet and the image. Goldman sued basically all the news publications that embedded the tweet -- including Breitbart, Vox, Yahoo, Gannett, the Boston Globe, Time and more. Now, multiple different courts around the country have said why this should not be seen as infringing by these publications. It's generally referred to as "the server test" -- in which, to be a direct infringer, you have to host the image yourself. This makes sense at both a technical and legal level because "embedding" an image is no different technically than linking to an image. It is literally the same thing -- you put in a piece of code that points the end user's computer to an image. The server at no point hosts or displays the image -- it is only the end user's computer that does. In the 9th Circuit, the various Perfect 10 cases have established the server test, and other courts have adopted it or similar concepts. In the 7th Circuit there was the famous Flava Works case, where Judge Posner seemed almost annoyed that anyone could think that merely embedding infringing content could be deemed infringing.
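To make that technical point concrete, here is a minimal sketch in Python of what an embed actually involves (the article markup and image URL below are hypothetical placeholders, not the actual pages at issue in the case). The publisher's server sends only markup containing a pointer to the image; the image bytes themselves travel directly from the image host to the reader's browser, never passing through the publisher's infrastructure.

import urllib.request

# What the publisher's server actually sends: markup containing a URL,
# not the image bytes themselves. (Hypothetical example markup.)
ARTICLE_HTML = '<p>Check out this photo:</p>\n<img src="https://pbs.twimg.com/media/EXAMPLE.jpg">'

def render(html: str) -> None:
    # Crude stand-in for what the reader's browser does with that markup.
    for line in html.splitlines():
        if 'src="' in line:
            image_url = line.split('src="')[1].split('"')[0]
            # This second request goes directly to the image host; the
            # publisher never stores or transmits the image bytes.
            image_bytes = urllib.request.urlopen(image_url).read()
            print(f"Fetched {len(image_bytes)} bytes directly from {image_url}")

render(ARTICLE_HTML)

Under the server test, it's the host that actually stores and transmits those bytes (here, Twitter) whose conduct implicates the display right, not the publisher whose page merely contains the pointer -- and that is the distinction Forrest's ruling discards.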

But Judge Forrest has decided to carve a new path on this issue in Southern New York, teeing up (hopefully) an opportunity for the 2nd Circuit to tell her why she's wrong. Even more troubling, she actually relies on the awful Aereo "looks like a duck" test to come to this conclusion. Let's dig into her reasoning. The key issue here is the exclusive right to "display" a work, set out in Section 106(5) of the Copyright Act.

It's also important to note that this ruling is just at the summary judgment stage, and doesn't mean that the various publications will be found to have infringed -- it just means that the court is letting the case go forward, so the various publications might now raise various defenses as to why their embedding is not infringing. It's still concerning, because given the "server test" in other jurisdictions, such a case would easily be tossed on a motion to dismiss or summary judgment, since there's no legitimate claim of copyright infringement if no direct infringement can be shown. But here, Judge Forrest argues that because an embed leads an end user's computer to display an image, that somehow makes the publisher who included the embed code possibly liable for infringing the display right. Because it looks like a duck.

This is not a new issue by any means. I found a story from over a decade ago in which I warned that we'd see a lot more stupid lawsuits about embedding content from platforms, and have to admit I'm a bit surprised we haven't seen more. That's almost certainly because of many courts' reliance on the server test, which led most would-be plaintiffs to realize such an argument was a non-starter. Until now.

Forrest basically says that even though the image never touches the publisher's server, and the only thing the publisher is doing is linking to an image in a manner that makes the end-user's browser grab that image from another location and display it, it still counts as infringement -- because of the Aereo ruling. If you don't recall, Aereo involved a creative (if technically stupid) method for streaming over-the-air broadcast TV to users by setting up many local antennas that were legally allowed to receive the signals, and then transmitting them over the internet (which is also legal). But, the Supreme Court came up with a brand new test for why that's not allowed -- which we've called the "looks like a duck" test. The ruling found that because Aereo kinda looked like cable to the end user, the technical rigamarole in the background to make it legal simply doesn't matter -- all that matters is how things looked to the end user. Forrest argues the same is true here:

Moreover, though the Supreme Court has only weighed in obliquely on the issue, its language in Aereo is instructive. At heart, the Court’s holding eschewed the notion that Aereo should be absolved of liability based upon purely technical distinctions—in the end, Aereo was held to have transmitted the performances, despite its argument that it was the user clicking a button, and not any volitional act of Aereo itself, that did the performing. The language the Court used there to describe invisible technological details applies equally well here: “This difference means nothing to the subscriber. It means nothing to the broadcaster. We do not see how this single difference, invisible to subscriber and broadcaster alike, could transform a system that is for all practical purposes a traditional cable system into a ‘copy shop that provides patrons with a library card.’”

We were worried about the wider impact of the Aereo "duck" test -- and people told us it wasn't that big a deal. Indeed, until this ruling, Aereo hasn't been (successfully) cited very often. Many thought that the very specific nature of Aereo might limit that precedent to a very specific situation involving cable TV. This ruling suggests that the silly "duck" test may be spreading. And that's bad, because it's based on ignoring what's actually happening at the technological level, in which the technology may be designed specifically to not violate any of the exclusive rights of copyright law.

Also, it should worry people greatly that courts are using this "we don't care about what's actually happening, we just care what it looks like" standard for judging infringement. Because to infringe on a copyright requires a very specific set of facts. And here (as with Aereo) the court is saying "we don't care about whether or not it actually violates one of the exclusive rights granted under copyright, we only care if it looks like it infringes." That's... a huge change in the law, and it's not at all how copyright law has been judged in the past. It can and will be used to hamstring, limit, or destroy all sorts of unique and useful technological innovations.

Forrest also tries to distinguish this ruling from the Perfect 10 cases and the Flava Works case -- even admitting that other courts within the 2nd Circuit have used the server test. But, she says, they were all different -- doing things like only using the server test for the distribution right, but not the display right, or not really endorsing the server test and ruling on other grounds.

Forrest also points to a trademark case that involved an embedded image which was found to be infringing -- but that's entirely different. The rules for trademark infringement are completely different than the exclusive rights related to copyright. With trademark, it's not as specific, and the use of someone else's logo broadly (as happened in the case cited) could easily be infringing on the trademark, but that doesn't get to the copyright question which involves much more carefully limited rights.

But, most troubling of all, Forrest argues that the server test... is just wrong:

The Court declines defendants’ invitation to apply Perfect 10’s Server Test for two reasons. First, this Court is skeptical that Perfect 10 correctly interprets the display right of the Copyright Act. As stated above, this Court finds no indication in the text or legislative history of the Act that possessing a copy of an infringing image is a prerequisite to displaying it. The Ninth Circuit’s analysis hinged, however, on making a “copy” of the image to be displayed—which copy would be stored on the server. It stated that its holding did not “erroneously collapse the display right in section 106(5) into the reproduction right in 106(1).” Perfect 10 II, 508 F.3d at 1161. But indeed, that appears to be exactly what was done.

The Copyright Act, however, provides several clues that this is not what was intended. In several distinct parts of the Act, it contemplates infringers who would not be in possession of copies—for example in Section 110(5)(A) which exempts “small commercial establishments whose proprietors merely bring onto their premises standard radio or television equipment and turn it on for their customer’s enjoyment” from liability. H.R. Rep. No. 94-1476 at 87 (1976). That these establishments require an exemption, despite the fact that to turn on the radio or television is not to make or store a copy, is strong evidence that a copy need not be made in order to display an image.

Except... that's still very different. That's still a case where the "small commercial establishments" are showing the work. In this case -- and the very reason why the server test is so important -- the content in question is never on the publisher's premises or server. It only appears on the end user's browser, because that browser goes and fetches it.
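
To make the mechanics concrete, here's a minimal sketch of what an embedding publisher actually serves, and where the image bytes actually come from. This is purely illustrative -- the markup and URL below are hypothetical, not taken from the case record:

```python
# Purely illustrative -- hypothetical markup and URL, not from the case record.
# An "embed" is just an instruction to the reader's browser. The publisher's
# server sends HTML containing a remote address; the image bytes themselves
# travel from the host (e.g. Twitter's servers) straight to the reader.

PUBLISHER_PAGE = """
<article>
  <p>Story text written and hosted by the publisher...</p>
  <img src="https://pbs.twimg.com/media/EXAMPLE_PHOTO.jpg">
</article>
"""

def simulate_page_load():
    # Request 1: browser -> publisher. The response contains no image data,
    # only the image's address.
    html = PUBLISHER_PAGE
    # Request 2: browser -> the host named in the src attribute. Only this
    # request transfers the photo, and it is answered by the host's servers.
    remote_src = html.split('src="')[1].split('"')[0]
    print("Publisher served", len(html), "bytes of HTML and zero image bytes")
    print("Browser fetches the image directly from:", remote_src)

if __name__ == "__main__":
    simulate_page_load()
```

The publisher's server never stores or transmits the photo; it just tells the reader's browser where to find it -- which is the whole distinction the server test turns on.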

Even more bizarre, Forrest argues that Perfect 10 and the server test are different because the image is displayed on the end user's computer:

In addition, the role of the user was paramount in the Perfect 10 case—the district court found that users who view the full-size images “after clicking on one of the thumbnails” are “engaged in a direct connection with third-party websites, which are themselves responsible for transferring content.” Perfect 10 I, 416 F. Supp. 2d at 843.

In this Court’s view, these distinctions are critical.

While this doesn't involve the end user "clicking" first to get the display, it's really no different. It is the end user who has the allegedly infringing content displayed on their computer, not the publisher. A direct connection is made between the end user and the hosting provider (in this case Twitter). The publisher never touches the actual content. Yet, Forrest argues that they can be direct infringers.

That's... wrong.

Despite the fact that EFF and others warned the court that this ruling would massively upset the way the internet works, Forrest doesn't seem to believe them (or care)... because maybe fair use will protect people.

The Court does not view the results of its decision as having such dire consequences. Certainly, given a number of as of yet unresolved strong defenses to liability separate from this issue, numerous viable claims should not follow.

In this case, there are genuine questions about whether plaintiff effectively released his image into the public domain when he posted it to his Snapchat account. Indeed, in many cases there are likely to be factual questions as to licensing and authorization. There is also a very serious and strong fair use defense, a defense under the Digital Millennium Copyright Act, and limitations on damages from innocent infringement.

That's... also wrong. Yes, publishers may be protected by fair use or other defenses. But fair use is much harder to get a ruling on at an early (summary judgment) stage of a case (a few courts are starting to allow it, but it's not all that common). Having the server test be good law would prevent a flood of these kinds of cases from being filed in the first place. Without it, people can troll media sites that embed tweets, dragging them into long and costly litigation even when they have strong fair use defenses. Also, the reference above to releasing the image "into the public domain" is nonsensical. No one is arguing that the image was in the public domain. It is clearly covered by copyright.

Given what a total and complete mess this ruling will cause on the internet should it stand, I fully expect a robust appeal. The 2nd circuit can be a mixed bag on copyright, but often does a pretty good job in the end. One hopes that the 2nd circuit reverses this ruling, endorses the server test, and keeps the internet working as it was designed -- where embedding and linking to content doesn't magically make one liable for infringement.


Posted on Techdirt - 16 February 2018 @ 3:18am

Top ICE Lawyer Accused Of Identity Fraud Against Detained Immigrants

from the such-a-lovely-organization dept

For many, many years we've questioned the bizarre lawless nature of ICE -- Immigration and Customs Enforcement -- going back to the days when it was illegally seizing blogs, based on false claims of copyright infringement. We questioned what ICE had to do with censoring blogs in the first place. Of course, in the last year, ICE has been getting a lot more negative attention for something that is clearly under its purview: enforcement of immigration laws. Specifically, ICE has been almost gleefully demonstrating how they are thuggish bullies who are eager to deport as many people as possible. It's disgusting and inhumane -- and if you're going to be one of those people who pop up in our comments to say something ignorant about how if someone is here illegally they have no rights and should be booted as quickly as possible, go somewhere else to spout your nonsense. Also, seriously: take stock of your own priorities and look deeply at why you are so focused on destroying the lives of people who are almost certainly less well off and less privileged than you are, and who are seeking a better way of life.

But ICE's violent, gleeful thuggery seems to come easy to the organization -- and thus it should be little surprise that one of ICE's top lawyers has been charged with identity fraud and wire fraud for trying to use the identities of at least seven immigrants who were being processed by ICE. The indictment against Raphael Sanchez, the chief counsel for ICE in Seattle, is quite a read.

Beginning in or about October, 2013, and continuing until on or about October 25, 2017, in the Western District of Washington, the defendant,

RAPHAEL A. SANCHEZ,

devised and intended to devise a scheme and artifice to defraud financial institutions, including American Express Company, Bank of America Corporation, Capital One Financial Corporation, Citibank, Discover Financial Services, and JPMorgan Chase Co., by using the personally identifying information of seven aliens in various stages of immigration proceedings with the United States Immigrations and Customs Enforcement to obtain money and property by means of materially false and fraudulent pretenses, representations, and promises, and in doing so, transmitted and caused to be transmitted by means of wire communications in interstate or foreign commerce, writings, signals, and email communications for the purpose of executing such scheme and artifice to defraud; including but not limited to the following email that SANCHEZ caused to be sent via interstate wires:

April 18, 2016: Email message sent from Raphael.Sanchez@ice.dhs.gov to Raphael.Sanchez@ice.dhs.gov and Raphael_sanchez@yahoo.com, containing a Puget Sound Energy bill addressed to R.H. for service at 3516 South Webster Street #A Seattle, Washington, and an image of a United States permanent resident card and the biographical page of a Chinese passport issued to R.H., originating in Washington and utilizing email servers in West Virginia and Mississippi.

That's the wire fraud part. The identity fraud part includes:

On or about July 5, 2016, in the Western District of Washington, the defendant,

RAPHAEL A. SANCHEZ,

did knowingly transfer, possess, and use, without lawful authority, a means of identification of another person, including the name, Social Security number, and birth date of R.H., a real person, during and in relation to a felony violation enumerated in 18 U.S.C. § 1028A(c), to wit, wire fraud in violation of 18 U.S.C. § 1343, as charged in Count One of this Information, in violation of 18 U.S.C. § 1028A(a)(1).

I assume as the case against Sanchez moves forward, more details will come out about what exactly happened here. But, remember, this is at the very same time as ICE is asking to be reclassified from a law enforcement agency to an intelligence agency, giving it much greater access to surveillance data -- without a warrant. Just imagine the kinds of identity fraud ICE lawyers could pull off with that access....


Posted on Techdirt - 15 February 2018 @ 1:34pm

Court Shakes Off Dumb Copyright Lawsuit Against Taylor Swift

from the lawyers-gonna-lawyer,-judges-gonna-judge dept

For an industry that talks up how important copyright law is, it's fairly astounding how frequently there are really dumb lawsuits filed between musicians. Lately, because of the ridiculous "Blurred Lines" verdict, there have been tons of lawsuits filed over "sounds like" songs, or even "inspired by" songs, as lawyers (and some musicians) see a chance to cash in on the actual success of others. But we've also seen a bunch of really dumb lawsuits filed over the use of similar phrases. A few years ago there was the case where Rick Ross sued LMFAO because they had the line "Everyday I'm shufflin'" in a song that he claimed was infringing his "Everyday I'm hustlin'." The court was not impressed.

Last year a similar case was filed (which I'd meant to write about when it was filed, but a million other things got in the way), in which Sean Hall sued Taylor Swift claiming that her lyrics in "Shake it Off" were similar to a song he wrote called "Playas Gon' Play." The songs themselves were not similar, but both used lines about how "playas gonna play" and "haters gonna hate" (though not even exactly in the same way). Thankfully, once again, the court hearing the case is not at all impressed:

The allegedly infringed lyrics are short phrases that lack the modicum of originality and creativity required for copyright protection. Accordingly, if there was copying, it was only of unprotected elements of Playas Gon’ Play.

This is pretty core, basic copyright 101 stuff. Copyright does not attach to short phrases that don't have any originality or creativity. Indeed, while the judge, Michael Fitzgerald, will allow Hall to try again with an amended complaint, he makes it clear that he sees little likelihood of success and hints strongly that trying again could lead to sanctions against the lawyer:

While the Court is extremely skeptical that Plaintiffs will – in a manner consistent with Rule 11 – be able to rehabilitate their copyright infringement claim in an amended complaint, out of an abundance of forbearance it will give Plaintiffs a single opportunity to try. Any future dismissal will be without leave to amend.

The mention of "Rule 11" is significant, because that's the rule that establishes how lawyers are expected to act in court, and allows for sanctioning of lawyers who don't follow it. Saying explicitly that the court doesn't see how Hall's lawyers can be "consistent with Rule 11" in any refiling is basically saying, "Not only do I not think you have a case, this case is so dumb that you lawyers may have to pay up for filing such a frivolous lawsuit."

And that doesn't even touch on the fact that, in copyright cases, it's much easier for the prevailing party to have its legal fees paid by whoever filed the silly lawsuit. Even if Rule 11 isn't used against the lawyers, the court could still order Hall to pay Swift's legal fees for this silly case. Indeed, as the ruling points out in great detail, this is a very silly case.

As reflected in Defendants’ RJN, and as Plaintiffs acknowledge, by 2001, American popular culture was heavily steeped in the concepts of players, haters, and player haters. Although Plaintiffs recognize as much, they allege that they “originated the linguistic combination of playas/players playing along with hatas/haters hating…” .... Plaintiffs explain that the plethora of prior works that incorporated “the terms ‘playa’ and hater together all revolve about the concept of ‘playa haters’” – a “playa” being “one who is successful at courting women,” and a “playa hater” being “one who is notably jealous of the ‘playas’” success.” .... Plaintiffs explain that Playas Gon’ Play “used the terms in the context of a third party, the narrator of a song who is neither a ‘playa’ nor a hater, stating that other people will do what they will and positively affirming that they won’t let the judgment of others affect them.”...

The concept of actors acting in accordance with their essential nature is not at all creative; it is banal. In the early 2000s, popular culture was adequately suffused with the concepts of players and haters to render the phrases “playas … gonna play” or “haters … gonna hate,” standing on their own, no more creative than “runners gonna run,” “drummers gonna drum,” or “swimmers gonna swim.” Plaintiffs therefore hinge their creativity argument, and their entire case, on the notion that the combination of “playas, they gonna play” and “haters, they gonna hate” is sufficiently creative to warrant copyright protection. ...

“It is true, of course, that a combination of unprotectable elements may qualify for copyright protection… But it is not true that any combination of unprotectable elements is eligible for copyright protection… [A] combination of unprotectable elements is eligible for copyright protection only if those elements are numerous enough and their selection and arrangement original enough that their combination constitutes an original work of authorship.” Satava, 323 F.3d at 811 (internal citations omitted; emphasis in original).

Looking at this case from a combination-of-unprotected-elements perspective, Plaintiffs’ combination of “playas, they gonna play” and “haters, they gonna hate” – two elements that would not have been subject to copyright protection on their own – is not entitled to protection. See id. at 812 (“The combination of unprotectable elements in Satava’s sculpture falls short of this standard. The selection of clear glass, oblong shroud, bright colors, proportion, vertical orientation, and stereotyped jellyfish form, considered together, lacks the quantum of originality needed to merit copyright protection.”); Lamps Plus, Inc. v. Seattle Lighting Fixture Co., 345 F.3d 1140, 1147 (9th Cir. 2003) (“Lamps Plus’s mechanical combination of four preexisting ceiling-lamp elements with a preexisting table-lamp base did not result in the expression of an original work as required by § 101 [of the Copyright Act].”). Two unprotectable elements that, given pop culture at the time, were inextricably intertwined with one another, is not enough.

And the court concludes, again, with a warning that refiling an amended complaint is risky, as the court has trouble seeing how there's any chance of success:

In sum, the lyrics at issue – the only thing that Plaintiffs allege Defendants copied – are too brief, unoriginal, and uncreative to warrant protection under the Copyright Act. In light of the fact that the Court seemingly “has before it all that is necessary to make a comparison of the works in question,” Peter F. Gaito Architecture, 602 F.3d at 65, the Court is inclined to grant the Motion without leave to amend. However, out of an abundance of caution, the Court will allow Plaintiffs one opportunity to amend, just in case there are more similarities between Playas Gon’ Play and Shake it Off than Plaintiffs have alleged thus far (which Plaintiffs’ counsel did not suggest at the hearing). If there are not, the Court discourages actual amendment. The more efficient course would be for Plaintiffs to consent to judgment being entered against them so that they may pursue an appeal if they believe that is appropriate.


Posted on Techdirt - 15 February 2018 @ 9:33am

FBI Director Still Won't Say Which Encryption Experts Are Advising Him On His Bizarre Approach To Encryption

from the perhaps-there's-a-reason-he-won't-say... dept

For the past few months, we've talked about how FBI Director Chris Wray has more or less picked up where his predecessor, James Comey, left off when it came to the question of encryption and backdoors. Using a contextless, meaningless count of encrypted seized phones, Wray insists that not being able to get into any phone the FBI wants to get into is an "urgent public safety issue."

Of course, as basically every security expert has noted, the reverse is true. Weakening encryption in the manner that Wray is suggesting would create a much, much, much bigger safety issue in making us all less safe. Hell, even the FBI used to recommend strong encryption as a method to protect public safety.

Last month, we wrote about a letter sent by Senator Ron Wyden to Wray, simply asking him to list out the names of encryption experts that he had spoken to in coming to his conclusion that it was possible to create backdoors to encryption without putting everyone at risk.

I would like to learn more about how you arrived at and justify this ill-informed policy proposal. Please provide me with a list of the cryptographers with whom you've personally discussed this topic since our July 2017 meeting and specifically identify those experts who advised you that companies can feasibly design government access features into their products without weakening cybersecurity. Please provide this information by February 23, 2018.

Technically, Wray still has a week or so to answer, but earlier this week during an open Senate hearing involving the heads of various law enforcement and intelligence agencies, Wyden asked Wray when he might get that list and Wray sidestepped the question entirely, other than saying he'd discuss it later (in a closed session):

If you can't watch the video, here's my quick transcript (though I do recommend watching it just to see the smartass smirk on Wray's face through much of it).

Wyden: On encryption. Director Wray, as you know, this isn't a surprise because I indicated, I would ask you about this. You have essentially indicated that companies should be making their products with backdoors in order to allow you all to do your job. And we all want you to protect Americans and at the same time, sometimes there are these policies that make us less safe and give up our liberties. And that's what I think we get with what you all are advocating which is weak encryption. Now this is a pretty technical area, as you and I have talked about it. And there's a field known as cryptography. I don't pretend to be an expert on it. But I think there is a clear consensus among experts in the field against your position to weaken strong encryption. So I have asked you for a list of the experts that you have consulted. I haven't been able to get it. Can you give me a date this afternoon when you will give me... this morning, a sense of when we will be told who are these people who are advising you to pursue this route. Because I don't know of anybody who is respected in this field who is advising that it is a good idea to adopt your position to weaken strong encryption. So can I get that list?

Wray: I would be happy to talk more about this topic this afternoon. My position is not that we should weaken encryption. My position is that we should be working together -- the government and the private sector -- to try to find a solution that balances both concerns.

Wyden: I'm on the program for working together. I just think we need to be driven by objective facts, and the position you all are taking is out of sync with what all the experts in the field are saying and I'd just like to know who you all have been consulting, and we'll talk more about it this afternoon.

So, a few points on this. First, Wray doesn't answer the actual question of when he'll be giving Wyden a list, but rather suggests he'll discuss the topic in the closed session. But when he'll deliver his list of experts shouldn't be a classified piece of information. It's just a date. Second, Wray immediately misrepresents the issue by saying he's not asking to weaken encryption -- because he has to realize by now that that's exactly what he's asking for. If he doesn't recognize that, then it's clear he doesn't understand the first thing about how encryption actually works. Third, he's wrong to talk about "balancing both concerns." There's no balancing question here. It is not a "balance" between "security" and "civil liberties," as some keep trying to frame it. It is a choice between good security and bad security that makes everyone less safe (and, oh yes, also has the potential to violate civil liberties).
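
For anyone who still thinks there's a clever middle ground, here's a minimal sketch of what any "exceptional access" scheme boils down to. This is my own toy illustration using the off-the-shelf cryptography package -- the key names are hypothetical, and this is not any specific FBI proposal -- but it shows the core problem: once the same plaintext is also encrypted to a second, escrowed key, everyone's security depends on that one key never leaking.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Toy illustration only -- not any actual FBI or vendor proposal.
# Normal design: one key, held only by the user. Nobody else can read this.
user_key = Fernet.generate_key()
ciphertext = Fernet(user_key).encrypt(b"meet the source at 9pm")

# "Exceptional access" design: the same plaintext is also encrypted to an
# escrowed key (call it government_key -- a hypothetical name) so that a
# third party can read it on demand.
government_key = Fernet.generate_key()
escrow_copy = Fernet(government_key).encrypt(b"meet the source at 9pm")

# The security of EVERY message now rests on that one escrowed key never
# being stolen, leaked, or abused. Whoever gets it reads everything:
stolen_key = government_key
print(Fernet(stolen_key).decrypt(escrow_copy))  # b'meet the source at 9pm'
```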

It does not inspire confidence to watch Wray struggle to answer such a basic question and then totally misrepresent how this all works, even in a two-sentence answer.


Posted on Techdirt - 14 February 2018 @ 2:28pm

Judge Dismisses Playboy's Dumb Copyright Lawsuit Against BoingBoing

from the with-leave-to-amend dept

Well, that was incredibly quick. The district court judge hearing the case that Playboy filed against BoingBoing back in November has already dismissed it, though without prejudice, leaving it open for Playboy to try again. The judge noted that, given the facts before the court so far, it wasn't even necessary to hold a hearing, since BoingBoing was so clearly in the right and Playboy so clearly had no case. While the ruling does note that Playboy and its legal team can try again, it warns them that it's hard to see how there's a case here:

The court will grant defendant’s Motion and dismiss plaintiff’s First Amended Complaint... with leave to amend. In preparing the Second Amended Complaint, plaintiff shall carefully evaluate the contentions set forth in defendant’s Motion. For example, the court is skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement.... see Tarantino v. Gawker Media, LLC, 2014 WL 2434647, *3 (C.D. Cal. 2014) (“An allegation that a defendant merely provided the means to accomplish an infringing activity is insufficient to establish a claim for copyright infringement. Rather, liability exists if the defendant engages in personal conduct that encourages or assists the infringement.”) (internal citations omitted); Perfect 10, Inc. v. Giganews, Inc., 847 F.3d 657, 672 (9th Cir.), cert. denied, 138 S.Ct. 504 (2017) (“We have described the inducement theory as having four elements: (1) the distribution of a device or product, (2) acts of infringement, (3) an object of promoting its use to infringe copyright, and (4) causation.”) (internal quotation marks omitted).

It will be interesting to see what happens next. As we noted in our original post, the lawyers representing Playboy -- the firm of Doniger / Burroughs -- have been making every effort over the last year or so to move beyond their reputation as fabric copyright trolls, seeking out opportunities for high profile, if silly, cases, including "sounds like" music cases. Meanwhile, one of the two partners, Scott Burroughs, has busied himself over at Above the Law (which really should think more carefully about the lawyers it brings on as posters), writing increasingly silly things about copyright law -- including trying to argue that linking is infringing and that the EFF is wrong to argue it's not.

That article -- written about the same time that the BoingBoing lawsuit was filed -- looks particularly bad now that a court has rejected the same argument in a case in which Burroughs is listed as a lawyer for Playboy, and in which EFF helped write the Motion to Dismiss that said that Burroughs was wrong. Just days ago, another lawyer posting at Above the Law explained why Burroughs' own case had no chance (without mentioning Burroughs' own writings on the site).

I'm guessing that Playboy will file an amended complaint, though as we noted earlier, in copyright law, it's much easier to have legal fees awarded for filing frivolous cases, and as the quote above notes, the judge is "skeptical" that Playboy has any case at all.


Posted on Techdirt - 14 February 2018 @ 9:20am

Smart Meter Company Landis+Gyr Now Using Copyright To Try To Hide Public Records

from the what-they-don't-want-you-to-see dept

Back in 2016 we wrote about how Landis+Gyr, a large multinational company owned by Toshiba, completely freaked out when it discovered that documents about its smart energy meters, which the city of Seattle had contracted to use, were subject to a FOIA request. As we noted, Landis+Gyr went legal and did so in perhaps the nuttiest way possible. First it demanded the documents be taken down from Muckrock -- the platform that makes it easy for journalists and others to file FOIA requests. Then it demanded that Muckrock reveal the details of anyone who might have seen the documents in question. It then sued Muckrock and somehow got a court to issue a temporary restraining order (TRO) against Muckrock for posting these public records.

Eventually, with EFF stepping in to help Muckrock, Landis+Gyr agreed to a settlement stating that (a) these documents were public records and (b) the company would no longer attempt to take down the copies that Muckrock had obtained. From the settlement agreement posted on the public docket in the case:

Plaintiffs agree that they will take no further action against Defendants Mocek, Muckrock.com, and Morisy with respect to two public records previously released by the City to Muckrock.com on behalf of Mocek and automatically published on MuckRock.com.

This all ended in the summer of 2016. And, indeed, you can still find the documents hosted on Muckrock's website today. Here is the Managed Services Report 2015 and the Security Overview. Even though Landis+Gyr went to court over this and then agreed via its settlement that (1) these were public records that (2) could be left online, the company apparently doesn't want you reading them.

Last week, we received a notice from DocumentCloud -- which we use to host various documents as part of our reporting efforts -- that it had received a DMCA notice from lawyer Heather McNay of Landis+Gyr, demanding that it take down the copies of those very same public records that we had uploaded as part of our reporting on this story. It seems fairly clear to us that our posting of these public records as part of our reporting and commentary on a dispute created by Landis+Gyr itself was quintessential fair use for news reporting. And, of course, there are a number of court rulings in various jurisdictions noting that copyright law cannot be used to prohibit the copying of public records (notably, one of those cases involves a very similar situation: a public records request in Washington State).

Either way, given that Landis+Gyr has promised in its settlement with Muckrock not to take any actions at all against Muckrock for hosting these public records, we'll note the incredible futility of the company then sending DMCA notices targeting those same public records, and scratch our collective heads over what the company is thinking when all it's doing is reminding everyone that (1) these documents exist online and (2) apparently the company would prefer you not look at these public records about its own systems.


Posted on Techdirt - 13 February 2018 @ 3:29pm

Cloudflare Gets An Easy, Quick And Complete Win Over Patent Troll

from the good-news dept

Last year, we wrote about how a relatively new patent trolling operation had pretty clearly picked the wrong target in suing internet infrastructure provider Cloudflare with a sketchy patent (US Patent 6,453,335 on "providing an internet third party data channel.") Cloudflare decided not only to fight the case, but to fight all of Blackbird's patents, crowdsourcing and funding searches into prior art on any patent held by Blackbird Technologies, and arguing that the company was engaging in questionable legal practices -- acting both as a patent holding company and a law firm, while sometimes pretending not to be a law firm (despite employing mostly lawyers) to avoid some serious ethics questions.

On Monday, Cloudflare received a fairly complete victory, with the judge easily dismissing the case and pointing out that the '335 patent was clearly invalid:

Abstract ideas are not patentable. The '335 patent is directed to the abstract idea of monitoring a data stream and modifying that data when a specific condition is identified.... The limitations in representative claims 1 and 18 "recite generic computer, network and Internet components, none of which is inventive by itself." ... Both claims describe a "processing device" that monitors a preexisting data stream between a server and a client for a specific condition and modifies that stream when that condition is present. But the patent makes clear the processing device can be generic hardware, such as a filter, router, or proxy, or generic software.

Dependent claims 8 and 24 identify a specific condition for the processing device to monitor: a data transmission rate below a set threshold. Identifying a specific condition narrows the scope of these claims. But this additional limitation is not inventive; it is simply a conventional application of the broader idea.... A patent that uses generic components can contain an inventive concept if those generic pieces are arranged in a "non-conventional and non-generic" way.... But the '335 patent does not attempt to patent a discrete and non-conventional means of monitoring and modifying a data stream. In fact, the claims make clear the processing device used to monitor and modify data can be nearly anything and can be placed nearly anywhere, so long as the processing device is not the server that originates the data stream. In other words, the patent attempts to monopolize the abstract idea of monitoring a preexisting data stream between a server and a client for a specific condition and modifying that stream when that condition is present.
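
To see just how generic that monitor-and-modify idea is, here's a minimal sketch of the sort of thing the claims describe: a pass-through relay that watches a server-to-client stream and alters it when the transfer rate drops below a threshold. This is my own illustration, with hypothetical names and values -- not code from the patent or from Cloudflare:

```python
import time

# Purely illustrative -- not code from the '335 patent or from Cloudflare.
# A generic "processing device" sits between a server and a client, monitors
# the data stream for a specific condition (claims 8/24: transfer rate below
# a threshold), and modifies the stream when that condition is present.

RATE_THRESHOLD = 50_000  # bytes/sec -- a hypothetical "specific condition"

def relay(chunks, threshold=RATE_THRESHOLD):
    """Forward chunks from server to client, tacking a marker onto any chunk
    sent while the observed transfer rate is below the threshold."""
    start = time.monotonic()
    sent = 0
    for chunk in chunks:
        sent += len(chunk)
        elapsed = max(time.monotonic() - start, 1e-6)
        if sent / elapsed < threshold:
            # "Modify the data stream when the condition is present."
            chunk += b"<!-- low-bandwidth notice injected here -->"
        yield chunk

if __name__ == "__main__":
    def slow_chunks():
        for _ in range(3):
            time.sleep(0.05)          # simulate a slow link
            yield b"data" * 100       # 400 bytes per chunk
    for out in relay(slow_chunks()):
        print(len(out), "bytes forwarded to the client")
```

A few dozen lines of entirely conventional code -- which is roughly the point the court makes about generic components and abstract ideas.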

Patent cases -- even ones that should be easy -- are pretty famous for forcing defendants through a long and expensive process before they conclude. Normally there is extensive back and forth, with filings and hearings between the parties, as the court determines just what the patents cover and what the defendants are alleged to have done. Here, however, Cloudflare made an early motion to dismiss based on the argument that the patent itself is clearly invalid under the Supreme Court's Alice ruling that abstract ideas are not patentable. The court found this so persuasive that it tossed the case and the patent at this early stage (and did so in just two quick pages).

Blackbird may appeal, but it's difficult to see an appeal getting very far either. And, given that Cloudflare is still soliciting prior art on all of Blackbird's other patents, Blackbird may be interested in getting as far away from Cloudflare as quickly as it can. But, then again, no one said that the people who run patent trolling operations are very smart.

Either way, kudos to Cloudflare for hitting back hard and getting an early victory against patent troll Blackbird.


Posted on Techdirt - 13 February 2018 @ 10:44am

How We Got To The Point That Hollywood Is Trying To Attack The Internet Via NAFTA

from the a-little-history-lesson dept

Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »

Last week we announced our new site EveryoneCreates.org, featuring stories from many different creators of music, books, movies and more about how important the internet and fair use have been to their creations. As we noted, the reason for the site is that the legacy copyright gatekeepers at the MPAA and the RIAA have been using the Trump-requested NAFTA renegotiations to try to undermine both fair use and internet safe harbors by positing a totally false narrative that the internet has somehow "harmed" content creators.

Yet, as we know, and as the stories from various artists show, nothing is further from the truth. For most artists and content creators, the internet has been a huge boon. It has helped them create new art, share it and distribute it to other people, build a fan base and connect with them, and make money selling either their work or related products and services. As we've discussed before, in the past, for most artists, if you did not find a giant gatekeeper to take you on, you were completely out of the market. There was very little "long tail" to be found in most creative industries, because you either were "chosen" by a gatekeeper or you went home and did something else. But the internet has changed that. It has allowed people to go directly to their audiences, or to partner with platforms that help anyone create, distribute, promote and monetize. Indeed, the internet has undoubtedly helped everyone reading this to create art -- whether for profit or just for fun. And if that's the case with you, please share your story.

But it is worth taking a step back and asking an even larger question: how the hell did we get here? How did we get to the point that the MPAA and the RIAA are using NAFTA negotiations to try to undermine the internet? Rest assured: there's a long, long history at play here, and it's important to learn it. The idea that you can or should regulate the internet or intellectual property in trade agreements should seem strange to most people -- especially as most trade agreements these days are about increasing free trade by removing barriers to trade, while copyright by its very nature is mercantile-style trade protectionism that places artificial limits and costs on trade that might otherwise be cheaper.

An excellent history of this topic comes from the aptly named 2002 book Information Feudalism: Who Owns the Knowledge Economy by Peter Drahos and John Braithwaite. It tells the story of how a concerted effort by legacy copyright maximalist organizations laid the groundwork for making sure that copyrights and patents were always included in trade agreements, by getting them in as a key part of the World Trade Organization and through the creation of TRIPS -- Trade-Related Aspects of Intellectual Property Rights. The book details how the legacy industries turned "intellectual property" from a question of benefiting the public into a purely commercial arena of corporate ownership and trade.

Once that was in place, these same industries wasted little time in exploiting the reframing of issues around copyright and patents. Famously, the DMCA itself was created in this manner. The record labels and movie studios had a friend in the Clinton White House in Bruce Lehman, who wrote a white paper in 1995 requesting draconian changes to copyright law targeting the internet. However, he found little support for it in Congress. Five years ago, Lehman himself admitted that when Congress refused to act he did "an end-run around Congress" by going to Geneva and pushing for a trade agreement via the World Intellectual Property Organization (WIPO) which required DMCA-like copyright rules.

With that treaty in hand, Lehman and his Hollywood friends came back to Congress, insisting that our "international obligations" now required Congress to create and pass the DMCA, or we'd suddenly face all sorts of trade and diplomatic problems for failing to live up to those "international obligations" that they themselves had put into the trade agreement. Indeed, ever since then, nearly every international trade agreement has included some crazy provisions related to copyright and patents and other IP rights -- all designed to effectively launder these laws through the highly opaque international trade negotiation process, and then insist that legislatures in various countries simply must ratchet up their laws to meet those obligations.

Given all that, there's at least some irony in the fact that these same groups that forced the DMCA on Congress through an international trade agreement back in the mid-1990s are now trying to use a different trade agreement 20 years later to force changes to that very same law (and others). Once again, the process is opaque. And once again, the industry is well connected and represented on a variety of the "Industry Trade Advisory Committees" (ITACs), giving them much greater access to the details of the negotiations while the public is kept in the dark.

But the history here is clear. Moving copyright into trade agreements was a purposeful move, pushed for by legacy industries so they could promote their favored protectionist laws around the globe, in part by moving them away from being designed for the public's benefit and towards a world in which information and knowledge was considered to be privatized, owned, and locked up by default. It ignored the fact that, often, the public can benefit the most when information is open and widely shared. And, decades later, we're still dealing with the fallout from these bad decisions.

And that's why it's so important for policy makers to understand that it's complete hogwash to argue that the RIAA and MPAA are "representing artists" in trying to undermine the internet this way. Most artists recognize that the internet and various platforms are a key part of their ability to create, distribute, share, and support their artwork these days -- and they are not being represented at the NAFTA negotiating table.

Share your story at EveryoneCreates.org to let policymakers know how important an open internet and fair use is to your own creativity.


Posted on Techdirt - 12 February 2018 @ 9:31am

Waymo And Uber's Settlement Is A Good Thing: Focus On Innovating, Not Litigating

from the took-too-long-already dept

Back in December, right before the Waymo/Uber trial was supposed to begin (before it got delayed due to an unexpected bombshell about withheld evidence that... never actually came up at the trial), I had a discussion with another reporter about the case, in which we both expressed surprise that a settlement hadn't been worked out before going to trial. It seemed as though the case was driven more by the two companies' intense dislike for each other than by a particularly strong legal claim.

A year ago, when the case was filed, I expressed disappointment at seeing Google file this kind of lawsuit. My concern was mainly over the patent part of the case (which was dropped pretty early on), and the fact that Google, historically, had shied away from suing competitors over patents, tending to use them defensively. But I had concerns about the "trade secrets" part of the case as well. While there does seem to be fairly clear evidence that Anthony Levandowski -- the ex-Google employee at the heart of the dispute -- did some sketchy things in the process of leaving Google, starting Otto, and quickly selling Otto to Uber, the case still felt a lot like a backdoor attempt to hold back employee mobility.

As we've discussed for many years, a huge part of the reason for Silicon Valley's success in dominating the innovation world is the ease of employee mobility. Repeated studies have shown that the ease with which employees can switch jobs or start their own companies is a key factor in driving innovation forward. It's the sharing and interplay of ideas that allows the entire industry to tackle big problems. Individual firms may compete around the resulting breakthroughs, but it's the combined knowledge, ideas, and perspective sharing that produces those breakthroughs in the first place.

And even though that's widely known, tech companies have an unfortunate history of trying to stop employees from going to competitors. While non-competes have been ruled out in California, a few years back there was a big scandal over tech companies having illegal handshake agreements not to poach employees from one another. It was a good thing to see the companies fined for such practices.

However, the latest move is to use "trade secrets" claims as a way to effectively accomplish the same thing. The mere threat of lawsuits can stop companies from hiring employees, and can limit an employee's ability to find a new job somewhere else. That should concern us all.

However, in this lawsuit, everything was turned a bit upside down. Part of it was that there did appear to be some outrageous behavior by Levandowski. Part of it was that, frankly, there are few companies out there disliked as much as Uber. If it were almost any other company on the planet, many more people would have been rooting against Google as the big incumbent suing a smaller competitor. But, in this case, many, many people seemed to be rooting for Google out of a general dislike of Uber itself.

My own fear was that this general sense of "Uber = bad," combined with "Levandowski doing sketchy things," could lead to a bad ruling that would then be used to limit employee mobility in much more sympathetic settings. Thankfully, that seems unlikely to happen. As Sarah Jeong (whose coverage of this case was absolutely worth following) noted, despite all the rhetoric, it wasn't at all clear that Waymo proved its case. Lots of people wanted Google/Waymo to win for emotional reasons, but the legal evidence wasn't clearly there.

And now the case is over. As the trial was set to continue Friday morning, it was announced that the two parties had reached a settlement, in which Uber basically hands over a small chunk of equity to Waymo (less than Waymo first tried to get, but still significant). As Jeong notes in another article, both sides had ample reasons to settle -- but the best reason of all is so that they can focus on competing in the market rather than in the courtroom, and so that the case doesn't set a bad and dangerous precedent concerning employee mobility in an industry where that mobility is vital.


Posted on Techdirt - 9 February 2018 @ 7:39pm

Twitter & Facebook Want You To Follow The Olympics... But Only If The IOC Gives Its Stamp Of Approval

from the what-the-fuck-twitter? dept

It is something of an unfortunate Techdirt tradition that every time the Olympics rolls around, we are alerted to more nonsense from the organizations that put on the event -- mainly the International Olympic Committee (IOC) -- going out of their way to be completely censorial in the most obnoxious ways possible. And, even worse, we watch as various governments and organizations bend to the IOC's will on no legal basis at all. In the past, this has included the IOC's ridiculous insistence on extra trademark rights that are not based on any actual laws. But, in the age of social media, it's gotten even worse. The Olympics and Twitter have a very questionable relationship, as Twitter has been all too willing to censor content on behalf of the Olympics, while Olympic committees, such as the USOC, continue to believe that merely mentioning the Olympics is magically trademark infringement.

So, it's only fitting that my first alert to the news that the Olympics are happening again was hearing how Washington Post reporter Anna Fifield, who covers North Korea for the paper, had her video of the unified Korean team taken off Twitter based on a bogus complaint by the IOC.

And Twitter complied even though the takedown is clearly bogus. Notice Fifield says that it is her video? The IOC has no copyright claim at all in the video, yet they filed a DMCA takedown over it. The copyright is not the IOC's and therefore the takedown is a form of copyfraud. Twitter should never have complied and shame on the company for doing so. Even more ridiculous: Twitter itself is running around telling people to "follow the Olympics on Twitter." Well, you know, more people might do that if you weren't taking down reporters' coverage of those very same Olympics.

Oh, and it appears that Facebook is even worse: it's pre-blocking the uploads of such videos entirely.

This is fucked up and both the IOC and Facebook should be ashamed. The IOC can create rules for reporters and can expel them from the stadium if they break those rules, but there is simply no legal basis for them to demand such content be taken off social media, and Twitter and Facebook shouldn't help the IOC censor reporters.


Posted on Techdirt - 8 February 2018 @ 9:26am

End Of An Era: Saying Goodbye To John Perry Barlow

from the pioneer dept

I was in a meeting yesterday when the person I was meeting with mentioned that John Perry Barlow had died. While he had been sick for a while, and there had been warnings that the end might be near, it's still somewhat devastating to hear that he is gone. I had the pleasure of interacting with him both in person and online multiple times over the years, and each time was a joy. He was always insightful, thoughtful and deeply empathetic.

I can't remember for sure, but I believe the last time I saw him in person was a few years back at a conference (I don't even recall what conference), where he was on a panel that had no moderator, and literally seconds before the panel was to begin, I was asked to moderate the panel with zero preparation. Of course, it was easy to get Barlow to talk, and to make it interesting, even without preparation. But that day the Grateful Dead's Bob Weir (for whom Barlow wrote many songs -- after meeting as roommates at boarding school) was in the audience -- and while the two were close, they disagreed on issues related to copyright, leading to a public debate between the two (even though Weir was not on the panel). It was fascinating to observe the discussion, in part because of the way in which Barlow approached it. Despite disagreeing strongly with Weir, the discussion was respectful, detailed and consistently insightful.

Lots of people are, quite understandably, pointing to Barlow's famous Declaration of the Independence of Cyberspace (which was published 22 years ago today). Barlow later admitted that he dashed most of it off in a bar during the World Economic Forum, without much thought. And that's why I'm going to separately suggest two other things by Barlow to read as well. The first is his Wired piece, The Economy of Ideas, from 1994 -- the second year of Wired's existence, when Barlow's wisdom could be found in every issue. Despite being written almost a quarter of a century ago, The Economy of Ideas is still fresh and relevant today. It is more thoughtful and detailed than his later "Declaration" and, if anything, I would imagine that Barlow was annoyed that the piece remains so relevant today. He'd think we should be way beyond the points he was making in 1994, but we are not.

The other piece, a more recent one that I've seen a few people pointing to, is his Principles of Adult Behavior: a list of 25 rules to live by -- rules that we should be reminded of constantly. Rules that many of us (and I'm putting myself first on this list) fail to live up to all too frequently. Update: I stupidly assumed that was a more recent writing by Barlow, but as noted in the comments (thanks!), it's actually from 1977, when Barlow turned 30.

Cindy Cohn, who is now the executive director of EFF, which Barlow co-founded, mentions in her writeup how unfair it is that Barlow (and, specifically, his Declaration) is often held up as a kind of prototype for the "techno-utopian" vision of the world that is so frequently mocked today. Yet, as Cohn points out, that's not at all how Barlow truly viewed the world. He saw the possibilities of that utopia while recognizing the potential realities of something far less good. The utopianism that Barlow presented to the world was not -- as many assume -- a claim that these things were a sort of manifest destiny, but rather the hope that, by presenting such a utopia, we might all strive and push and fight to actually achieve it.

Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed that the Internet could solve all of humanity's problems without causing any more. As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth. Barlow knew that new technology could create and empower evil as much as it could create and empower good. He made a conscious decision to focus on the latter: "I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls 'turn-key totalitarianism.'”

Just yesterday, before I learned of Barlow's passing, we officially launched a new website, EveryoneCreates.org, which shows just how ridiculous the myth -- pushed by the RIAA and MPAA and their friends -- of some sort of "war" between "content and tech" really is. According to that narrative, the internet has done much to harm content creators. Yet, everywhere we look, we see the opposite: content creators have been enabled by these technologies to create, to share, to distribute and, yes, to make money from their creations. Barlow was one of the first, if not the first, content creators from the "old" world to wholeheartedly embrace the promise of the internet, and he spent his life dedicated to making the internet such a powerful place for all of us content creators.

Either way, this is the end of an era. We're in an age now where the general narrative making the rounds is, once again, a moral panic over how terrible everything in technology is. Barlow spent decades teaching us about the possibilities of a better world on the internet, and nudging us, sometimes gently, sometimes forcefully, in that direction. And now, just at the point where that vision is most at risk, he's left us to continue the fight on our own. The internet world has many challenges ahead of it -- and we should all strive to be guided by Barlow's principles and by his vision of constantly pushing to mold the technology world into the world we want it to be: not ignoring the negatives, but looking for ways to get beyond them and expand the opportunities for the good to come out. It will be harder without him here to guide us.


Posted on Techdirt - 7 February 2018 @ 9:00am

On The Internet, Everyone Is A Creator

from the it's-not-a-broadcast-medium dept

Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »

One theme that we've covered on Techdirt since its earliest days is the power of the internet as an open platform for just about anyone to create and communicate. Simultaneously, one of our greatest fears has been how certain forces -- often those disrupted by the internet -- have pushed over and over again to restrict and contain the internet, and turn it into something more like a broadcast platform controlled by gatekeepers, where only the chosen few can use it to create and share. This is one of the reasons we've been so adamant over the years that in so many policy fights, "Silicon Valley v. Content" is a false narrative. It's almost never true -- because the two go hand in hand. The internet has made it so that everyone can be a creator. Internet platforms have made it so that anyone can create almost any kind of content they want, they can promote that content, they can distribute it, they can build a fan base, and they can even make money. That's in huge contrast to the old legacy way of needing a giant gatekeeper -- a record label, a movie studio, or a book publisher -- to let you into the exclusive club.

And yet, those legacy players continue to push to make the internet into more of a broadcast medium -- to restrict that competition, to limit the supply of creators and to push things back through their gates under their control. For example, just recently, the legacy recording and movie industries have been putting pressure on the Trump administration to undermine the internet and fair use in NAFTA negotiations. And, much of their positioning is that the internet is somehow "harming" artists, and needs to be put into check.

This is a false narrative. The internet has enabled so many more creators and artists than it has hurt. And to help make that point, today we're launching a new site, EveryoneCreates.org, which features stories and quotes from a variety of different creators -- including bestselling authors, famous musicians, filmmakers, photographers and poets -- all discussing how important an open internet has been to building their careers and creating their art. On that same page, you can submit your own stories about how the internet has helped you create, and why it's important that we don't restrict it. Please add your own stories, and share the site with others too!

The myth that this is "internet companies v. creators" needs to be put to rest. Thanks to the internet, everyone creates. And let's keep it that way.

Visit EveryoneCreates.org to read stories of creation empowered by the internet, and share your own! »

