Mike Masnick’s Techdirt Profile


About Mike Masnick, Techdirt Insider

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick

Posted on Techdirt - 19 March 2018 @ 10:32am

Hollywood's Behind-The-Scenes Support For SESTA Is All About Filtering The Internet

from the you-know-it dept

Over at the EFF blog, Joe Mullin has an excellent discussion of why Hollywood is such a vocal supporter of SESTA, despite the bill having nothing to do with Hollywood. It's because the bill actually accomplishes a goal that Hollywood has dreamed about for years: mandatory filtering of all content on the internet.

For legacy software and entertainment companies, breaking down the safe harbors is another road to a controlled, filtered Internet—one that looks a lot like cable television. Without safe harbors, the Internet will be a poorer place—less free for new ideas and new business models. That suits some of the gatekeepers of the pre-Internet era just fine.

The not-so-secret goal of SESTA and FOSTA is made even more clear in a letter from Oracle. “Any start-up has access to low cost and virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software,” wrote Oracle Senior VP Kenneth Glueck. In his view, Internet companies shouldn’t “blindly run platforms with no control of the content.”

That comment helps explain why we’re seeing support for FOSTA and SESTA from odd corners of the economy: some companies will prosper if online speech is subject to tight control. An Internet that’s policed by “copyright bots” is what major film studios and record labels have advocated for more than a decade now. Algorithms and artificial intelligence have made major advances in recent years, and some content companies have used those advances as part of a push for mandatory, proactive filters. That’s what they mean by phrases like “notice-and-stay-down,” and that’s what messages like the Oracle letter are really all about.

There's a lot more in Mullin's post, but it actually goes much beyond that. Lift up any rock around SESTA's support and you magically find Hollywood people scurrying quietly underneath. We've already noted that much of the initial support for SESTA came from a group whose then-board chair was a top lobbyist for News Corp. And, as we reported last month, after a whole bunch of people we spoke to suggested that much of the support for SESTA was being driven by former top News Corp. lobbyist Rick Lane, we noticed that a group of people who went around Capitol Hill telling Congress to support SESTA publicly thanked their "partner" Rick Lane for showing them around.

In other words, it's not just Hollywood seeing a bill that gets it what it wants and suddenly speaking up in favor of it... this is Hollywood helping to make this bill happen in the first place, as part of its ongoing effort to remake the internet away from being a communications medium for everyone, and into a broadcast/gatekeeper-dominated medium where it gets to act as the gatekeeper.

And if you think that Hollywood big shots are above pumping up a bogus moral panic to get their way, you haven't been paying attention. Remember, for years Hollywood has also pushed the idea that the internet requires filters and censorship for basically any possible reason. Back during the SOPA days, it focused on "counterfeit pharmaceuticals." Again, not an issue that Hollywood is actually concerned with, but if it helped force filters and stopped user-generated content online, Hollywood was quick to embrace it.

Remember, after all, that the MPAA set up Project Goliath to attack Google, and a big part of that was paying its own lawyers at the law firm Jenner & Block to write demand letters for state Attorneys General, like Mississippi Attorney General Jim Hood, who sent a bogus subpoena and demand letter to Google (written by the MPAA's lawyers, and billed to the MPAA). And what did Hood complain about to Google in that letter written by the MPAA's lawyers? You guessed it:

Hood accused Google of being “unwilling to take basic actions to make the Internet safe from unlawful and predatory conduct, and it has refused to modify its own behavior that facilitates and profits from unlawful conduct.” His letter cites not just piracy of movies, TV shows and music but the sale of counterfeit pharmaceuticals and sex trafficking.

The MPAA has cynically been using the fact that there are fake drugs and sex trafficking on the internet for nearly a decade to push for undermining the core aspects of the internet. They don't give a shit that none of this will stop sex trafficking (or that it will actually make life more difficult for victims of sex trafficking). The goal, from the beginning, was to hamstring the internet, and return Hollywood to what it feels is its rightful place as the gatekeeper for all culture.

Indeed, our earlier post about Senator Blumenthal's bizarre email opposing a basic SESTA amendment from Senator Wyden to fix the "moderator's dilemma" was quite telling. He falsely claimed that adding that amendment -- which merely states that the act of doing some moderation or filtering doesn't append liability to a site for content it fails to filter or moderate (the crux of CDA 230's "Good Samaritan" language) -- would create problems for Hollywood. Indeed, a key part of Blumenthal's letter is the claim that this amendment "has the potential to disrupt other areas of the law, such as copyright protections."

But that makes zero sense at all. CDA 230 does not apply to copyright. It doesn't apply to any intellectual property law, as intellectual property is explicitly exempted from all of CDA 230 and has been from the beginning. Nothing in the Wyden amendment changes that. And... it does seem quite odd for Blumenthal to suddenly be bringing up copyright in a discussion about CDA 230, unless it's really been Hollywood pushing these bills all along, and thus in Blumenthal's mind, SESTA and copyright are closely associated. As Prof. Eric Goldman notes, talking nonsensically about copyright in this context appears to be quite a tell by Senator Blumenthal.


Posted on Techdirt - 19 March 2018 @ 9:14am

SESTA's Sponsors Falsely Claim That Fixing SESTA's Worst Problem Harms Hollywood

from the say-what-now? dept

Earlier today, in discussing a long list of possible fixes for SESTA, we noted that the only one that even has a remote chance (i.e., the only fix that actually has the potential of being considered by the Senate) is Senator Wyden's amendment, which is designed to solve the "moderator's dilemma" issue by clarifying that merely using a filter or doing any sort of moderation for the sake of blocking some content does not automatically append liability to the service provider for content not removed. Senator Portman -- the sponsor of the bill -- has insisted (despite the lack of such language in the bill) that this is how SESTA should be interpreted. Specifically, Portman stated that SESTA:

...does not amend, and thus preserves, the Communications Decency Act’s Good Samaritan provision. This provision protects good actors who proactively block and screen for offensive material and thus shields them from any frivolous lawsuits.

Except, that's not what the bill actually says. Which is why the language in the Wyden amendment is so important. It basically adds into the law what Portman pretends is already there.

Thus, you would think that Portman and the other Senators backing SESTA would also support the Wyden amendment. They do not. Senator Richard Blumenthal -- who has spent years attacking the internet, and who has already stated that if SESTA kills small internet businesses he would consider that a good thing -- is opposed to the amendment, and sent out a letter supposedly co-signed by other SESTA supporters:

Senators Blumenthal, McCaskill and the other bipartisan sponsors of SESTA oppose the Wyden amendments. These amendments threaten to derail the bill and they would make it even more difficult than current law to hold websites that sexually traffic minors like Backpage.com accountable….The safe harbor amendment would provide websites like Backpage.com with even stronger legal protections than they enjoy today. It also has the potential to disrupt other areas of the law, such as copyright protections. This “bad Samaritan” amendment is not a clarification or a protection for good actors–it is an additional tool to protect traffickers and illegal conduct online.

Here's the problem with that. Almost everything stated above is 100% factually wrong. And not just a little bit wrong. It's so wrong that it raises serious questions about whether Blumenthal understands some fairly fundamental issues in the bill he's backing. Professor Eric Goldman has a pretty concise explanation of everything that's wrong with the statement, noting that it -- somewhat incredibly -- shows that SESTA's main sponsors don't even understand the very basic aspects of CDA 230, even as they insist on changing the law.

There are at least three obvious problems with this email. First, the amendment would indeed protect good actors because it would eliminate the Moderator’s Dilemma. The authors of this email still don’t understand, or have decided to ignore, the Moderator’s Dilemma. Second, the proposed amendment would not help Backpage–at all. The Senate Investigative Committee report highlighted voluminous facts about Backpage’s knowledge, so I can’t see how Backpage’s purported filtering would come up in any SESTA/FOSTA enforcement.

Third, the email indicates that the amendment “has the potential to disrupt other areas of the law, such as copyright protections.” This is where the screen freezes, the record scratches, and the narrator says in a deadpan, “No, it wouldn’t.” Section 230(e)(2) expressly carves out “intellectual property” claims–including copyright–from Section 230’s coverage. Anyone with even a basic understanding of Section 230 knows this. Yet, the sponsors, on the eve of a decisive vote with monumental stakes for Section 230, appear to be demonstrating a fundamental misunderstanding of what Section 230 says and does. That is very, very confidence-rattling.

Worse, the email has it precisely backwards. The amendment would HELP, not DISRUPT, copyright protection efforts. If services stuck in the Moderator’s Dilemma decide to turn off proactive moderation efforts, that will include turning off copyright filtering. In other words, SESTA/FOSTA may have the unwanted consequence of encouraging Internet services to do LESS copyright filtering. (This is just one of many examples of my claim that SESTA/FOSTA may counterproductively increase anti-social content). The amendment would fix that by not holding their copyright filtering efforts against Internet services for sex trafficking or prostitution promotion purposes, i.e., by filtering for copyright, the Internet services won’t fear that a court will ask why their filters missed promotions for sex trafficking or prostitution. So if Congress wants to avoid “disrupting” efforts to combat online copyright infringement, the amendment is essential.

This isn't a matter of differing opinions. This is the main backers of a bill to drastically change CDA 230 insisting that (1) their bill does something it does not and (2) a fix to their bill that would bring it into line with what they claim their bill does... actually does a bunch of things it absolutely does not.

At this point, you have to start wondering what the hell is happening in the Senate, and in particular in Senator Blumenthal's office. He is not just doing a big thing badly -- he is gleefully spouting the exact opposite of basic facts about both the existing law, and the bill he sponsored. I know that politicians aren't exactly known for their honesty, but he seems to be taking this to new levels -- and causing massive harm in the process.


Posted on Techdirt - 19 March 2018 @ 6:13am

Both Facebook And Cambridge Analytica Threatened To Sue Journalists Over Stories On CA's Use Of Facebook Data

from the this-is-bad dept

I'm going to assume that you weren't living in an internet-proof cave this weekend, and caught at least some of the stories about Cambridge Analytica and Facebook. The news first kicked off with the announcement of a data protection lawsuit filed against Cambridge Analytica in the UK on Friday evening (we'll likely have more on that lawsuit soon), followed quickly by an attempt by Facebook to get out ahead of the coming tidal wave by announcing that it was suspending Cambridge Analytica and some associated parties from its platforms, claiming terms of service violations. This was quickly followed on Saturday by two explosive stories. The first, from Carole Cadwalladr at The Guardian, revealed a "whistleblower" from the very early days of Cambridge Analytica named Christopher Wylie, who more or less set up how it works with data profiles. This was quickly followed up by another story at the NY Times, which was a bit more newsy, providing more details on how Cambridge Analytica got data on about 50 million people out of Facebook.

Admittedly -- much of this isn't actually new. The Intercept had reported something similar a year ago, though it only said it was 30 million Facebook users, rather than 50 million. And that story built on the work of a 2015 (yes, 2015) story in the Guardian discussing how Cambridge Analytica was using data from "tens of millions" of Facebook users "harvested without permission" in support of Ted Cruz's presidential campaign.

There's a lot of heat on this story right now, and a lot of accusations being thrown around, and I'll admit that I'm not entirely sure where I come down on the details yet. I assume people on basically both sides of this issue will scream at me and call me names over this, but there's too much going on to fully understand what happened here. I will note that, in that Guardian story in 2015, Cruz told the publication that this data collecting and targeting effort was "very much the Obama model." And political consultant Patrick Ruffini has a Twitter thread, well worth reading, arguing that people are overreacting to much of this, and that the 2012 Obama campaign did the exact same thing -- and was celebrated for its creative use of data and targeting on the internet. Ad tech guy Jay Pinho makes the same point as well. Here's a Time article from 2012 excitedly talking up how the Obama campaign used Facebook in the same way:

That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists.

Of course, there is one major difference between the Obama effort and the Cambridge Analytica one, which involves the level of transparency. With the Obama campaign, people knew they were giving their data (and their friends' data) to the cause of re-electing Obama. Cambridge Analytica got its data by having a Cambridge academic (who, the new Guardian story revealed for the first time, also holds an appointment at St. Petersburg University) set up an app that was used to collect much of this data, and misled Facebook by claiming it was purely for academic purposes, when the reality is that it was set up and directly paid for by Cambridge Analytica with the intent of sucking up that data for Cambridge Analytica's database. Is that enough to damn the whole thing? Perhaps.

As for the claims that this is just the same old Facebook model of selling everyone's data... that wasn't true then and it isn't accurate now. Facebook doesn't sell your data. It sells access to its users via the data it has on you. That may not seem different, but it is different. But the lines do seem to get a bit blurry, as it appears that Cambridge Analytica, via its partnership with the professor Dr. Aleksander Kogan (who apparently briefly changed his name to -- I kid you not -- Dr. Spectre) and his "Global Science Research," basically paid people via Amazon's Mechanical Turk to take a "personality assessment" on Facebook that, as part of the process, exposed information about their entire social graph, which GSR apparently hoovered up and passed along to Cambridge Analytica.
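
To make that amplification concrete, here's a minimal sketch -- with a toy graph and made-up names, not Facebook's actual API -- of how the pre-2015 friend-permissions model let one user's consent expose the profiles of friends who never opted in:

```python
# Toy model (not Facebook's actual API) of friend-permission amplification.
# Everything here -- names, graph, numbers -- is an illustrative assumption.

social_graph = {
    "quiz_taker_1": ["friend_a", "friend_b", "friend_c"],
    "quiz_taker_2": ["friend_c", "friend_d"],
    "friend_a": ["friend_b"],  # friend_a never installed the app
}

def harvest(app_users: list[str]) -> set[str]:
    """Profiles the app can collect: each installer plus their friend list."""
    collected = set()
    for user in app_users:
        collected.add(user)                           # consented directly
        collected.update(social_graph.get(user, []))  # never consented
    return collected

# Two installs yield six profiles. The same multiplier is how roughly
# 270,000 quiz takers reportedly became data on ~50 million accounts.
print(sorted(harvest(["quiz_taker_1", "quiz_taker_2"])))
```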

At the very least, it can be said that Facebook should have recognized much earlier that this could and would be done, and should have understood the potential privacy problems related to it. Facebook has a fairly long and painful history of not quite realizing how what it does impacts people's privacy, and this is one more example.

But it's raising a bigger question as well, and it's one that caused Facebook to do something that I'll definitively call "incredibly stupid": it threatened to sue the Guardian over its story, mainly because the Guardian story refers to this whole mess as a "data breach" of Facebook's data.

And, of course, Facebook wasn't the only one who threatened to sue. Cambridge Analytica did too:

The Observer also received the first of three letters from Cambridge Analytica threatening to sue Guardian News and Media for defamation.

There are issues of terminology here. Facebook, in its post, is adamant that what happened is not a "breach":

The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.

There are legal reasons why Facebook is so concerned about whether or not this is a "breach" and, let's face it, the company is about to face a million and a half lawsuits over this, not to mention government investigations (Senator Amy Klobuchar has already demanded Mark Zuckerberg's head on a platter -- er, testimony -- before the Senate, Massachusetts Attorney General Maura Healey has announced the opening of an investigation, and there have also been rumblings out of the UK and the EU, as well as the FTC). But there are also some fairly important legal obligations if this was a "breach" in the traditional sense, such as disclosing it to those impacted by the breach.

I'm not entirely sure where I come down on the breach question. It doesn't feel like a traditional breach. It wasn't that Facebook coughed up this info; it was its users who coughed up the info... and Facebook just made it easy for this outside "academic" to hoover it all up by paying a bunch of people to take dopey personality quizzes. However, as the Guardian's Alex Hern points out, how do you distinguish what Kogan/GSR/Cambridge Analytica did from social engineering to get information?

Of course, there is something of a difference: it still wasn't Facebook per se coughing up the info. It was Facebook's own users. And, you might even argue that if you believe that Facebook doesn't "own" all this data in the first place, it was actually those Facebook users coughing up a bunch of their own data -- including lots of data about their friends. Needless to say, this is a mess where a lot more transparency might help, and that transparency is going to be forced upon Facebook with a sledgehammer in the very near future.

But, regardless of where you come down on all of this, Facebook threatening defamation against the Guardian for calling this a data breach is ludicrous and Facebook should be ashamed and apologize. Even as it clearly disagrees with how the Guardian characterized much of the story, that's no excuse to whip out defamation threats. Not only is it incredibly stupid from a Facebook PR perspective (and makes the company look like a giant bully), it suggests that the company still has absolutely no fucking clue how to communicate with the press and the public about how its own platform works.

It's actually quite incredible to recognize just how big Facebook has gotten in the face of how little it seems to understand about what its own platform does.


Posted on Techdirt - 19 March 2018 @ 3:14am

Can SESTA Be Fixed?

from the will-it-be-fixed? dept

It appears that sometime this week (or even possibly today), the Senate is unfortunately likely to vote (perhaps by an overwhelming margin) for SESTA, despite the fact that it's a terribly drafted bill whose own supporters can't explain how it will actually stop sex trafficking. Indeed, it's a bill that many victims' advocates are warning will not just make problems worse, but will put lives in danger. And that's leaving aside all of the damage it will do to free speech and tons of websites on the internet.

Much of this could have been avoided if anyone in Congress were actually interested in understanding how the internet worked, and how to write a bill that actually addressed problems around sex trafficking -- rather than buying into a false narrative (pushed mainly by Hollywood) that the liability protections of CDA 230 were magically responsible for sex traffickers using the internet. Two academics who are probably the most knowledgeable experts on intermediary liability, Daphne Keller at Stanford and Eric Goldman at Santa Clara University, have each posted thoughts on how to "salvage" SESTA. If Congress were serious, it would listen to them. But that's a big "if."

Let's start with Keller's suggestions that she helpfully put into a Twitter thread:

First up, she takes on the problematic "knowledge" standard used in SESTA/FOSTA. Again, a key part of the bill is that internet sites can become liable if they have "knowledge" of sex trafficking activity that is done on the platform. But what the hell is meant by "knowledge"? In other parts of the law, even when it's more spelled out, there are examples of legal cases lasting years while everyone wrangles over what "knowledge" means. In the copyright context, Viacom sued YouTube and the two were in court for more than half a decade, with much of that fight being over the simple question of whether knowledge meant "specific" knowledge or "general" knowledge. SESTA could solve many of its problems if it made its knowledge standard clear -- and, as Keller notes, one that wouldn't require "teams of lawyers."

Indeed, this is perhaps the largest problem with SESTA (and may also doom the bill in court). Prosecutors and the DOJ have already raised concerns about the standards in the bill, and even the politicians supporting it toss out very, very different definitions. Senator Rob Portman has claimed it requires "intent." Meanwhile, Rep. Cathy McMorris Rodgers claims that the standard is "knowingly turning a blind eye." That's... extremely different. Senator Cory Booker claims it's "a high standard" that requires "proving beyond a reasonable doubt." All of those mean very different things, and when you have the politicians backing the bill all spouting nonsense, and the law itself doesn't clarify, you're making a huge mess.

Keller's second suggestion is to add in real and meaningful penalties for bad faith accusers as well as an appeals process for the accused. This is also a big deal. Again, looking at the DMCA, we've talked about how the one part of that law dealing with bad faith accusations is basically toothless and almost never useful. And thus, the DMCA is abused all the time. We have all those lessons to learn from -- and it appears that Congress is ignoring them.

Up next would be a clear statement that the law does not require monitoring all speech. Such a mandatory monitoring system would have tremendous First Amendment issues -- but unfortunately it seems likely that some may read the bill to require mandatory filtering (oddly, others will read it as saying you shouldn't use filters at all to avoid knowledge -- and that dichotomy of results should just emphasize how poorly the bill was drafted).

Fourth, Keller suggests making it clear that merely monitoring should not be deemed as knowledge (this could be seen as related to clarifying the knowledge standard as well). On that front, there may be an amendment on the table that could help (see below...).

Fifth: the bill should make it clear that it applies to service providers that are end-user facing, rather than those further up the stack. Again, here's a lesson that we've learned from takedowns in the copyright space. As Hollywood got more and more upset about various things online, it continued to move up the stack beyond services to hosting companies, data centers, registrars and even ICANN itself. We shouldn't allow SESTA to enable the same nonsense.

Finally, Keller suggests that if we must go through with such a bad bill, there should be some requirements on transparency about the impacts for both tech platforms and government agencies, so that we can look back on the bill and determine what it did -- both good and bad.

Will Congress take any of these steps? It doesn't look like it.

As for Goldman, his post focuses on an amendment that Senator Wyden is offering. Last I heard, it appears that the Senate may actually consider this amendment. And it's an amendment similar to one that Goldman himself suggested -- a very modest addition to SESTA clarifying the whole question of whether "monitoring" equals "knowledge." Specifically, the amendment would add the following language:

The fact that a provider or user of an interactive computer service has undertaken any efforts (including monitoring and filtering) to identify, restrict access to, or remove material the provider or user considers objectionable shall not be considered in determining the criminal or civil liability of the provider or user for any material that the provider or user has not removed or restricted access to.

As Goldman notes, this one amendment would fix the worst problems of SESTA (while still leaving in place plenty of others). If you at least support making SESTA less horrible, he suggests calling your Senators and letting them know:

If you think this is a meritorious fix to a bad bill, then *immediately* call your Senators (you have 2, remember!) and tell them:

1) You oppose SESTA/FOSTA because it’s not clear the law actually helps sex trafficking victims; and

2) You want your Senator to support Sen. Wyden’s proposed content moderation amendment because it ensures online services will keep being the first line of defense in the fight against sex trafficking.

Note 1: This issue could be moot as early as Monday afternoon, so literally CALL NOW.

Note 2: CALL, not email. The EFF has made it easy for you to do.

It seems quite likely the bill is going to pass very soon and then get signed into law. The fact that there are simple and reasonable ways to improve on the bill, which Congress is blatantly ignoring, is problematic.


Posted on Techdirt - 15 March 2018 @ 11:54am

How Trump's Lawyer's Silly Lawsuit Against Buzzfeed May Free Stormy Daniels From Her Non Disclosure Agreement

from the own-goals dept

We've written about Trump's long-term personal lawyer Michael Cohen a few times before. The first time was back in 2015 when he made a particularly stupid threat against reporters for reporting on Cohen's own stupid comments. In case you don't remember:

“I will make sure that you and I meet one day while we’re in the courthouse. And I will take you for every penny you still don’t have. And I will come after your Daily Beast and everybody else that you possibly know,” Cohen said. “So I’m warning you, tread very fucking lightly, because what I’m going to do to you is going to be fucking disgusting. You understand me?”

“You write a story that has Mr. Trump’s name in it, with the word ‘rape,’ and I’m going to mess your life up… for as long as you’re on this frickin’ planet… you’re going to have judgments against you, so much money, you’ll never know how to get out from underneath it,” he added.

That lawsuit never materialized.

The second time Cohen was written about here was when he did sue the press. Earlier this year he actually filed a lawsuit against Buzzfeed over Buzzfeed's decision to publish the infamous Christopher Steele dossier. As we noted, this lawsuit was particularly nonsensical, as he's suing Buzzfeed for statements in the dossier made by someone else.

But, now it appears that that lawsuit may backfire in a way so spectacular, I don't think any novelist could create a twist this diabolical.

You see, Cohen is also at the center of the whole Stormy Daniels mess. If you somehow have been under a giant rock for the past month or so, Cohen has admitted to paying $130,000 to Daniels (real name: Stephanie Clifford). As multiple places have reported, Daniels was apparently paid the money as part of an agreement to buy her silence over an affair she had with Donald Trump a decade or so ago. There's a huge list of important questions around all of this, including whether the whole thing violated campaign finance laws (which it very likely did).

A big part of the fight is over whether or not Daniels can really tell her story. We've noted that Trump lawyers are threatening to go to court to stop CBS from airing an interview, while Daniels' lawyers have argued that the agreement is not valid as Trump never signed it -- while also offering to pay back the $130,000 to break the agreement (which... uh... is not exactly how it works). And I won't even get into the hilariously meaningless "private" temporary restraining order that Cohen went to an arbitration firm to get, without even notifying Daniels.

Enter Buzzfeed: one of its lawyers on the Cohen case, Katherine Bolger from powerhouse law firm Davis, Wright, Tremaine, just sent a letter to Daniels' lawyer, Michael Avenatti, asking Daniels to preserve the documents at issue (i.e., the gag agreement), noting that this may be relevant to their own defense against Cohen. This suggests a plan to subpoena this information, which would likely free it from the gag order (and hand Buzzfeed one hell of a story). The preservation demand covers a lot of potentially interesting info:

This includes without limitation all relevant ESI (including but not limited to e-mail), banking records, Word documents, spreadsheets, PDFs, reports, articles, books, memos, letters, calendar entries, handwritten notes, text messages, chats, phone messages, phone logs, audio recordings, or any other type of document or communication, final or draft, in either written or electronic format.

"ESI" in the above stands for "electronically stored information." The letter also asks for details of "any and all payments made by Mr. Cohen or Essential Consultants, LLC to Ms. Clifford, including but not limited to documents that would show the means by which the funds were transferred and/or the payments were made."

So why does Buzzfeed argue this is relevant to its own case? Well, because Cohen's lawsuit against Buzzfeed argues that Buzzfeed defamed him by implying that he had some role in possible Russian connections with the Trump campaign -- and Buzzfeed argues that cash payments Cohen was making to someone to silence them around the campaign are directly relevant to the questions at play in the lawsuit:

In his Complaint... Mr. Cohen asserts a claim for defamation based on an article published by Defendant BuzzFeed in January 2017 entitled "These Reports Allege Trump Has Deep Ties to Russia".... The Article contained an embedded document file containing a 35-page collection of memoranda that primarily discuss Russian efforts to influence the 2016 U.S. Presidential election, including alleged ties between Russia and President Trump's campaign... The memoranda in the Dossier contain certain references to Mr. Cohen that Mr. Cohen alleges falsely imply that he played a role in facilitating Russian interference in the election...

Mr. Cohen's role in President Trump's 2016 campaign, including but not limited to any payments he made or facilitated to third parties during or in connection with the campaign, is therefore directly relevant to the Action.

Who knows if this move will actually work, but if it does, that would be quite an incredible "own goal" by Cohen, in which his own silly lawsuit unravels the other legal mess that he's been trying to keep under wraps. This is the kind of plot twist most novelists can only dream about (or reject for sounding too implausible to be real)...


Posted on Techdirt - 15 March 2018 @ 9:38am

As Trump Nominates Torture Boss To Head CIA, Congresswoman Suggests It's Sympathizing With Terrorists To Question Her Appointment

from the say-what-now? dept

Update: Late this evening, ProPublica retracted and corrected a story from last year that had said Haspel was in charge of the Thai CIA prison site while Zubaydah was tortured. That does suggest that some of the accusations against Haspel should actually be blamed on her predecessor. As the correction notes, she did not arrive to run the base until October of 2002, after Zubaydah's torture had concluded. However, the report quotes the NY Times saying that she did still oversee the torture of Abd al-Rahim al-Nashiri, and that she was still involved in the destruction of the video tapes of the torture sessions -- both of which should be disqualifying for the job.

In addition, these kinds of mistakes wouldn't be made if the government actually came clean about what it did and who did it. Revealing who ran that prison site and what they did would not harm national security. It would provide an accurate accounting of what really went down. I'm sure that some Haspel supporters will argue that this correction means that all of the concerns about Haspel are "fake news," even though that's clearly not true at all. Instead, this seems like even more evidence for why the details of her involvement need to be declassified before she faces confirmation hearings in the Senate. Our original article is below.

As you've probably heard, with the latest in the neverending rotating cast of characters that makes up the current Trump administration, a set of dominoes has been knocked over with the tweeted firing of Secretary of State Rex Tillerson and the nomination of CIA boss (and former Congressional Rep/longtime defender of surveillance and torture) Mike Pompeo to replace him. While Pompeo was a vocal supporter of the CIA's torture program, he didn't actually have any hand in running it. Instead, that distinction goes to Gina Haspel, whom Trump has nominated to take Pompeo's place. Haspel not only oversaw parts of the CIA's torture program, she was also directly involved with the destruction of the video tapes showing the torture procedures. The still classified 6,700 page Senate report on the program apparently contains a lot of details about the program that Haspel ran while running a CIA black site in Thailand. Annabelle Timsit has helpfully pulled together some details of what is currently known from the heavily redacted declassified executive summary (you may recall we spent years writing about the fight to just release that summary). What's stunning is that the program so disgusted CIA employees that some were at the "point of tears and choking up" and multiple people on site asked to be moved to other locations if the CIA was going to continue these torture techniques. From the report (see the update above, noting that these quotes were from a couple months before Haspel took over):

CIA personnel at DETENTION SITE GREEN reported being disturbed by the use of the enhanced interrogation techniques against Abu Zubaydah. CIA records include the following reactions and comments by CIA personnel:

  • August 5, 2002: “want to caution [medical officer] that this is almost certainly not a place he’s ever been before in his medical career. … It is visually and psychologically very uncomfortable.”
  • August 8, 2002: “Today’s first session … had a profound effect on all staff members present … it seems the collective opinion that we should not go much further … everyone seems strong for now but if the group has to continue … we cannot guarantee how much longer.”
  • August 8, 2002: “Several on the team profoundly affected ... some to the point of tears and choking up.”
  • August 9, 2002: “two, perhaps three [personnel] likely to elect transfer” away from the detention site if the decision is made to continue with the CIA’s enhanced interrogation techniques.
  • August 11, 2002: Viewing the pressures on Abu Zubaydah on video “has produced strong feelings of futility (and legality) of escalating or even maintaining the pressure.” Per viewing the tapes, “prepare for something not seen previously.”

In other words, for all the people out there who insist this was not torture, even the CIA people working on the program clearly felt that it went way beyond the line.

Perhaps even more incredible is that Ali Soufan, the former FBI agent who interrogated Abu Zubaydah before the CIA's team of torturers took over, has written a damning article about that program:

I know firsthand how brutal these techniques were—and how counterproductive. In 2002, I interrogated an al-Qaeda associate named Abu Zubaydah. Using tried-and-true nonviolent interrogation methods, we extracted a great deal of valuable intelligence from Zubaydah—including the identities of the 9/11 mastermind Khalid Sheikh Mohammed and the would-be “dirty bomber” Jose Padilla, both of whom would be arrested shortly after. Yet some officials later tried to manipulate the record to make it seem as if this intelligence was gained through torture, even going so far as to misstate the date of Padilla’s arrest, which in fact occurred before Zubaydah or any other al-Qaeda suspect was waterboarded.

Unsurprisingly, the CIA’s own inspector general concluded that the torture program failed to produce any significant actionable intelligence; and I testified to the same effect under oath in the Senate. What’s worse, the program has gotten in the way of justice: To this day, we cannot prosecute terrorists such as the masterminds behind the USS Cole and 9/11 attacks, in large part because the evidence against them is tainted by torture.

Soufan also calls out Haspel's role in destroying the evidence of torture.

In 2005, Jose Rodriguez, the CIA’s counterterrorism chief, ordered the destruction of some 92 videotapes of the harsh methods being used on al-Qaeda suspects at the black site Haspel had once run. Rodriguez issued this order in defiance not only of the CIA’s own general counsel at the time, John Rizzo, but also of a federal court order. And to draft the cable ordering the tapes to be thrown into an “industrial-strength shredder,” Rodriguez turned to his then-chief of staff—Haspel.

Rodriguez was later criticized for his actions by the CIA’s inspector general; but true accountability—for the torture program itself, as well as for the destruction of evidence—has proved elusive. This gives rise to another set of questions that will need to be pressed in the Senate. Was Haspel pleased with the order she drafted, or troubled by it? Does she stand by Rodriguez’s public justification, that he was protecting the lives of his operatives, or his private one, documented in declassified emails, that the tapes would make him and his group “look terrible”? Above all, if the torture program was so valuable and necessary, why destroy the tapes at all?

Soufan also reiterates (as mentioned above) that "many professionals within the agency courageously chose to stand up against the enhanced techniques, walking away from black sites in protest and registering a large number of complaints." Haspel, by contrast, was a willing participant in, and leader of, the program. Soufan also notes that the CIA used the intelligence he obtained without torture and lied to Congress about it, pretending that it came about via its failed and morally repulsive torture program.

Plenty of information about Haspel's involvement in both the torture program and the cover-up is still classified -- leading at least some Senators to call for declassifying that information. Rand Paul has been the most vocal opponent of the appointment of Haspel:

Paul said he is opposing Haspel due to her involvement in the enhanced interrogation program during the George W. Bush administration. He said she showed "joyful glee at someone who is being tortured."

"I find it just amazing that anyone would consider having this woman at the head of the CIA," Paul said.

This is a principled stand. And yet, he is being attacked for it. The most incredible attack came from Rep. Liz Cheney (whose father helped set up and defend the torture program), who directly claimed that Rand Paul questioning whether or not we want a torturer to lead the CIA was "defending and sympathizing with terrorists."

Let the insanity of that statement sink in for a moment. Here you have a member of Congress claiming that a Senator is "defending and sympathizing with terrorists" for merely suggesting that we shouldn't support having someone who ran the CIA torture program as the next CIA director. Even if you believe -- against all evidence, and against basic human decency -- that torture is a good thing to use against anyone, how is it possibly "sympathizing with terrorists" to suggest that such a person is not qualified to be CIA director? Does Cheney also believe that Soufan, the former FBI agent who actually got intelligence out of terrorists without torturing them, is "defending and sympathizing with terrorists" in stating:

And yet today, the candidate for the top job at the agency is someone who willingly participated in both the program and the attempted cover-up. We need to consider what kind of message this sends to people in the intelligence community and the wider government. Do things right, stand up for American values, and you will be ignored. Flout them, and you will be rewarded.

What kind of sick mind is so supportive of torture that she would argue that merely questioning whether this person should head the CIA is somehow siding with the terrorists? Politicians make really stupid statements all the time, but Liz Cheney's statement is positively jaw dropping in its blind obedience to what many have argued are war crimes by the US government. This kind of logic is the kind of logic that leads to very dangerous outcomes. It's beyond Machiavellian. It is not even that the ends justify the means (which would be bad enough), because the ends did not justify the means with the CIA's torture program. It's that merely questioning the means somehow makes you sympathetic to the cause of terrorists. That's a recipe for disaster. It allows no questioning. It allows no dissent. It allows no conscience. It is pure authoritarian evil.


Posted on Techdirt - 14 March 2018 @ 1:24pm

Just As Everyone's Starting To Worry About 'Deepfake' Porn Videos, SESTA Will Make The Problem Worse

from the what-is-congress-thinking? dept

Over the last few months, if you haven't been hiding under a tech news rock, you've probably heard at least something about the growing concerns over so-called "deepfakes": digitally altered videos, usually of famous people's faces edited into porn. Last month, Reddit officially had to ban its deepfakes subreddit. And you can't throw a stone without finding some mainstream media outlet freaking out about the threat of deepfakes. And, yes, politicians are getting into the game, warning that this is going to be used to create fake scandals or influence elections.

But, at the same time, many of the same politicians suddenly concerned about deepfakes are still pushing forward with SESTA. However, as Charles Duan notes, if SESTA becomes law, it will make it much more difficult for platforms to block or filter deepfakes:

Under it, websites that have “knowledge” that some posted material relates to illegal sex-trafficking can be deemed legally responsible for that material. What it means for a website to have “knowledge” remains an open question, especially if the site uses automatic or artificial intelligence systems to review user posts. Therefore, this language opens the door to a potentially wide range of lawsuits and prosecution.

The worst case scenario is that, to avoid having “knowledge” of sex trafficking, Internet services will stop content-moderation entirely. This scenario, which some experts call the “moderator’s dilemma,” would most likely affect smaller websites—including message boards and forums that serve special interests—that can’t afford the advanced filtering systems or armies of content editors that the big sites use. These smaller sites have already faced difficult problems with content moderation, and would be even less likely to spend resources on cleaning up after their users if doing so might lead to a lawsuit.

Duan points out that it goes beyond just the moderator's dilemma aspect of this. Even if sites do decide to moderate, at this point Congress is making it clear that whatever moral panic of the day excites it may lead to new laws demanding action. But if sites are desperately chasing the last problem, they have even less time to deal with the new one.

One of the good things about CDA 230 is that it actually allows platforms to experiment and try out different ways of moderating content. If they fail (as they often do!), they hear about it from their users, or the press, or from politicians. In short, they're allowed to experiment, but have incentives to try to find the right balance. But if Congress enacts carve-outs that make any failure to properly filter a crime, then it becomes almost impossible, and the incentive is just to avoid doing anything at all. That's not at all healthy.


Posted on Techdirt - 14 March 2018 @ 9:34am

Sex Workers And Survivors Raising The Alarm About SESTA: It Will Literally Put Their Lives In Danger

from the grandstanding-putting-people's-lives-at-risk dept

Last week I asked for anyone to explain how SESTA would (in any way) reduce sex trafficking. Not a single person even tried to answer. Because there is no answer. Sex trafficking is already illegal, and yet people do it. Nothing in SESTA makes sex trafficking more illegal. Nothing in SESTA makes it easier for law enforcement to find or crack down on sex trafficking, or to help the victims of sex trafficking. Indeed, as we've detailed, it does the exact opposite. It puts criminal liability on internet sites that are somehow used in conjunction with prostitution (going beyond just trafficking, thanks to the FOSTA addition to SESTA), and uses a vague, poorly drafted, unclear "knowledge" standard that none of SESTA's supporters can adequately explain or define. As we noted, from our experience in covering what happens when you pin liability on a platform instead of its users -- especially using vague and unclear standards -- bad things usually result.

But over the past few days, it's becoming increasingly clear just how dangerous this bill could actually be. Last week we wrote about Alana Massey's powerful article on just how much damage SESTA could actually do to sex workers, including shutting down the various resources that they use to protect themselves, keep safe, or even get information to get out of sex work (for those who wish to do so). It also will mean that sites that provide tools and information for victims of sex trafficking may also be forced to shut down. It's hard to see how that's a good thing.

Over at Jezebel, Tracy Clark-Flory notes that the bill "is a disaster for basically everything it touches," and highlights how sex workers and survivors have set up their own Survivors Against SESTA website that lays out in stark detail just how dangerous SESTA will be for everyone.

Shutting down websites that sex workers use to work indoors and screen clients more safely does not stop traffickers. To the contrary, this only drives sex workers, including those who are trafficked, to find clients on the street where they face higher rates of violence, HIV, Hepatitis C and sexually transmitted infections, and exploitation.

These websites hold vital resources for trafficking investigations.

There are no industry standards to stop traffickers from using websites for exploitation. This legislation does not get us closer to that goal, and instead makes it harder for police, prosecutors, or websites to identify and help victims.

They also note that SESTA will disproportionately harm those in the LGBTQ community. It highlights a letter from the National Center for Transgender Equality, the National Center for Lesbian Rights and a bunch of other groups noting:

Meaningful anti-trafficking work should not make those in the sex trade more susceptible to violence and exploitation. After the closure of RedBook and Rentboy.com, sex workers were instantly thrown from the online spaces and communities which provided the ability to screen clients, find out safety and health information and form community. The ability to access online platforms to advertise means that sex workers are able to screen clients for safety, negotiate boundaries such as condom use, and work in physically safer spaces. A 2017 study from West Virginia University and Baylor University found a 17% drop in female homicide rates correlated to Craigslist opening its Erotic section – because it made sex work safer. Taking away online platforms moves sex workers into more vulnerable and violent conditions, including street-based work where rates of physical and sexual violence and exploitation are significantly higher.

Over and over again, supporters of SESTA have framed anyone against it as somehow being a supporter of sex trafficking -- which is both wrong and blatantly intellectually dishonest. One can be very much against the exploitation of trafficking victims while simultaneously recognizing that SESTA and FOSTA are horrible ways to try to deal with those issues, and that those bills will not help, and will cause an awful lot of very real damage, including putting people's lives at risk. In response, some SESTA supporters will rightly claim that sex trafficking victims' lives are also at risk -- which is true... but that brings us back to the simple fact that nothing in SESTA actually helps victims of sex trafficking. It just makes it harder for law enforcement to find them, help them, and arrest those responsible for the trafficking in the first place. Instead, it gives law enforcement incentives to go after internet companies, while the sex trafficking continues, often in places that are more difficult for law enforcement to track, and while making it much harder for those involved to get access to the information they need.

Elsewhere, groups that work with victims of sex trafficking are speaking out on how much damage these bills would do to the actual victims. They even point out that the demonized Backpage was essential in helping bring traffickers to justice:

Megan Mattimoe, executive director and staff attorney at Advocating Opportunity, which assisted 150 victims of trafficking this past year, says she has seen Backpage provide information about trafficking victims captured in ads along with data on advertisers to aid in prosecutions. “In our cases,” she says, “Backpage not only complied with prosecutors’ requests, but they would also send someone to trial to testify that those business records were authentic.” Since Backpage closed its adult advertising section in January 2017, Mattimoe says, her organization has seen “victims advertised on sites housed outside the U.S.,” where federal prosecutors have neither subpoena power nor Backpage’s cooperation.

Again, it is difficult to see how this is helping victims of sex trafficking in any way at all.

Many more people are speaking out on Twitter using the hashtag #LetUsSurvive. One of the organizers of that campaign, Lola Li, gave a fascinating interview in which she discusses why she is so worried about SESTA/FOSTA and the impact it will have on survivors and marginalized communities:

...the laws that prosecutors need to go after traffickers ALREADY EXIST. We don’t need more laws. That’s not going to address the root causes of the problem. This bill is not about fighting trafficking. It’s a way for self-interested politicians and self-interested “anti-trafficking” (rolling my eyes, as if anyone could be pro-trafficking) groups to pat themselves on the back while actually doing nothing to help.

Unfortunately, from all indications, almost no one in the Senate cares about what this bill will actually do. They've decided that since the bill says it's against sex trafficking, it must actually be against sex trafficking, and no matter how many times people point to the damage it will actually do to victims, they're going to vote for it, and then hide from and ignore the very real damage they've created.


Posted on Techdirt - 13 March 2018 @ 12:07pm

Twitter's Attempt To Clean Up Spammers Meant That People Sarcastically Tweeting 'Kill Me' Were Suspended

from the not-helpful dept

Just recently, Senator Amy Klobuchar suggested that the government should start fining social media platforms that don't remove bots fast enough. We've pointed out how silly and counterproductive (not to mention unconstitutional) this likely would be. However, every time we see people demanding that these platforms better moderate their content, we end up with examples of why perhaps we really don't want those companies to be making these kinds of decisions.

You may have heard that, over the weekend, Twitter started its latest sweep of accounts to shut down. Much of the focus was on so-called Tweetdeckers, basically a network of teens using Tweetdeck software to retweet accounts for money. In particular, it was widely reported that a bunch of accounts known for copying (without attribution) the marginally funny tweets of others, and then paying "Tweetdeckers" for mass promotion, were shut down en masse over the weekend.

Twitter noted that the sweep was about getting rid of spammers:

A spokesperson for Twitter told HuffPost on Saturday that the sweep was a part of a broader company effort to fight spam on the platform. Last month, Twitter announced it would be making changes to TweetDeck and restricted people from using the app to retweet the same tweet across multiple accounts.

“Keeping Twitter safe and free from spam is a top priority for us,” the company said in a February blog post. “One of the most common spam violations we see is the use of multiple accounts and the Twitter developer platform to attempt to artificially amplify or inflate the prominence of certain Tweets.”

Fair enough. But some people noticed that not everyone swept up in these mass suspensions was involved in such shady practices. The Twitter account @madblackthot2, whose main account (drop the "2") appears to have been temporarily suspended, put together a fascinating thread showing how Twitter appeared to be suspending accounts based on keywords around self-harm, with a few different examples of people having their accounts suspended for old tweets in which they sarcastically said "kill me."

There are more examples as well. Not everyone who tweets "kill me" is getting suspended, so at least the algorithm is slightly more sophisticated than that. One explanation given is that when a user is reported for certain reasons, the system then searches through past tweets for specific keywords. Perhaps that works in some contexts, but clearly not all of them.
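
As a rough illustration of why that approach misfires, here's a minimal sketch of a report-triggered keyword scan -- purely hypothetical, since Twitter hasn't published its actual rules -- showing how bare keyword matching flags sarcasm and idiom right alongside genuine cries for help:

```python
# Hypothetical sketch of a report-triggered keyword scan -- not Twitter's
# actual system. Substring matching has no notion of sarcasm or idiom,
# which is exactly how joking "kill me" tweets end up suspended.

SELF_HARM_KEYWORDS = ("kill me", "kill myself")  # assumed keyword list

def tweets_to_flag(reported_user_history: list[str]) -> list[str]:
    """Return every past tweet that trips the keyword filter."""
    return [
        tweet
        for tweet in reported_user_history
        if any(kw in tweet.lower() for kw in SELF_HARM_KEYWORDS)
    ]

history = [
    "monday 7am meeting, kill me",       # sarcasm: flagged anyway
    "this commute is going to kill me",  # idiom: flagged anyway
    "nice weather today",                # ignored
]
print(tweets_to_flag(history))  # both joke tweets come back as "self-harm"
```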

And, again, we end up in a situation where demanding that a social media platform do "more moderation!" to kill off bad accounts leads to lots of collateral damage in the dumbest possible way. And yet, at the same time, people are quickly finding new election propaganda Twitter bots sprouting up like weeds.

This is not to say that Twitter shouldn't be doing anything. The company is clearly trying to figure out what to do and how to handle some of these issues. The issue is that companies are inevitably going to be bad at this. And, yet, the constant push from politicians is to make them more and more legally responsible for not fucking up such things -- which is basically an impossible task. If Twitter were legally mandated to remove certain types of accounts, it's likely that we'd end up seeing many, many more examples of bad takedowns a la the "kill me" suspensions.

19 Comments | Leave a Comment..

Posted on Free Speech - 13 March 2018 @ 9:33am

Trump's Lawyers Apparently Unfamiliar With Streisand Effect Or 1st Amendment's Limits On Prior Restraint

from the someone-send-them-the-big-lebowski dept

Over this past weekend, it was revealed that (1) the adult film actress Stormy Daniels (real name: Stephanie Clifford), who has claimed she had an affair with Donald Trump and then was given $130,000 to stay silent about it, is scheduled to appear on 60 Minutes next weekend and (2) President Trump's lawyers are considering going to court to block CBS from airing it. This is silly, dumb and not actually allowed by the law.

Lawyers associated with President Donald Trump are considering legal action to stop 60 Minutes from airing an interview with Stephanie Clifford, the adult film performer and director who goes by Stormy Daniels, BuzzFeed News has learned.

“We understand from well-placed sources they are preparing to file for a legal injunction to prevent it from airing,” a person informed of the preparations told BuzzFeed News on Saturday evening.

Someone should send his lawyers the Supreme Court's ruling in New York Times v. United States -- the case in which the Nixon White House tried (and failed) to block newspapers from publishing the Pentagon Papers -- which made clear that blocking such a publication would be obvious prior restraint. As Justice Black noted in a concurring opinion:

Both the history and language of the First Amendment support the view that the press must be left free to publish news, whatever the source, without censorship, injunctions, or prior restraints.

In the First Amendment, the Founding Fathers gave the free press the protection it must have to fulfill its essential role in our democracy. The press was to serve the governed, not the governors. The Government's power to censor the press was abolished so that the press would remain forever free to censure the Government. The press was protected so that it could bare the secrets of government and inform the people. Only a free and unrestrained press can effectively expose deception in government. And paramount among the responsibilities of a free press is the duty to prevent any part of the government from deceiving the people and sending them off to distant lands to die of foreign fevers and foreign shot and shell. In my view, far from deserving condemnation for their courageous reporting, the New York Times, the Washington Post, and other newspapers should be commended for serving the purpose that the Founding Fathers saw so clearly. In revealing the workings of government that led to the Vietnam war, the newspapers nobly did precisely that which the Founders hoped and trusted they would do.

If that's too complicated, someone could just send them renowned legal scholar Walter Sobchak's analysis of the issue of prior restraint. That might be more their speed.

Of course, even without it being blatantly unconstitutional, there's the general Streisand Effect nature of trying to stifle the interview -- which seems guaranteed to just make that many more people interested in what Clifford has to say about the President. One could argue that this was already going to get a ton of attention so perhaps it wouldn't make that big a difference on that front, but don't underestimate just how much free advertising this gets the interview... and how it makes more people wonder why the President is so focused on silencing Clifford.

72 Comments | Leave a Comment..

Posted on Techdirt - 12 March 2018 @ 12:03pm

If The US Government Can't Figure Out Who's A Russian Troll, Why Should It Expect Internet Companies To Do So?

from the it's-not-that-easy dept

A few weeks back, following the DOJ's indictment of various Russians for interfering in the US election, we noted that the indictment showed just how silly it was to blame various internet platforms for not magically stopping these Russians, because in many cases the Russians bent over backwards to appear to be regular, everyday Americans. And now, with pressure coming from elected officials to regulate internet platforms if they somehow fail to catch Russian bots, it seems worth pointing out the flip side of the "why couldn't internet companies catch these guys" question: why couldn't the government?

Declan McCullagh has an excellent article over at Reason pointing out that all these government officials trying to blame internet companies should probably look a little more closely at their own houses first.

In the bowels of Washington officialdom, despite billion-dollar intelligence budgets and a peerless global surveillance apparatus, very little appears to have been done. No Russian nationals associated with the disinformation campaign were deported from the United States. (Three were improvidently granted U.S. visas.) No official warnings appear to have been sent to social networks or payment processors. And no indictments were made until a few weeks ago.

Facebook notified the FBI about Russian activity in June 2016, but no U.S. law enforcement or intelligence officials visited the social media company to compare notes. During the 2016 presidential campaign, the State Department pulled the plug on a project to combat Russian disinformation. The New Yorker concluded that the FBI, despite its $9 billion budget and 35,000 employees, simply "is not up to the job of detecting and countering Russian disinformation." The Washington Post summarized the bureaucratic failures: "Top U.S. policymakers didn't appreciate the dangers, then scrambled to draw up options to fight back. In the end, big plans died of internal disagreement."

So it's a surprise to see senior members of the House and Senate intelligence committees, which are charged with providing "vigilant legislative oversight" of the nation's spy and counter-espionage agencies, pointing fingers approximately 2,800 miles westward instead.

Of course, you can argue that now, way after the fact, the DOJ has brought out this indictment. But, by the same token, most of the internet platforms have also now been able to research and investigate what happened. Looking back retrospectively is quite different from proactively determining any of this on the fly.

McCullagh notes, correctly, that this doesn't mean internet platforms should do nothing. They obviously all are scrambling to figure out what to do going forward. But it does raise questions as to why the government seems to think the internet platforms can magically figure all of this out when they themselves could not. And, it's particularly telling that it's the two Congressional Intelligence Committees, which are supposed to oversee the intelligence community -- but usually just bolster or shield the intelligence community from criticism -- that are doing the most finger pointing. Perhaps it's more because they want to distract from the failures of the intelligence community.

I'm sure that some will argue some version of the "nerd harder" excuse for why internet companies should be better at detecting foreign influence than the NSA, but (1) any "nerd harder" argument is automatically void for being specious and (2) come on, the NSA has much greater ability to connect these threads than any internet platform, no matter what some people will tell you.

34 Comments | Leave a Comment..

Posted on Techdirt - 12 March 2018 @ 9:19am

Killing The Golden Goose (Again); How The Copyright Stranglehold Dooms Spotify

from the because-of-course dept

For many, many, many years, we've talked about how the legacy entertainment industry seeks to kill the Golden Goose by strangling basically any innovation that is helping it adapt to new technologies. We've seen the same pattern over and over and over again. The simple version of it goes like this: the legacy entertainment industry sits around and whines about how awful the internet is because it's undermining the gatekeeper business model that extracts massive monopoly rents, but does nothing to actually adapt. Eventually, companies come along and innovate, creating a service that (a) people want and that (b) actually is legal and pays the legacy companies lots of money. This should be seen as a win-win for everyone.

But the legacy companies get jealous of the success of the innovator who did the actual work. They start to overvalue the content and undervalue the innovative service. The short version of this tends to pop up when a legacy entertainment exec says something like "why is innovative company x making so much money when all it's doing is making use of our content?" Of course, if the service part was so obvious, so easy, and so devoid of value, then the legacy entertainment companies would have done it themselves. But they didn't. So with the jealousy comes the inevitable demand for more cash from the innovator. And, usually, demands for equity too, which the innovator has basically no ability to resist, because they need to have a "good" relationship with the content companies. But the demands for more (and the jealousy) never go away.

The end result, of course, is that tons of innovative businesses that created amazing services people liked get crushed. Completely. Venture capitalist David Pakman (founder of MyPlay, one of those eventually crushed companies, and one I used way back in the day) detailed how the legacy recording industry used this strategy to bury more than 150 companies over the past two decades. It's the same story over and over again. Any company that becomes too successful gets squeezed to death by the legacy copyright holders, who whine the whole time about not being paid enough. As Pakman wrote:

The music industry complains loudly about the “leverage” these giants have over them. First they criticized Apple iTunes for not agreeing to raise prices above $0.99, then they went after Pandora and other webcasters by insisting webcasting rates were too low, then they attacked Spotify for not paying them enough, then they insisted Apple Music pay them more than Spotify did, and now, just as the YouTube licensing agreements are coming up for renewal, they complain YouTube doesn’t pay them as much as Spotify.

But this is a “crisis” of their own making. Many of us argued for years that it was in the industry’s best interest to create a healthy ecosystem of hundreds or thousands of successful companies, all enjoying successful businesses around music. But those arguments fell on deaf ears, and instead the industry fought repeatedly to raise royalty rates over and over again, despite evidence that not a single company ever achieved profitability.

In my mind, it would have been in the best long-term interests of the recorded music business to enable the widespread success of thousands of companies, each paying fair but not bone-crushing royalties back to labels, artists and publishers. But the high royalty rates imposed upon startups, even after clear signs over the past 19 years that the strategy killed companies, has prevented a healthy ecosystem from emerging. It’s a bed the music industry made for itself, and now it is left to lie in it.

Not only does this crush lots of interesting companies, the history of this sort of destruction has served as a giant warning sign to entrepreneurs. Years back, we wrote about entrepreneur Tyler Crowley explaining how this history makes entrepreneurs steer clear of doing anything with music. His original post is sadly gone from the internet, but we've still got some quotes that highlight the key points. His argument was that entrepreneurs face various "islands" of opportunity, each with different rules and conditions for "docking":

For tech folks, from the 35,000' view, there are islands of opportunity. There's Apple Island, Facebook Island, Microsoft Island, among many others and yes there's Music Biz Island. Now, we as tech folks have many friends who have sailed to Apple Island and we know that it's $99/year to doc your boat and if you build anything Apple Island will tax you at 30%. Many of our friends are partying their asses off on Apple Island while making millions (and in some recent cases billions) and that sure sounds like a nice place to build a business.

And what about "Music Biz Island"? Well, the labels have made it clear you don't want to dock there.

Now, we also know of Music Biz Island which is where the natives start firing cannons as you approach, and if not stuck at sea, one must negotiate with the chiefs for 9 months before given permission to dock. Those who do go ashore are slowly eaten alive by the native cannibals. As a result, all the tugboats and lighthouses (investors, advisors) warn to stay far away from Music Biz Island, as nobody has ever gotten off alive. If that wasn't bad enough, while Apple and Facebook Island are built with sea walls to protect from the rising oceans, Music Biz Island is already 5 ft under and the educated locals are fleeing....

That doesn't seem healthy for music. And that brings us around, of course, to Spotify. The music streaming giant has filed to go public, and Ben Thompson over at Stratechery has done a bang-up job highlighting why even this hugely "successful" music platform looks like a disaster from a standard internet investment perspective. Its margins suck. Its margins suck so badly it's still unclear if Spotify can ever make money. It's the same old story we described above, where the labels (who own a large chunk of Spotify -- remember the equity demands?) are crushing the company in a way that sets it apart from basically every other successful internet company. Internet companies are built on the idea of huge margins, because the marginal cost of serving one more customer is minimal.

But not with Spotify. Spotify has to hand over so much of its revenue to the labels that it's nearly impossible for it to ever be a viable business. Ben points out that the revenue and cost numbers show Spotify operating like any "well-managed SaaS company." But it has a "marginal cost problem," in that the label deals (even the ones restructured recently) guarantee that nearly all of the money Spotify takes in... goes right out the door to the labels.

Spotify’s margins are completely at the mercy of the record labels, and even after the rate change, the company is not just unprofitable, its losses are growing, at least in absolute euro terms.

Ben has all this laid out in his usual nice charts and graphs, which are worth checking out. A toy version of the math below shows why that marginal cost structure is so deadly.
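To make the contrast concrete, here's a sketch with made-up numbers -- illustrative only, not Spotify's (or anyone's) actual financials. A typical SaaS company has a big fixed cost base and a tiny marginal cost per user, so margins improve with scale; a revenue-share deal sends a fixed cut of every incremental dollar out the door, so scale never helps:

    # Toy margin comparison -- all numbers are invented for illustration.

    def saas_margin(users, price=10.0, fixed_costs=2_000_000, per_user_cost=0.25):
        """Typical SaaS: large fixed costs, tiny marginal cost per extra user."""
        revenue = users * price
        costs = fixed_costs + users * per_user_cost
        return (revenue - costs) / revenue

    def revenue_share_margin(users, price=10.0, label_share=0.75):
        """Revenue share: every extra dollar of revenue sends ~75 cents out."""
        revenue = users * price
        return (revenue - revenue * label_share) / revenue

    for users in (500_000, 5_000_000, 50_000_000):
        print(f"{users:>10,} users  saas: {saas_margin(users):6.1%}  "
              f"rev-share: {revenue_share_margin(users):6.1%}")

    # The SaaS margin climbs from 57.5% toward ~97% as users grow; the
    # revenue-share margin stays pinned at 25% no matter the scale. That is
    # the "marginal cost problem": growth never improves the economics.

The exact percentages don't matter; the shape does. Under a revenue-share deal, adding users adds cost in lockstep with revenue, which is why Spotify's losses can grow even as its business does.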

But what this all comes down to, yet again, is the stranglehold of a messed-up copyright system. The labels can kill the golden goose over and over and over again for one simple reason: the artificial monopoly handed to them by the copyright system, and the power it bestows. It's a market distortion. This isn't to say that there shouldn't be any copyright (let's see if our usual trolls make it this far in the post or if they've already dashed off a comment about how I want no copyright at all...). But it certainly demonstrates how the copyright system is so heavily weighted in favor of copyright holders that they can strangle basically any business that touches on copyright, making those markets entirely different from any similar market that isn't encumbered by copyrights and by legacy businesses who, having failed to adapt themselves, now demand a king's ransom from the companies that did all the adapting for them.

92 Comments | Leave a Comment..

Posted on Techdirt - 9 March 2018 @ 3:25pm

Wikimedia's Transparency Report: Guys, We're A Wiki, Don't Demand We Take Stuff Down

from the good-for-them dept

Wikimedia, like many other internet platforms these days, releases a transparency report discussing various efforts to take down content or identify users. We're now all quite used to what such transparency reports look like. However, Wikimedia's latest is worth reading as a reminder that Wikipedia is a different sort of beast. Not surprisingly, it gets a lot fewer demands, but it also abides by very few of those demands. My favorite part is the fact that people demand Wikimedia edit or remove content. It's a wiki. Anyone can edit it. (Though if your edits suck, you're going to be in trouble.) And yet Wikimedia still receives hundreds of such demands. And doesn't comply with any of them. Including ones from governments. Instead, Wikimedia explains to them just how Wikipedia works.

From July to December of 2017, we received 343 requests to alter or remove project content, seven of which came from government entities. Once again, we granted zero of these requests. The Wikimedia projects thrive when the volunteer community is empowered to curate and vet content. When we receive requests to remove or alter that content, our first action is to refer requesters to experienced volunteers who can explain project policies and provide them with assistance.

On the copyright front, Wikimedia received only 12 requests. I actually would have expected more, but the community is pretty strict about making sure that only content that can legitimately be on the site gets there. Only 2 of the 12 takedowns were granted.

Wikimedia projects feature a wide variety of content that is freely licensed or in the public domain. However, we occasionally will receive Digital Millennium Copyright Act (DMCA) notices asking us to remove content that is allegedly copyrighted. All DMCA requests are reviewed thoroughly to determine if the content is infringing a copyright, and if there are any legal exceptions, such as fair use, that could allow the content to remain on the Wikimedia projects. From July to December of 2017, we received 12 DMCA requests. We granted two of these. This relatively low amount of DMCA takedown requests for an online platform is due in part to the high standards of community copyright policies and the diligence of project contributors.

This is actually really important, especially as folks in the legacy entertainment industry keep pushing for demands that platforms put in place incredibly expensive "filter" systems. Wikipedia is one of the most popular open platforms on the planet. But it would make no sense at all for it to invest millions of dollars in an expensive filtering system. But, since the whining from those legacy industry folks never seems to recognize that there's a world beyond Google and Facebook, they don't much consider how silly it would be to apply those kinds of rules to Wikipedia.

Also interesting is that Wikipedia has now been dealing with some "Right to be Forgotten" requests in the EU. It notes that in the six month period covered by the transparency report they received one such request (which was not granted):

From July to December of 2017, the Wikimedia Foundation received one request for content removal that cited the right to erasure, also known as the right to be forgotten. We did not grant this request. The right to erasure in the European Union was established in 2014 by a decision in the Court of Justice of the European Union. As the law now stands, an individual can request the delisting of certain pages from appearing in search results for their name. The Wikimedia Foundation remains opposed to these delistings, which negatively impact the free exchange of information in the public interest.

I don't envy whatever person eventually tries to go after Wikimedia in court over a Right to be Forgotten claim -- though it feels inevitable.

There's more to look at in the report, but it is interesting to look over this and be reminded that not every internet platform is Google or Facebook, and demanding certain types of solutions that would hit all platforms... is pretty silly.

27 Comments | Leave a Comment..

Posted on Techdirt - 9 March 2018 @ 9:43am

If You Think SESTA Will Help Victims Of Sex Trafficking, Read This Now

from the congress-is-about-to-make-a-huge-mistake dept

Earlier this week, I asked for anyone to explain how SESTA would actually stop any sex trafficking. No one had an answer. In that post, I detailed how it would actually make it harder to stop sex trafficking on various platforms. That's not because I'm knowledgeable about sex trafficking -- but I have spent 20 years documenting what happens when you make platforms liable for the actions of their users. And the result is never what the people pushing for such liability expect. It's almost always incredibly counterproductive and dangerous.

But someone who does understand issues related to sex work and sex trafficking is Alana Massey, who has written a really fantastic piece detailing just how much harm SESTA will do to both sex workers and victims of sex trafficking. We've already discussed how FOSTA expands the scope of the law away from just "sex trafficking" to cover all sex work. And in bolting that together with SESTA, which punches a giant hole (surrounded by vague untested standards) into CDA 230, it also creates a ridiculous moderator's dilemma for any website. Massey details what that will actually mean.

... the new legislation would threaten to criminalize peer-to-peer resource sharing that makes people in sex work safer and more connected. The very websites that these bills enable law enforcement to criminalize are precisely where I found the generous communities and actionable advice I needed to get out of and avoid exploitative sex work situations going forward. Though the bill is meant to target sites hosting sex work advertisements, it covers online forums where sex workers can tip each other off about dangerous clients, find emergency housing, get recommendations for service providers who are sex worker-friendly, and even enjoy an occasional meme. These are often on the same websites where advertisements are hosted.

Before you say, "Just get rid of the ads, then," know that online ads themselves are one of the greatest tools for protecting yourself as a sex worker: They make it possible to screen clients, arrange safe indoor working conditions, and establish a communication record with clients that street-based work doesn't provide.

Massey has a lot more in that article about how important CDA 230 is, but that's the kind of stuff we cover all the time. What's more interesting are the details about just how much damage SESTA will do directly to the people it purports to "help." Oh, and she also debunks the moral panic leading up to the bill -- specifically the claim, repeated by multiple politicians and famous people pushing SESTA, that it's as easy to engage in sex trafficking today as it is to order a pizza. She tested out that claim:

The premise is that right this second and in your own hometown, it is totally possible to go online and buy a child for sexual slavery — that it's "as easy as ordering a pizza," according to Schumer. I put this claim to the test in my local area and surrounding counties via a set of searches on Backpage that have likely landed me on an FBI watch list. Not one lousy "kid," "minor," "teen," or "child" was available for purchase. Meanwhile, Domino's is at my house within 21 minutes of me placing an order, come hell or high water. It is almost as if perpetrators of human trafficking aren't really all that likely to advertise using photos of shackled minors.

And that leads into a much bigger point about just how damaging SESTA is going to be to those victims. I won't quote the whole thing (go read it!), but she directly takes on the propaganda film, I Am Jane Doe, which has been used widely to support SESTA, to explain how SESTA is likely to create more Jane Does, not fewer.

The term "Jane Doe" has two main uses: The first is referring to a woman in court, often a victim of sexual violence, who is not being legally identified in the proceedings. The second is referring to the unidentified corpses of women murder victims. These bills seek to destroy the few lines of defense people in the sex trades have to protect themselves from becoming Jane Does. These online resources are how I found people who could help me transition out of sex work safely. They didn't smuggle me out of a lurid foot fetish party in a suitcase under cover of night. They introduced me to editors and gave me some nice pointers on what I could write about when just starting out. My career trajectory was every bit as boring as most career trajectories are: often characterized by boredom and punctuated by both extreme satisfaction and intense desperation.

I don't feel especially moved to talk about the exploitative scenarios I mentioned above just so I can be summarily dismissed as either a liar or too damaged to speak for myself. I have turned my traumas into practical guidance for new sex workers to help them avoid similar situations. The legislation these celebrities are championing may soon make this healing and sharing process a criminal act punishable by years in prison.

There's so much more in the article worth reading -- including a detailed debunking of what various celebrities have been saying in support of SESTA. It's a must read for anyone who thinks SESTA is going to save lives. In all the writing I've done about SESTA, it's been from the perspective of wonky knowledge of how "intermediary liability protection" laws work -- which made it clear to me that laws like SESTA and FOSTA would backfire and be abused in various ways. Massey's piece has opened my eyes to the very specific ways in which these laws will create real harm for the victims the bill is supposedly trying to help. Please read it.

11 Comments | Leave a Comment..

Posted on Free Speech - 9 March 2018 @ 8:31am

Keeper Security Reminds Everyone Why You Shouldn't Use It; Doubles Down On Suing Journalist

from the which-is-harming-its-reputation-more? dept

Back in December, we wrote about a blatant SLAPP suit filed by Keeper Security against Ars Technica and its reporter Dan Goodin. Keeper makes a password manager, and Goodin wrote an article based on a flaw discovered by Google's Tavis Ormandy. The flaw impacted the browser extension that works with Keeper's application. Keeper took offense at certain elements of the article, in particular the idea that Microsoft had forced people to install the flawed software (since the flaw was actually in the browser extension, which is optional). Keeper Security also felt that the article implied that users of its software were vulnerable to a broad attack that put their passwords at risk, when the details suggested a narrower (but still pretty bad) flaw that would require a specific set of circumstances to expose passwords -- and there was no evidence that such a set of circumstances existed.

As we noted, however, the lawsuit was clearly bullshit: an attempt to stifle negative press about a pretty bad flaw. In February, Ars Technica and Goodin filed both a Motion to Dismiss and a Motion to Strike under California's anti-SLAPP law. Both are well argued and worth reading. The Motion to Dismiss hits all the expected points on why there's no legitimate defamation claim. The summary covers the highlights:

Defendants truthfully reported the findings of a noted Google researcher that there was a security vulnerability in Plaintiff’s password manager product, which had been bundled with Microsoft’s Windows 10 operating system. Plaintiff does not dispute that the flaw existed. Nevertheless, in response to Defendants’ truthful report, Plaintiff tried to bully Mr. Goodin into editing his news article to use language more to Plaintiff’s liking; Mr. Goodin agreed to make certain edits, and declined others, standing by the accuracy of the reporting.

The would-be “inaccuracies” Plaintiff identifies in the article are – at best – of secondary importance, and do not affect the article’s true “gist or sting”; for that reason alone, the Complaint fails as a matter of law. Furthermore, most of the statements that the Complaint alleges are “false and misleading” don’t have anything to do with Plaintiff, but rather, Microsoft. Such statements are not “of and concerning” Plaintiff and cannot be the basis for a defamation claim. Still other statements are subject to an innocent construction and are pure opinion, and not actionable under Illinois law for those additional reasons. Simply put, Defendants’ article uttered no falsehood that could have defamed Plaintiff. Nor does Plaintiff remotely plead publication with actual malice as required by the First Amendment.

Plaintiff’s assertion that “[t]he goal, and result, of the Article was to injure Keeper and its employees, and disparage Keeper’s products” ... is baseless hyperbole. The fact is, Plaintiff brought this lawsuit seeking to punish, and ultimately enjoin, publication of essential journalism on an matter of vital public concern – cybersecurity – involving a conceded vulnerability in Plaintiff’s product. The technology community is open and transparent in policing such vulnerabilities, and rightly so. Plaintiff, above all, should be interested in ensuring consumers are protected from potential threats – not in using litigation to chill public discussion of such threats. Permitting this case to go forward would not only be contrary to law, it would have a profoundly negative impact on important cybersecurity research and reporting generally.

More specifically, the motion highlights that all of the statements at issue fail to meet the standards of defamation: they are substantially true, subject to "innocent construction" (that is, they can easily be read in a non-defamatory manner), not even about Keeper Security (but about Microsoft), or non-actionable opinions. Furthermore, the motion notes that Keeper Security fails to plead actual malice, which is necessary because Keeper is a public figure ("actual malice" being the Supreme Court's required standard for defamation cases involving public figures, with a specific definition: content the authors knew was false, or published with "reckless disregard" for whether or not it was false).

It's a pretty typical and well-pleaded motion to dismiss. As for the anti-SLAPP motion, Ars/Goodin's lawyers decided to argue that choice-of-law principles require California's anti-SLAPP law to apply. Illinois, where Keeper is based and where the lawsuit was filed, does have its own anti-SLAPP law, but it's weaker than California's. I'm of the belief that it's proper to apply the anti-SLAPP law of the speaker's state (even when applying the defamation law and venue of the plaintiff), since that state has the greater interest in protecting the First Amendment rights of its residents, and many courts have agreed. But not all.

Keeper has now (not surprisingly) opposed both motions (here's the opposition to the MTD and here's the opposition to the anti-SLAPP claim, both initially spotted by Zack Whittaker). Both of those filings are highly unconvincing.

Its opposition to the motion to dismiss is basically to just repeat certain phrases that it insists are defamatory -- taking them completely out of context. This is pretty weak, because once the statements are inevitably put back into context, it's difficult to see how Keeper has much of a case. It admits that Goodin corrected certain points upon learning of errors, and what's left are statements that are either mostly true or are clearly opinion. For example, this statement is one that Keeper insists is defamatory:

The flaw was almost identical to one the same researcher disclosed in the same manager plugin 16 months ago that allowed websites to steal passwords.

But that's clearly an opinion based on disclosed facts about the two flaws. It's not defamatory at all. Also, the following statement is listed by Keeper as being defamatory, but again, is clearly a statement of non-actionable opinion:

If an outsider can find a bug similar to the 16-month-old vulnerability so quickly and easily, it stands to reason people inside the software company should have found it first.

That Keeper is continuing to push these claims reflects really, really poorly on the company. It insists it had to file this lawsuit to protect its reputation, but it seems quite clear that the lawsuit itself is what's harming Keeper's reputation. As a fan of password managers, I will never recommend Keeper to anyone. And not because of the flaws -- every one of these products discovers flaws eventually -- but because it's suing a journalist for covering them. So the following statement by Keeper in its opposition is pretty ridiculous:

The users of Keeper’s product rely on the integrity of the Keeper product and the reputation of Keeper in deciding to use the Keeper software.

Right. And suing journalists for writing about your flaws is a pretty damn good way to kill that reputation. As we pointed out in our original post on the lawsuit, lots and lots of security experts publicly suggested people stay away from Keeper because of the lawsuit not because of the flaw.

Keeper also claims it's not a public figure, and thus doesn't need to show actual malice (though it claims it can). First of all, it absolutely is a public figure under defamation law. As Ars/Goodin's motion points out, the company itself touts how it's an "innovator and leader" and "one of the world's most downloaded." Second, the claim that it can show actual malice is basically laughable. Goodin directly responded to multiple requests for updates from Keeper, changed a few things when he found the company's argument compelling, and declined to change parts he didn't believe needed changing. That's not what someone does when they're looking to publish false information. Those are the actions of someone trying to get the story right. That's not actual malice. Just because Keeper disagrees with Goodin's editorial choices does not make them actionable.

In response to the anti-SLAPP argument, Keeper basically mocks the idea that California law could possibly apply in Illinois. But, again, it's not such a crazy idea. Plenty of courts have ruled that the speaker's location is the proper one to use for anti-SLAPP laws (even when the plaintiff's state's defamation laws are used).

Still, the larger issue stands. A software company has filed a clear SLAPP suit against a reporter for reporting some bad news about its software. That's horrific, and it should tell you all you need to know about Keeper Security and whether or not to use its software.

Read More | 13 Comments | Leave a Comment..

Posted on Techdirt - 8 March 2018 @ 3:40pm

Playboy Decides Not To Appeal Silly Boing Boing Lawsuit In The Most Petulant Manner Possible

from the nice-one-guys dept

Well, that all happened remarkably quickly. In November, we wrote about Playboy filing a particularly ridiculous lawsuit against the blog Boing Boing for linking to (but not hosting) an Imgur collection and a YouTube video highlighting basically all Playboy centerfold images. Boing Boing explained to the court in January that linking is not infringement, and the judge dismissed the case in February. And while the court left it open for Playboy to file an amended complaint, it also made clear that Playboy had basically no chance of winning the case.

So it should be of little surprise that the case is now officially over, with Playboy releasing an impressively silly statement to Cyrus Farivar over at Ars Technica:

Playboy's dispute with Boing Boing is about one party (Boing Boing) willfully profiting from infringement upon the intellectual property of another party (Playboy). This is not David vs. Goliath, it is not about the first amendment and it is not an attack about linking. It is about preventing a party from driving its profits off of piracy.

Despite being informed that it was promoting infringement, Boing Boing has left up its post to try to make even more money. It is unfortunate that a site that has at times created original content is fighting so hard for a right to profit from infringing content.

Boing Boing has argued to the court that it should not be legally responsible for making money off of content it knows to be infringing as long as Boing Boing is not the original infringer. That is not editorial integrity. It is not ethical journalism. It is supporting and contributing to piracy and content creators should not tolerate it.

Although we are not refiling an amended complaint at this time, we will continue to vigorously enforce our intellectual property rights against infringement.

John Vlautin / Corporate Communications
Playboy Enterprises

Okay, so notice there's a whole bunch of pure nonsense before the company admits it won't make things worse (and certainly throw away money) by filing an amended complaint. What's quite incredible is just how... wrong nearly everything Playboy says is. It's fairly obvious that John Vlautin is not a lawyer and basically knows fuck all about copyright or free speech. Indeed, he's basically been an entertainment industry flack his whole career, doing PR/communications for record labels and Live Nation, and running his own firm, which apparently now represents Playboy.

Anyway, let's be clear: having an old blog post linking offsite is hardly "willfully profiting from infringement." Indeed, the court ruled that what Boing Boing was doing was not infringement. That Vlautin ignores all that just makes him look like a petulant sore loser. And, yes, sorry John, but suing a website for what it legally posted is very much a First Amendment issue. And, remember, John, you work for a company, Playboy, which has historically been a strong First Amendment supporter. Hell, the Hugh Hefner Foundation still gives out First Amendment Awards each year. And, in the past, those awards have included people like copyright scholar Pam Samuelson and EFF co-founder John Perry Barlow (who was a vocal critic of copyright laws and how they are used to censor speech).

Also, it's amusing that Vlautin is mocking Boing Boing's arguments in court (which he totally misrepresents) when Boing Boing won. If the argument was so absurd, why did the court rule against Playboy, and why is Playboy not filing an amended complaint? Playboy lost, and the company should know better than to let a PR flack who doesn't know what he's talking about send out petulant statements that reflect exceptionally poorly on the company.

21 Comments | Leave a Comment..

Posted on Techdirt - 8 March 2018 @ 11:58am

Copyright, Censorship, Pepe & Infowars

from the all-mashed-together dept

If you're reading this, you're probably well aware of Pepe the Frog, the cartoon character created by Matt Furie years ago that the 4chan crowd turned into quite the meme. Over time, the meme morphed into one favored by Trump supporters and the alt-right (though, upset that Pepe has become too "mainstream," that crowd has moved on to something of a derivative work known as Groyper). As you may have heard, Furie has now decided to sue Infowars over a poster the site is selling that puts together a bunch of... well... the crowd of people you'd expect to be fans of both Infowars and Pepe.

The lawsuit, which you can read in its entirety, claims copyright infringement -- and it's raising a whole bunch of issues concerning memes and copyright that seemed worth exploring.

To do this, though, I actually find it useful to go back in time a bit and explore Furie's changing attitude toward what became of Pepe. Back in the summer of 2015, when Pepe was already a big meme, but not yet one associated with racists, Furie gave an interview to Vice, in which he made it clear that he was pretty chill with what had happened with Pepe.

I don't really see it as being something that's negative. It's this almost post-capitalist kind of success. I'm not making any money off of it, but it's become its own thing in internet culture. Now, at least, a lot of people make a conscious effort to go out and try and create that kind of meme success, where you're doing these little one-off characters, little gags, little gifs, and that's definitely your intention. I'm just flattered by it. I don't really care. I think it's cool. In fact, I'm getting kind of inspired by all the weird interpretations of it. I wanna use it to my own advantage and try to come up with comics based on other people's interpretations of it.

Later in that same interview, he even gives his opinion on people profiting off of Pepe, and again doesn't have much of a problem with it:

It's like a decentralized folk art, with people taking it, doing their own thing with it, and then capitalizing on it using bumper stickers or t-shirts. That's happening to me too. There is a tradition of it.

He even admits to having "a little collection of bootleg Pepe stuff."

A year or so later, once Pepe had been adopted by the alt-right, Furie still appeared pretty laid back about the whole thing, while making it clear that he, in no way, agreed with the alt-right. But he saw their usage of the meme as a sort of fascinating look at internet culture:

My feelings are pretty neutral, this isn’t the first time that Pepe has been used in a negative, weird context. I think it’s just a reflection of the world at large. The internet is basically encompassing some kind of mass consciousness, and Pepe, with his face, he’s got these large, expressive eyes with puffy eyelids and big rounded lips, I just think that people reinvent him in all these different ways, it’s kind of a blank slate. It’s just out of my control, what people are doing with it, and my thoughts on it, are more of amusement.

He similarly noted that he expected this was just a phase that would fade out over time:

I think that’s it’s just a phase, and come November, it’s just gonna go on to the next phase, obviously that political agenda is exactly the opposite of my own personal feelings, but in terms of meme culture, it’s people reapproppriating things for their own agenda. That’s just a product of the internet. And I think people in whatever dark corners of the internet are just trying to one up each other on how shocking they can make Pepe appear.

And towards the end of the interview, he's asked if he has any regrets about "not having more control over his image" and Furie responds:

I don’t have any regrets about anything. I do my own thing, and if anything, it’s been kind of interesting to see all the evolutions of Pepe. Yeah, no regrets.

A month after that interview... Furie's opinion appeared to shift somewhat. Reading how he dealt with it, it certainly appears that Furie more or less got annoyed with everyone asking him about this and/or asking whether he supported the views of the alt-right (and more annoyed with those views becoming mainstream), and he decided to take action. His initial instinct was to create a new Pepe comic expressing his opinion of having his own creation adopted by Trump and Trump supporters, and then to try to take back the meme with a sort of anti-meme #SavePepe campaign. Again, this is an interesting move: switching from a passive position of "that nutty internet" to one where you're fighting memes with memes.

It took another year or so, until last summer, when it appears Furie finally got really fed up with the whole alt-right Pepe thing and began dispatching cease-and-desist letters and some DMCA takedowns via a big-name law firm. Some news was made when the author of a hateful Islamophobic book using Pepe as a main character agreed not to publish the book, and to donate the $1,500 he had made from an earlier self-published version to the Muslim civil liberties group the Council on American-Islamic Relations.

And that takes us up to the Infowars lawsuit. It would not surprise me at all to see Infowars cave and settle the case quickly to be done with it. While I think there's a passable fair use argument here, it's so mixed up with political emotions that, if I were Infowars, I wouldn't feel at all comfortable having a judge rule on it, let alone a jury. As I've noted in the past, while some cases are clearer than others, fair use is an area where judges can twist the four-factor test in all sorts of ways to reach the outcome they'd prefer -- and Furie is definitely a lot more sympathetic here than Alex Jones. So, while I can see the fair use argument, and don't think it's a crazy argument at all, it's certainly not a slam dunk in an actual courtroom.

What's much more interesting (and bothersome) to me is that for as much as I understand Furie's decision and anger over how this all turned out, it's yet another example of how copyright is frequently morphed into a tool for censorship of ideas, rather than what copyright is actually supposed to be. Copyright is an economic right. The entire purpose was to secure the limited exclusive rights to the copyright holder for the sake of economic benefits. For the most part (with a few small exceptions) the US has rejected using copyright for "moral rights." Yet, this is, quite obviously, a case where Furie and his lawyers are using it as a quasi-moral rights tool. He's (quite reasonably!) upset with what Pepe has become (even if he was cool with it originally) and is now using the tool of copyright to stop that.

Even if it's a legit copyright claim that would hold up in court, the overall situation should trouble folks, because it's not what copyright is supposed to be used for. Using copyright to stop someone from infringing is supposed to protect purely economic interests, not moral ones. Yet Furie's statements and actions (including collecting bootleg Pepe merchandise and declaring it a cool thing) show that this lawsuit is very much about the moral issues and his desire not to let people with political views he vehemently disagrees with use his character. And I can certainly understand why he'd feel that way -- but that's not what copyright is supposed to be used for. And this is the problem we've discussed in the past as "copyright creep." Because copyright is such a powerful tool to stop speech, it is often used that way. And even if the claim would hold up here, the motives behind this use of copyright are clearly not within the intended realm of copyright law. And that's worrisome.

Either way, I still expect Infowars to settle this rather than fight it (I think they'd be crazy not to...), and I completely understand the reason why Furie may not be happy with the whole situation -- but I worry about more and more stories of copyright being used directly to stifle speech, not for any economic reasons, but for purely censorial reasons.

Read More | 40 Comments | Leave a Comment..

Posted on Techdirt - 8 March 2018 @ 8:29am

More People Realizing That SESTA Will Do A Lot More Harm Than Good

from the it's-a-problem dept

At this point, it seems fairly clear that Congress simply does not care that SESTA is going to do an awful lot of harm for almost no benefit at all, and is rushing towards a Senate vote. But more and more people outside of Congress are recognizing the problems it will cause. While all of the bill's supporters insist they're doing this to "protect" victims of sex trafficking, as we've explained, SESTA will almost certainly make those victims' lives worse -- putting them at much more risk while doing little to nothing to stop actual trafficking. The Daily Dot has a good article talking to advocates for victims of sex trafficking and for sex workers about the damage SESTA will cause:

In her 40 years of involvement with the industry—formerly as a trafficking victim, a consensual sex worker, and currently as an advocate and outreach provider—DiAngelo has seen many similar efforts by the government and law enforcement to crack down on the establishments and platforms sex workers use to secure clients. Every time, she explains, the end result is the same.

“When you take away a worker’s available options, they have to drop down to the next-best available option. Nobody thinks about that,” DiAngelo says. “But we don’t disappear because they get rid of our work. Our food disappears, our security, our housing … our safety, that disappears, but we don’t and our need for services definitely doesn’t disappear.” Those things, she says, only grow.

Finding work from an ad posted on the internet means there’s a documented trail linking a sex worker and the client; it means the sex worker has the time to do due diligence, checking in with the client’s references and consulting a community of peers to determine whether or not this person is safe. When those sites shut down, as they periodically do, sex workers look elsewhere for clients—most often, on the street.

And that leads to much more dangerous situations:

Fearful of arrest, sex workers move off main streets and into areas where they’re more difficult to spot. Under pressure to make money, they have less time to assess the situation at hand, leaving them vulnerable to violent predators and, indeed, traffickers with a penchant for exploiting people in desperate situations. When sites like RedBook and Backpage shut down, “the life of the trafficker does not change at all,” DiAngelo says. “They still have their product, which is the victim.”

So again, it's unclear why people think that attacking the platforms (and not the actual traffickers) will help the victims at all.

Meanwhile, following the lead of the Wall Street Journal, the LA Times has released an editorial detailing what a disaster SESTA is and how much harm it will do:

The internet has been a tremendous force for good in the world, creating untold opportunities to connect, communicate, learn, create and make a living. But many of the properties of the net that make good things possible also enable less desirable pursuits, and even evil ones, on a vast scale. Congress is now focused on one of the worst of those pursuits: sex trafficking. It's taking particular aim at websites like Backpage.com that run ads for prostitutes, some of whom have been shown to be underage or adults who are effectively enslaved. But in their efforts to give prosecutors and victims more power in court, lawmakers are poised to weaken a legal protection that has helped produce much of what's good about the net.

The LA Times piece, as did the WSJ piece, also notes that Backpage -- the site that backers of the bill admitted they're targeting, despite the fact that it has already shut down its adult ads section -- still faces two big legal challenges that will show the site is not immune under CDA 230:

The irony is that, as a Senate investigation last year contended, Backpage may not be entitled to immunity under Section 230. It's now facing a Justice Department investigation and a renewed lawsuit by trafficking victims in Boston. If Congress simply cannot wait to see how those cases turn out, there is a middle ground — it could give state attorneys general clearer authority to go after websites that violate federal sex trafficking laws. But if it insists on carving out a bigger hole in Section 230 to battle sex trafficking, it's only a matter of time before it comes under pressure to address another evil, and then another, and then another. And before you know it, there will be nothing left of Section 230.

It really is a big question that no one in Congress seems to want to answer: why can't Congress wait to see whether either the DOJ or the court in Boston can get to Backpage? Because if they can, then there's really no argument for SESTA at all (and, as noted in the first half of this post, it's not clear what the argument for SESTA is anyway). However, in talking to people on the Hill, the general opinion of our lawmakers appears to be that they don't really care if the bill does no good. They want to claim they've "done something," and "going after sex trafficking" (even if it will actually make the problem worse) leads to good headlines. Also, the impression I've gotten is that they're sick of people arguing about how bad this bill will be, so they want to pass it just to get it off their plates -- which seems like a bad way to make laws to me. But what do I know?

16 Comments | Leave a Comment..

Posted on Techdirt - 7 March 2018 @ 10:48am

Can Someone Explain How SESTA Will Stop Sex Trafficking?

from the questions,-questions dept

Last week I got into a bit of a debate with a SESTA supporter about the bill, which boiled down to me saying that the bill won't do what it claims, will likely make things worse for victims of sex-trafficking, and will also have massive consequences for the internet and speech online. And the response from the person I was debating was "but my side has all these anti-sex-trafficking groups supporting SESTA." That's not exactly a response to any of the points that I raised. As we've noted from the very start, we may not be experts in sex-trafficking, but we do know how the internet works -- and how laws on intermediary liability impact the internet and content online. And nothing in SESTA will work the way its supporters seem to think it will work.

So I want to directly ask here the same question I asked the individual I was debating (which he refused to answer before tossing out a few ad hominems and suggesting that I'm not worth talking to because I'm "a blogger"): what in SESTA will actually stop or limit sex trafficking? Because as far as I can tell, it does absolutely nothing to stop sex trafficking. It does not target sex traffickers in any way. Supporters of the bill claim, or appear to believe, that it will stop sex trafficking by stopping websites from allowing anyone to engage in sex trafficking, or advertise sex-trafficking victims, on their sites. But that's not at all how the bill works.

It's (obviously!) already illegal to engage in sex trafficking. And it's already illegal to advertise sex trafficking. And law enforcement can already go after those doing both of those things. And yet, miraculously, both still occur frequently online. A reasonable response to this would be to note that law enforcement therefore has a good source of information with which to investigate, arrest, and prosecute sex traffickers, since the necessary information is apparently so easily found online.

So, what does SESTA do? Rather than making it easier for law enforcement to go after those illegal activities, it creates a new illegal activity: running a website that is used by sex traffickers. As we've discussed before, this creates a serious "moderator's dilemma" for websites, leading to one of two likely outcomes. Many websites may stop moderating content, because if they're not looking at the content, they can more credibly claim a lack of knowledge. That means less moderation, less oversight, and likely more use of those platforms for sex-trafficking ads. Sex traffickers will gravitate to those platforms, and those platforms will be less likely to cooperate with law enforcement because (again) they want to avoid "knowledge" of how their platform is being used, and working with law enforcement risks more knowledge.

On the flip side of the moderator's dilemma, you will get sites that moderate content much more vigorously. This seems to be the outcome SESTA supporters think all platforms will embrace -- which is almost certainly incorrect. Indeed, it's incorrect on multiple levels, because not only will some platforms embrace this heavier moderation setup, but those that do will almost certainly over-moderate to a drastic degree in order to avoid liability. That will mean fairly aggressive levels of keyword blocking, filters, and automated removals. And, as anyone who has studied how such systems work in the real world knows, all of those will fail. And they'll fail with both false negatives and false positives. That is, lots of perfectly legitimate content will get taken down (perhaps, as we've discussed before, material meant to help victims of sex trafficking), and lots of sex-trafficking content will still get through.

It's that latter point that's pretty important: the people engaged in sex trafficking are already breaking the law. SESTA changes nothing for them in terms of the illegality of what they're doing. It just means the tools they use will change a little, and anyone who thinks the traffickers won't adjust has apparently never spent any time on the internet. If keywords get blocked, traffickers will come up with new euphemisms (they always do). If forums get shut down, they will gravitate to other forums. If filters are created, they will figure out ways around the filters -- as the sketch below illustrates. Nothing in SESTA creates any disincentive at all for actual traffickers.
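Here's a trivial illustration of both failure modes at once -- a hedged sketch with made-up terms, not any platform's real blocklist or real listings:

    # Hypothetical sketch of a naive ad filter -- all terms invented.

    BLOCKLIST = {"escort", "massage"}

    def allow_ad(text):
        """Naive keyword filter: reject the ad if any blocked term appears."""
        lowered = text.lower()
        return not any(term in lowered for term in BLOCKLIST)

    print(allow_ad("New escort available tonight"))  # False: blocked, as intended
    print(allow_ad("New e$cort available tonight"))  # True: a trivial respelling slips through
    print(allow_ad("Licensed massage therapist"))    # False: a legitimate ad is blocked

    # One filter, one pass: a false negative (the respelled ad) and a false
    # positive (the legitimate therapist) at the same time. Each new euphemism
    # restarts the cycle, while the actual traffickers remain untouched.

The filter catches the literal term, waves through the first trivial respelling, and blocks a legitimate business -- which is the entire history of keyword moderation compressed into three lines.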

Indeed, SESTA creates a few things that will make life easier for traffickers. As noted above, it will likely lead some sites to do less moderation, and traffickers will quickly gravitate to such sites. Additionally, it will make it much, much harder for rights groups to post information to help victims of sex-trafficking, since much of that information will be seen as a liability risk, and blocked or taken down. And, finally, it will create massive disincentives for sites to work with law enforcement or families of victims to help them, because of the risk of those actions being used to prove the requisite "knowledge."

So, please: can someone who is a supporter of SESTA explain how the bill will do anything to actually stop sex trafficking? Because I can't find a single useful thing that it does.


Posted on Techdirt - 7 March 2018 @ 9:40am

Famous Racist Sues Twitter Claiming It Violates His Civil Rights As A Racist To Be Kicked Off The Platform

from the that's-not-how-any-of-this-works dept

We've seen a bunch of lawsuits of late filed by very angry people who have been kicked off of, or somehow limited by, various social media platforms. There's Dennis Prager's lawsuit against YouTube, as well as Chuck Johnson's lawsuit against Twitter. Neither of these has any likelihood of success. These platforms have every right to kick off whomever they want, and Section 230 of the CDA pretty much guarantees an easy win.

Now we have yet another one of these: Jared Taylor, a self-described "race realist" and "white advocate" (what most of us would call an out-and-out racist), has sued Twitter for kicking him and his organization off its platform. Taylor is represented by a few lawyers, including Marc Randazza, whom I know and respect, but with whom I don't always agree -- and this is one of those cases. I think Randazza took a bad case and is making some fairly ridiculous arguments that will fail badly. Randazza declined to comment on my questions about this case, but his co-counsel -- law professor Adam Candeub and Noah Peters -- were both kind enough to discuss their theory of the case at some length, and to debate my concerns about why the lawsuit will so obviously fail. We'll get to their responses soon, but first let's look at the lawsuit itself.

To the credit of these lawyers, they make a valiant effort to distinguish this case from the Prager and Johnson cases, which appear to be just completely ridiculous. The Taylor case makes the most thorough argument I've seen for why Twitter can't kick someone off its platform. It's still so blatantly wrong and will almost certainly get laughed out of court, but the legal arguments are marginally better than those found in the other similar cases we've seen.

Like the other two cases we've mentioned, this one tries to twist the Supreme Court's Packingham ruling to say more than it really says. If you don't recall, that's the ruling from last summer holding that laws banning people from the internet entirely violate their rights. All of these cases try to stretch the Supreme Court's holding that the government can't ban someone from the internet into a rule that a private platform can't kick you off its service. Here's Taylor's version, which is used to set up the two key arguments in the case (which we'll get to shortly):

Twitter is the platform in which important political debates take place in the modern world. The U.S. Supreme Court has described social media sites such as Twitter as the “modern public square.” Packingham v. North Carolina (2017) 582 U.S. [137 S. Ct. 1730, 1737]. It is used by politicians, public intellectuals, and ordinary citizens the world over, expressing every conceivable viewpoint known to man. Unique among social media sites, Twitter allows ordinary citizens to interact directly with famous and prominent individuals in a wide variety of different fields. It has become an important communications channel for governments and heads of state. As the U.S. Supreme Court noted in Packingham, “[O]n Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose. In short, social media users employ these websites to engage in a wide array of protected First Amendment activity on topics as diverse as human thought.” 137 S. Ct. at pp. 1735-36 (internal citations and quotations omitted). The Court in Packingham went on to state, in regard to social media sites like Twitter: “These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to ‘become a town crier with a voice that resonates farther than it could from any soapbox.”’ Id. at p. 1737 (citation omitted) (quoting Reno v. American Civil Liberties Union (1997) 521 U. S. 844, 870 [117 S.Ct. 2329]).

The key to the claims here is that Twitter's actions violate California law -- specifically both the California Constitution and the Unruh Civil Rights Act, which has become the latest "go to" of aggrieved people whining about being kicked off various internet platforms. The lawsuit argues that Taylor didn't violate Twitter's terms of service, and even though it flat out admits that Twitter's terms of service allow the company to remove users for any reason at all, it says that doing so in a discriminatory manner violates Taylor's civil rights under the Unruh Act, a law that protects against discrimination on the basis of "sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, or sexual orientation."

So how does kicking Taylor off Twitter run afoul of that?

Twitter has enforced its policy on “Violent Extremist Groups” in a way that discriminates against Plaintiffs on the basis of their viewpoint. It has not applied its policies fairly or consistently, targeting Mr. Taylor and American Renaissance, who do not promote violence, while allowing accounts affiliated with left-wing groups that promote violence to remain on Twitter.

Read that again. The argument is, in effect, that because Twitter has failed to ban similar "left-wing groups," this is discrimination. But that runs directly afoul of CDA 230, which is explicit that the decision to moderate (or not moderate!) some content does not create liability over how you handle other content. The statute says that no provider shall be held liable on account of "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." In other words, what Twitter decides to remove is its decision alone.

Randazza is well aware of CDA 230, though it's unclear whether the two other lawyers are, but the complaint doesn't bother to address why CDA 230 will almost certainly get this case dumped. In response to my question, the lawyers pointed out that they saw no reason to get around CDA 230, since (1) they don't believe it applies to this situation and (2) they'll wait to respond to those arguments if (when!) Twitter raises them in response to the complaint.

The other key argument in the case is that this violates California's Constitution by denying Taylor his right to "freely speak, write and publish." But nothing in the California Constitution says that any private platform has to host that speech. The filing bends over backwards to get Twitter declared a digital public square / public utility, but that seems unlikely to fly in court.

Twitter is a public forum that exists to “[g]ive everyone the power to create and share ideas instantly, without barriers.” (Exh. B). The U.S. Supreme Court has described social media sites such as Twitter as the “modern public square.” Packingham, supra, 137 S. Ct. at p. 1737. Twitter is the paradigmatic example of a privately-owned space that meets all of the requirements for a Pruneyard claim under the California Constitution: It serves as a place for large groups of citizens to congregate; it seeks to induce as many people as possible to actively use its platform to post their views and discuss issues, as it “believe[s] in free expression and believe[s] every voice has the power to impact the world"... Twitter's entire business purpose is to allow the public to freely share and disseminate their views without any sort of viewpoint censorship; and no reasonable person would think Twitter was promoting or endorsing Plaintiff's speech by not censoring it--no more than a reasonable person would think Twitter was promoting or endorsing President Trump's speech or Kim Jong Un's speech by allowing it to exist on their platform. Thus, Plaintiff's speech imposes no cost on Twitter's business and no burdens on its property rights. Serving as a place where "everyone [has] the power to create and share ideas instantly, without barriers" and "every voice has the power to impact the world" is Twitter's very reason for existence. By adding to the variety of views available to the public, Plaintiffs are acting on Twitter's "belief in free speech" and fulfilling Twitter's stated mission of "sharing ideas instantly."

That's all well and good... but completely meaningless with regard to whether or not Twitter can kick someone off its platform. The complaint goes on at great length trying to turn Twitter into something it is not:

Twitter is given over to public discussion and debate to a far greater extent than the shopping center in Pruneyard or the "streets, sidewalks and parks" that "[f]rom time immemorial... have been held in trust for the use of the public and have been used for purposes of assembly, communicating thoughts and discussing public questions." ... Unlike shopping centers, streets, sidewalks and parks, which are mostly used for functional, non-expressive purposes such as purchasing consumer goods, transportation, and private recreation, Twitter's primary purpose is to enable members of the public to engage in speech, self-expression and the communication of ideas.... In analysis that cuts to the heart of the Pruneyard public forum inquiry, the Packingham Court stated: "While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace--the 'vast democratic forums of the Internet' in general, and social media in particular." ....

Because Twitter is a protected public forum under California law, Twitter may not selectively ban speakers from participating in its public forum based on disagreement with the speaker's viewpoint, just as the government may not selectively ban speech that expresses a viewpoint it disagrees with.

This all sounds good, but is basically wrong. Twitter, as a private platform, has repeatedly been found to have its own First Amendment right to control what is displayed on its own platform. And, again, for all the high-minded language, nothing in the complaint explains how a private platform deciding it doesn't want to be associated with an individual user over his odious opinions is even in the same ballpark as blocking someone from the entire internet. The complaint skims over all of this, but I imagine that Twitter's response briefs will hammer home the point repeatedly.

There are a few other claims in the lawsuit that we won't even bother digging into at this point, since there's a very high likelihood of them all being tossed out under CDA 230. It would be nice if that happened relatively quickly, before lots of other similar lawsuits are filed and lots of time and money are wasted on this nonsense. In the meantime, Taylor and anyone else kicked off these platforms are free to move to other platforms that would be happy to host this sort of nonsense (and there are plenty of others). But there's nothing in the law that says Twitter must keep him. And while I have no idea if Taylor knows this, Randazza almost certainly does.

As for Randazza's co-counsel, they were kind enough to engage in a fairly lengthy discussion of their theories of CDA 230, which I would charitably describe as "naive." They offer a few interpretations of CDA 230 that might be kind of plausible if you ignored hundreds and hundreds of CDA 230 cases, starting with Zeran, which quite clearly established that the law gives internet platforms broad immunity. Taylor's lawyers claim that immunity applies only when content moderation efforts "are connected to protecting children from essentially sexual or violent content." There are literally no cases that agree with that assessment. Candeub, in fact, argued that CDA 230 is a very narrow statute, under which any effort to curate creates liability, with immunity limited to that narrow case of protecting children. But that's not how courts have interpreted it at all, starting with Zeran, which made clear that the immunity covers moderating and curating content:

The scant legislative history reflects that the "disincentive" Congress specifically had in mind was liability of the sort described in Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (Sup.Ct.N.Y. May 24, 1995). There, Prodigy, an interactive computer service provider, was held to have published the defamatory statements of a third party in part because Prodigy had voluntarily engaged in some content screening and editing and therefore knew or should have known of the statements. Congress, concerned that such rulings would induce interactive computer services to refrain from editing or blocking content, chose to grant immunity to interactive computer service providers from suits arising from efforts by those providers to screen or block content. Thus, Congress' clear objective in passing § 230 of the CDA was to encourage the development of technologies, procedures and techniques by which objectionable material could be blocked or deleted either by the interactive computer service provider itself or by the families and schools receiving information via the Internet. If this objective is frustrated by the imposition of distributor liability on Internet providers, then preemption is warranted. Closely examined, distributor liability has just this effect.

Internet providers subjected to distributor liability are less likely to undertake any editing or blocking efforts because such efforts can provide the basis for liability. For example, distributors of information may be held to have "reason to know" of the defamatory nature of statements made by a third party where that party "notoriously persists" in posting scandalous items.... An Internet provider's content editing policy might well generate a record of subscribers who "notoriously persist" in posting objectionable material. Such a record might well provide the basis for liability if objectionable content from a subscriber known to have posted such content in the past should slip through the editing process. Similarly, an Internet provider maintaining a hot-line or other procedure by which subscribers might report objectionable content in the provider's interactive computer system would expose itself to actual knowledge of the defamatory nature of certain postings and, thereby, expose itself to liability should the posting remain or reappear. Of course, in either example, an Internet provider can easily escape liability on this basis by refraining from blocking or reviewing any online content. This would eliminate any basis for inferring the provider's "reason to know" that a particular subscriber frequently publishes objectionable material. Similarly, by eliminating the hot-line or indeed any means for subscribers to report objectionable material, an Internet provider effectively eliminates any actual knowledge of the defamatory nature of information provided by third parties. Clearly, then, distributor liability discourages Internet providers from engaging in efforts to review online content and delete objectionable material, precisely the effort Congress sought to promote in enacting the CDA. Indeed, the most effective means by which an Internet provider could avoid the inference of a "reason to know" of objectionable material on its service would be to distance itself from any control over or knowledge of online content provided by third parties. This effect frustrates the purpose of the CDA and, thus, compels preemption of state law claims for distributor liability against interactive computer service providers.

Taylor's lawyers have a... very different interpretation of all of this. First, they argued that the mere act of curating content on a website is an act of content creation and thus not covered by CDA 230. When I pointed out that the text of basically every CDA 230 case says exactly the opposite, Candeub pointed me to three specific cases that he claims support his position. All three are lower court rulings with no precedential power, as compared to the litany of appeals court rulings going the other way -- and all three of these cases are fairly questionable. But I'll focus on the first one Candeub pointed to, Song Fi v. Google, which is one of the rare cases in which a court ruled that CDA 230 didn't apply to YouTube's decision to take down a video. YouTube claimed that the video was getting faked views and pulled it, citing a terms of service violation. The court -- very surprisingly -- found that CDA 230 didn't apply because the video did not fit under the category of "otherwise objectionable" material under CDA 230. As Professor Eric Goldman pointed out at the time, if the case were appealed, it would almost certainly go the other way.

But, more importantly, the case was still a loser for the plaintiffs, because the court found that since YouTube's terms of service gave it the right to remove content for any reason, there was no breach of contract. It's odd that Candeub points us to the Song Fi ruling, since the Taylor complaint includes a breach of contract claim while repeatedly acknowledging that Twitter's terms of service likewise say it can remove anyone for any reason. So, while this is one (lower court, non-precedential) ruling that kinda (if you squint) says what Candeub wants it to say on 230, it would still be fatal to his larger case were it applied (and, again, basically every other ruling has gone the other way, including many in the 9th Circuit, which are binding on this court).

For example, in Zango v. Kaspersky, the 9th Circuit ruled that CDA 230(c)(2) applies to companies filtering content, and further noted that if people don't like the filtering choices, they're free to go elsewhere:

Zango also suggests that § 230 was not meant to immunize business torts of the sort it presses. However, we have interpreted § 230 immunity to cover business torts. See Perfect 10, Inc. v. CCBill, LLC, 488 F.3d 1102, 1108, 1118-19 (9th Cir.2007) (holding that CDA § 230 provided immunity from state unfair competition and false advertising actions). In any event, what § 230(c)(2)(B) does mean to do is to immunize any action taken to enable or make available to others the technical means to restrict access to objectionable material. If a Kaspersky user (who has bought and installed Kaspersky's software to block malware) is unhappy with the Kaspersky software's performance, he can uninstall Kaspersky and buy blocking software from another company that is less restrictive or more compatible with the user's needs. Recourse to competition is consistent with the statute's express policy of relying on the market for the development of interactive computer services.

Candeub's co-counsel, Peters, offered up a different analysis of the 230 question, claiming that since they're not looking to hold Twitter liable as a publisher, CDA 230 doesn't apply. But that responds to the wrong part of CDA 230: publisher liability is the issue under CDA 230(c)(1). The problem for this lawsuit is CDA 230(c)(2), which the Zeran court (and many, many, many other courts) established gives websites full immunity for the choices they make in moderating content.

Either way, I ran Candeub and Peters' reasoning by Professor Goldman, who is considered one of the top experts on CDA 230, and he responded that it "gets the analysis precisely backwards." Given just how much caselaw is already on the books, it would be quite a surprise to find that Candeub, Peters and Randazza have magically changed what many consider to be settled law. Just note the description of CDA 230's settled status in a recent ruling from the 1st Circuit:

There has been near-universal agreement that section 230 should not be construed grudgingly. See, e.g., Doe v. MySpace, Inc., 528 F.3d 413, 418 (5th Cir. 2008); Universal Commc'n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 419 (1st Cir. 2007); Almeida v. Amazon.com, Inc., 456 F.3d 1316, 1321-22 (11th Cir. 2006); Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1123 (9th Cir. 2003). This preference for broad construction recognizes that websites that display third-party content may have an infinite number of users generating an enormous amount of potentially harmful content, and holding website operators liable for that content "would have an obvious chilling effect" in light of the difficulty of screening posts for potential issues. Zeran, 129 F.3d at 331. The obverse of this proposition is equally salient: Congress sought to encourage websites to make efforts to screen content without fear of liability. See 47 U.S.C. § 230(b)(3)-(4); Zeran, 129 F.3d at 331; see also Lycos, 478 F.3d at 418-19. Such a hands-off approach is fully consistent with Congress's avowed desire to permit the continued development of the internet with minimal regulatory interference.

I don't envy anyone trying to convince this court that all those other courts are wrong -- and especially not when their client is an avowed "race realist" (read: racist) whom Twitter had every reason to wish off its platform.

