Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick



Posted on Techdirt - 21 September 2017 @ 9:33am

Insanity: Theresa May Says Internet Companies Need To Remove 'Extremist' Content Within 2 Hours

from the a-recipe-for-censorship dept

It's fairly stunning just how much people believe that it's easy for companies to moderate content online. Take, for example, this random dude who assumes it's perfectly reasonable for Facebook, Google and Twitter to "manually review all content" on their platforms (and since Google is a search engine, I imagine this means basically all public web content that can be found via its search engine). This is, unfortunately, a complete failure of basic comprehension about the scale of these platforms and how much content flows through them.
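
To get a sense of just how far off that idea is, here's a minimal back-of-the-envelope sketch of what "manually review all content" would mean for video alone. The upload rate is an assumed round number, roughly in the ballpark of figures Google has publicly cited for YouTube; the reviewer assumptions are entirely illustrative.

```typescript
// Rough, illustrative arithmetic only. The upload rate is an assumption
// (roughly the "hundreds of hours per minute" figure cited publicly for
// YouTube); the reviewer figures are made up for the sake of the sketch.
const hoursUploadedPerMinute = 400;
const hoursUploadedPerDay = hoursUploadedPerMinute * 60 * 24; // 576,000 hours/day

const reviewerHoursPerDay = 8;  // one full-time shift per reviewer
const hoursToReviewOneHour = 1; // assume reviewing takes as long as watching

const reviewersNeeded =
  (hoursUploadedPerDay * hoursToReviewOneHour) / reviewerHoursPerDay;

console.log(`${hoursUploadedPerDay.toLocaleString()} hours of video uploaded per day`);
console.log(`~${Math.round(reviewersNeeded).toLocaleString()} full-time reviewers needed`);
// => 576,000 hours/day and roughly 72,000 reviewers -- and that's video only,
//    before text posts, images, search results, or deciding what counts as
//    "extremist" in the first place.
```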

Tragically, it's not just random Rons on Twitter with this idea. Ron's tweet was in response to UK Prime Minister Theresa May saying that internet platforms must remove "extremist" content within two hours. This is after the UK's Home Office noted that they see links to "extremist content" remaining online for an average of 36 hours. Frankly, 36 hours seems incredibly low. That's pretty fast for platforms to be able to discover such content, make a thorough analysis of whether or not it truly is "extremist content" and figure out what to do about it. Various laws on takedowns usually have statements about a "reasonable" amount of time to respond -- and while there are rarely set numbers, the general rule of thumb seems to be approximately 24 hours after notice (which is pretty aggressive).

But for May to now be demanding two hours is crazy. It's a recipe for widespread censorship. Already we see lots of false takedowns from these platforms as they try to take down bad content -- we write about them all the time. And when it comes to "extremist" content, things can get particularly ridiculous. A few years back, we wrote about how YouTube took down an account that was documenting atrocities in Syria. And the same thing happened just a month ago, with YouTube deleting evidence of war crimes.

So, May calling for these platforms to take down extremist content in two hours confuses two important things. First, it shows a near total ignorance of the scale of content on these platforms. There is no way possible to actually monitor this stuff. Second, it shows a real ignorance about the whole concept of "extremist" content. There is no clear definition of it, and without a clear definition, wrong decisions will be made. Frequently. Especially if you're not giving the platforms any time to actually investigate. At best, you're going to end up with a system with weak AI flagging certain things, and then low-paid, poorly trained individuals in far-off countries making quick decisions.

And since the "penalty" for leaving content up will be severe, the incentives will all push towards taking down the content and censorship. The only pushback against this is the slight embarrassment if someone makes a stink about mistargeted takedowns.

Of course, Theresa May doesn't care about that at all. She's been bleating on about censoring the internet to stop terrorists for quite some time now -- and appears willing to use any excuse and make ridiculous demands along the way. It doesn't appear she has any interest in understanding the nature of the problem, as it's much more useful for her to blame others for terrorist attacks on her watch than to actually do anything legitimate to stop them. Censoring the internet isn't a solution, but it allows her to cast blame on foreign companies.


Posted on Techdirt - 19 September 2017 @ 11:53am

Senator Blumenthal Happy That SESTA Will Kill Small Internet Companies

from the this-is-a-problem dept

So, earlier today the Senate Commerce Committee held a two and a half hour hearing about SESTA -- the Stop Enabling Sex Traffickers Act of 2017. The panelists were evenly split, with California Attorney General Xavier Becerra and Yiota Souras from the National Center for Missing and Exploited Children being in support of the bill, and Professor Eric Goldman and Abigail Slater from the Internet Association worrying about the impacts of SESTA (notably, both highlighted that they're not against all changes to CDA 230, they just want to be quite careful and are worried about the language in this bill). I was actually somewhat surprised that the hearing wasn't as bad as it could have been. There certainly was some grandstanding, and some insistence that because SESTA says it will go after sex trafficking, it obviously will -- but many Senators did seem willing to listen to concerns about the bill and how it's written. Much attention was paid to the sketchy "knowledge" standard in the bill, which we wrote about this morning. And that's good -- but there was a fair bit of nonsense spewed as well.

Perhaps the most problematic comments were from the bill's co-author, Senator Richard Blumenthal, who has been attacking CDA 230 since his time as Connecticut's Attorney General. While you can watch the entire hearing, I created a short clip of Blumenthal's questions (which, oddly, C-SPAN won't let me embed here) so I'll transcribe it:

Blumenthal: I think I've said why I support this legislation, which I helped craft, and we've tried to do it carefully. And we tried to listen to the industry. We've tried to listen really closely to some of the concerns that have been raised this morning by Mr. Goldman. For example, the idea that this legislation will cause sex trafficking to -- I'm using your word -- proliferate. Hard to believe. Mr. Becerra, what do you think and will this measure cause sex trafficking to proliferate?

So... the idea that Blumenthal listened carefully is laughable on its face. He's been fighting this issue since at least 2010, when he went after Craigslist for ads he didn't think they should have on the site. And in Blumenthal's own testimony he admitted that forcing Craigslist to change how it worked only led to sex trafficking ads moving from Craigslist -- which cooperated very closely with law enforcement -- to Backpage, expanding their reach. I'm at a loss as to why we should take Blumenthal's word on what will happen when he admits his own actions targeted at sex trafficking in the past made the problem worse. To then mock Prof. Goldman for suggesting the same might happen here is... quite incredible.

Also, interesting that rather than asking Goldman to clarify his position first (he does later), Blumenthal starts by asking Becerra to back him up.

Becerra: I can't agree with what Professor Goldman has said. I think it's just the opposite. If we have a standard in place, then I believe the stakeholders within the internet community will come forward in ways we've seen before, but even more vigorously, because they'll understand what the standard is, and I think that's so very important to make it clear for folks. The most important thing, Senator Booker sorta pointed this out, is we need to get the opponents of this measure to explain, in detail, what they would propose in place. Otherwise, it's always a moving target. It's Whac-a-mole. Someone needs to give us what a better bill looks like.

So, this is also bizarre and wrong. First, much of the discussion from Goldman and Slater (and us) was about the lack of any clarity around the "standard." The bill says that "knowing conduct" that "assists, supports or facilitates" sex trafficking can make a platform guilty of civil and criminal violations of the law. But "knowing conduct" is not clarified. And as we've seen in other contexts, including in the copyright realm, years-long fights can happen in court over what "knowledge" might mean. The famous YouTube/Viacom fight, that went on for nearly a decade, was almost entirely focused on whether or not YouTube had knowledge of infringement, and whether the law required "specific" knowledge or "general" knowledge. Nothing in this bill clarifies that.

Even worse, the term "knowing conduct" is dangerously vague. It could be read to mean that if the site does something that it knows that it is doing, and it leads to facilitating sex trafficking -- even if the site doesn't know about that outcome -- it would constitute "knowing conduct." Goldman had pointed this exact problem out earlier in the hearing, so for Becerra to insist that this is a clear standard is ludicrous.

Becerra is also confused if he thinks this will lead internet companies to "more vigorously" come forward. Coming forward with evidence of sex trafficking will then be turned around on them as proof of "knowledge." With this law in place, why would any internet company be more willing to come forward when that only increases liability?

Finally, the idea that opponents need to come forward with other language is similarly weird. SESTA's supporters are the ones demanding a massive change in the underpinning of the internet. Shouldn't the burden be on them to prove that this will help and not hurt? And, on top of that, it also ignores the fact that many opponents have come forward with different language (which I know as a fact because someone ran some alternative language by me a few weeks ago, and again earlier this week). So either Becerra doesn't know that or he's being disingenuous.

I'll cut the next section where Blumenthal says (misleadingly) that a proposal put forth from tech companies was to curtail or "eliminate" the ability of State AGs to pursue violations of the law (the proposal I saw simply clarified when and how they could go after sites) and Becerra eagerly says that would be terrible, as you'd expect.

Blumenthal: Let me ask, Mr. Goldman, do you really believe that this law would cause sex trafficking to proliferate?

Goldman: Thank you, Senator, for the opportunity to clarify that. Indeed, my concern is that we already see a number of efforts on the part of legitimate players to reduce sex trafficking promotion. To the extent that any of those companies decide 'I am better off turning off my efforts across the board, to try to reduce the knowledge that I have,' then that creates a larger number of zones that the sites will not be taking the legitimate effort that we want them to take. It creates an environment where there's more places for that to occur.

This is an excellent and succinct explanation of the problem. Under SESTA, the "knowledge" standard is so vague and unclear, that actually doing what Congress wants -- policing sex trafficking -- creates "knowledge" and makes these companies liable under the law. Blumenthal, of course, doesn't seem to get it -- or doesn't care.

Blumenthal: You know, I have a higher opinion of the industry than you do. I really believe that this law will raise the bar, will increase consciousness, and that far from trying to evade, or, in fact, deny themselves knowledge, so as to avoid any accountability, they will be more energetic. I absolutely really believe that most of these companies want to do the right thing and that this law will give them an increased impetus and incentive to do so.

WHAT?!? First off, if the idea is to give companies a greater impetus and incentive to do what they already want to do (as Blumenthal claims...) then threatening them with criminal and civil penalties for simply "knowing" that their platforms are used for illegal activity seems like a totally fucked up way of doing so. If you want to encourage platforms to do the right thing, then why is the entire bill focused on punishing platforms for merely knowing that their platform was illegally used? Second, if Blumenthal truly had a higher opinion of tech companies, why is he misrepresenting what Goldman said, and saying that companies would choose to avoid knowledge to "avoid accountability"? That's not the issue at all -- and is, indeed, self-contradictory with Blumenthal's own statements. Companies want to do the right thing to reduce sex trafficking, but this bill puts them in legal jeopardy for even researching if their platforms are used that way. That's the point that Goldman was trying to make and Blumenthal totally misrepresents.

And then it gets worse. Goldman points out a separate issue, noting that big companies like Google and Facebook may have the resources to "do more" but startups without those resources won't be able to take the steps necessary to avoid liability under the law:

Goldman: There's no doubt that the legitimate players will do everything they can to not only work with the law enforcement and other advocates to address sex trafficking and will do more than they even do today. At the same time, the industry is not just the big players. There is a large number of smaller players who don't have the same kind of infrastructure. And for them they have to make the choice: can I afford to do the work that you're hoping they will do.

Okay, and here's where things get absolutely fucked up. Note what Goldman is clearly saying here: this bill will wreak havoc on startups who simply can't afford to monitor everything that people do on their platforms. And then, Blumenthal's response is to say that those startups are criminals who should be prosecuted:

Blumenthal: And I believe that those outliers -- and they are outliers -- will be successfully prosecuted, civilly and criminally under this law.

WHAT THE FUCK?!? Goldman was talking about tons and tons of smaller companies -- or anyone who operates any online service that enables user comments and can't monitor everything -- companies that, under this law, will have to choose between doing any monitoring at all and facing the risk of that monitoring being used against them. And Blumenthal's response is that they should be prosecuted.

Senator Blumenthal: those companies are not outliers and they're not criminals. They're thousands upon thousands of smaller internet companies, many based in your home state of Connecticut, that you apparently want to see shut down.

That's messed up.


Posted on Techdirt - 19 September 2017 @ 10:43am

Shockingly, NY Times Columnist Is Totally Clueless About The Internet

from the do-your-fucking-research,-kristof dept

It's fairly stunning just how often the NY Times Opinion pages are just... wrong. Nick Kristof, one of the most well known of the NYT's columnists, has spent years talking about stopping sex trafficking -- but with a history of being fast and loose with facts, and showing either little regard for verifying what he's saying or a poor understanding of the consequences of what he says. I would hope that everyone reading this supports stopping illegal and coerced sex trafficking. But doing so shouldn't allow making up facts and ignoring how certain superficial actions might make the problems worse. Kristof, in particular, has been targeting Backpage.com for at least five years -- but has been caught vastly exaggerating claims about the site to the point of potentially misstating facts entirely (such as claiming Backpage existed before it actually did, and that it operated in cities where it did not). Kristof also has a history of being laughably credulous when someone comes along with a good story about sex trafficking, even when it's mostly made up. He's been accused of having a bit of a savior complex.

And that's on display with his recent, extraordinarily confused piece attacking Google for not supporting SESTA -- the "Stop Enabling Sex Traffickers Act." As we've explained in great detail, SESTA (despite its name) is unlikely to stop any sex trafficking and likely would make the problem worse. That's because the whole point of SESTA is to undermine CDA 230, the part of the law that creates incentives for tech companies to work with authorities and to help them track down sex trafficking on their sites. What the bill would do is make website owners both civilly and criminally liable for knowledge of any sex trafficking activity on their sites -- meaning that any proactive efforts by them to monitor their websites may be seen as "knowledge," thus making them liable. The new incentives will be not to help out at all -- not to monitor and not to search.

Meanwhile, by putting such a massive target on websites, it will inevitably be abused. We see how people abuse the DMCA to take down content all the time -- now add in the possibility of sites getting hit with criminal penalties, and you can see how quickly this "tool" will be abused to silence content online.

But, never mind all of that. To Kristof, because the bill says it's against sex trafficking, and he's against sex trafficking, it must be good. And he's quite sure that the only one against the bill is Google, and that there's ill intent there.

Why? Why would Google ally itself with Backpage, which is involved in 73 percent of cases of suspected child sex trafficking in the U.S., which advertised a 13-year-old whose pimp had tattooed his name on her eyelids?

First of all, Kristof is, again, playing fast and loose with the facts if he thinks Google is an "ally" of Backpage. Google has directly come out and said that it believes that Backpage should be criminally prosecuted by the DOJ (remember, CDA 230 does not apply to federal criminal charges).

I want to make our position on this clear. Google believes that Backpage.com can and should be held accountable for its crimes. We strongly applaud the work of the Senate Permanent Subcommittee on Investigations in exposing Backpage's intentional promotion of child sex trafficking through ads. Based on those findings, Google believes that Backpage.com should be criminally prosecuted by the US Department of Justice for facilitating child sex trafficking, something they can do today without need to amend any laws. And years before the Senate's investigation and report, we prohibited Backpage from advertising on Google, and we have criticized Backpage publicly.

So, no, Google is not protecting Backpage. Kristof, towards the end of his post, waves off Google's strong words about Backpage as proof that it has no reason not to support this legislation, without even once grappling with (a) what Google actually says or (b) why Google (and tons of others) would still oppose this legislation as being tremendously damaging. Even if you're not quite as convinced as Google that Backpage has broken the law (the Senate Report appeared to take a number of Backpage actions completely out of context), to argue that Google is supporting Backpage is clearly just wrong. But, Kristof, having set up his thesis, is going to go for it, no matter how wrong:

The answer has to do with Section 230 of the Communications Decency Act, which protects internet companies like Google (and The New York Times) from lawsuits — and also protects Backpage. Google seems to have a vague, poorly grounded fear that closing the loophole would open the way to frivolous lawsuits and investigations and lead to a slippery slope that will damage its interests and the freedom of the internet.

"Poorly grounded fear?" That's just wrong. Kristof seems totally ignorant of issues related to intermediary liability on the internet -- an issue that has been studied for quite some time. When you give people tools to put liability on online services for the actions of their users, the tools are abused. Every time. They get abused for censorship. We know this. You don't have to look any further than the intermediary liability setup we have in the copyright realm, where every year we see millions of false DMCA notices filed just to censor content, and not for any reason having to do with copyright.

How do you think things will turn out when you're able to not just threaten a website with civil copyright penalties with limited damages, but with potential criminal penalties, through a vaguely worded law where mere "knowledge" can get your entire site in trouble? But again, Kristof doesn't care.

That impresses few people outside the tech community, for the Stop Enabling Sex Traffickers Act was crafted exceedingly narrowly to target only those intentionally engaged in trafficking children. Some tech companies, including Oracle, have endorsed the bill.

First, this is wrong. Lots of people outside the tech industry have raised concerns -- including free speech groups like the ACLU. But, even if it were only the tech community, why wouldn't you listen to the industry that actually has the experience in understanding how these kinds of laws are regularly abused to silence perfectly legitimate speech and to quash perfectly legitimate services? Wouldn't their input be valuable? Why does Kristof brush them off? As for the Oracle line -- let's be clear: Oracle and HP are the only "tech" companies that have come out in support of the bill, and neither runs online services impacted by CDA 230. It's completely disingenuous to argue that Oracle represents "tech" when it's not an internet service provider that would be impacted by changes in CDA 230. Why even listen to them, rather than those who have the actual experience?

And the idea that this was "crafted exceedingly narrowly to only target those intentionally engaged in trafficking children" is just, on its face, wrong. First off, the bill isn't limited just to trafficking having to do with children, but I think we can all agree that any trafficking is problematic. The issue is that it doesn't just punish those "intentionally engaged in trafficking." It specifically targets any website that is used in a way that "assists, supports or facilitates" trafficking and has broadly defined "knowledge" that the site is used that way. That's... not intentionally engaging in trafficking. It's much, much, much broader. Let's say you're Airbnb. SESTA makes it much riskier to be in business. If Airbnb hears that someone used Airbnb to traffic someone (which, unfortunately, it has no practical way to detect), it now risks criminal and civil lawsuits, because it "knew" of conduct that "assisted, supported, or facilitated" trafficking. This is true even if Airbnb doesn't know which accounts were used for this.

“This bill only impacts bad-actor websites,” notes Yiota Souras, general counsel at the National Center for Missing & Exploited Children. “You don’t inadvertently traffic a child.”

This is... just so misguided and wrong it's almost laughable. No, of course, no one "inadvertently" traffics a child. But that's not what this law is about. The law is about blaming websites if one of its users does anything related to trafficking someone via its services. And, that creates massive potential liability. Say someone wants to get our little site in trouble? They could just go and post links in comments to sex trafficking ads, and suddenly we're facing potential criminal charges. We're not Google. We can't hire staff to read every possible comment and recognize whether or not they're linking to illegal activity. And despite what some will say, even Google can't possibly hire enough staff, or get its AI good enough, to parse everything it touches to see whether or not it's linked to illegal activity. But under the current setup of SESTA, this leads you to a risk of massive liability.

The concerns here are real -- and Kristof is either ignorant or being purposely blind to the arguments here.

Senator Rob Portman, an Ohio Republican and lead sponsor of the legislation, says that it would clearly never affect Google. “We’ve tried to work with them,” Portman told me.

This is laughable. The bill would impact basically every site, including Google. After all, it was just a few years ago that a Mississippi Attorney General went on an illegal fishing expedition against Google -- put together by the MPAA's lawyers -- demanding all sorts of information from Google. Based on what? Well, Jim Hood said that because he could use Google to find sex trafficking ads, Google was breaking the law. A court tossed this out, and the two sides eventually settled, but under SESTA, Hood would now be able to go after Google criminally if any search turned up trafficking. And how the hell is Google supposed to make sure that no one ever uses any of its properties for sex trafficking?

But, never mind the facts. Kristof insists there's no issue here because the bill's sponsor says there's no issue.

Senator Richard Blumenthal of Connecticut, the lead Democratic sponsor, adds that “it’s truly baffling and perplexing” that some in the tech world (Google above all) have dug in their heels. He says the sex trafficking bill gathered 28 co-sponsors within a week, making it a rare piece of bipartisan legislation that seems likely to become law.

It's truly baffling that those with actual experience and knowledge in how weakening intermediary liability laws creates all sorts of problems are now telling you there will be all sorts of problems? And, really, isn't this the same Senator Richard Blumenthal who, when he was Connecticut Attorney General, was famous for campaigning against CDA 230 and blaming tech for basically everything? He's not exactly a credible voice. But, Kristof has his story and he apparently seems willing to believe anyone who says anything, no matter how little is based on facts, if it supports his version of the story.

I write about this issue because I’m haunted by the kids I’ve met who were pretty much enslaved, right here in the U.S. in the 21st century. I’ve been writing about Backpage for more than five years, ever since I came across a terrified 13-year-old girl, Baby Face, who had been forced to work for a pimp in New York City.

And you've been repeatedly called out and corrected for factual errors in your writing on this issue. Because you're quick to believe things that later turn out to be wrong. And, yes, stories like ones you've come across are awful and we should be doing everything possible to stop such exploitation. But blaming internet companies doesn't help. You blame the actual criminals, the ones trafficking the children. But, Kristof is clear: he doesn't care about blaming those actually responsible. He wants to take down internet companies. Because reasons.

But it’s not enough to send a few pimps to prison; we should also go after online marketplaces like Backpage. That’s why Google’s myopia is so sad.

Why? Why should we blame internet companies because people use them for illegal activity? What's wrong with blaming the people who actually break the laws? CDA 230, as currently written, encourages platforms to cooperate with law enforcement and to take down content. SESTA would undermine that and stop companies from working with law enforcement, because any admission of "knowledge" can be used against them.

In response to my inquiries, Google issued a statement: “Backpage acted criminally to facilitate child sex trafficking, and we strongly urge the Department of Justice to prosecute them for their egregious crimes against children. … Google will continue to work alongside Congress, antitrafficking organizations and other technology companies to combat sex trafficking.”

Fine, but then why oppose legislation? Why use intermediaries to defend Backpage? To me, all this reflects the tech world’s moral blindness about what’s happening outside its bubble.

Why oppose it? Because the legislation is a nuclear bomb on how the internet works and a direct attack on free speech. It's not "moral blindness" at all. In fact, SESTA would be a moral disaster because it removes incentives for companies to help stop trafficking, out of fear of creating "knowledge" for which they'll face civil and criminal lawsuits. This has been explained to Kristof -- and, in fact, people told him this on Twitter after his article was published, and he insisted that no one other than Google seemed concerned with SESTA.

That's also not true. As we've seen with our own letter, dozens of tech companies are worried about it. And we've talked to many more who admitted to us that they, too, think this is an awful law, but they're afraid of grandstanding folks like Kristof publishing misleading screeds against them falsely saying that worrying about SESTA is the same as supporting sex trafficking.

Incredibly, when an actual human trafficking expert and researcher, Dr. Kim Mehlman-Orozco, decided to challenge Kristof and point out that his opinions aren't backed up by the actual research, Kristof dismissed her views and data as not being as valuable as the few anecdotes he has.

Even if Google were right that ending the immunity for Backpage might lead to an occasional frivolous lawsuit, life requires some balancing.

Uh, what? This is basically Kristof first admitting that he's wrong that it won't impact sites other than Backpage, and then saying "meh, no biggie." But that's... really fucked up. We're not talking about the "occasional frivolous lawsuit." From what we've seen with the DMCA, it seems likely that there would be a rash of dangerous lawsuits, and companies being forced out of business -- not to mention tons of frivolous threats that never get to the lawsuit stage, but lead to widespread censorship just out of the fear of possible liability. How can Kristof just shrug that off as "balance"?

For example, websites must try to remove copyrighted material if it’s posted on their sites. That’s a constraint on internet freedom that makes sense, and it hasn’t proved a slippery slope. If we’re willing to protect copyrights, shouldn’t we do as much to protect children sold for sex?

HOLY SHIT. And here we learn that Kristof is so completely out of his depth that it's not even funny. Seriously, someone needs to educate Nick Kristof a little on how the DMCA has been abused to silence speech, to kill companies and to create huge problems for free speech online. And that's with much lower penalties than what we're talking about with SESTA.

I asked Nacole, a mom in Washington State whose daughter was trafficked on Backpage at the age of 15, what she would say to Google.

“Our children can’t be the cost of doing business,” she said. Google understands so much about business, but apparently not that.

Ah, always close with a "for the children!" argument after making a bunch of statements that are just devoid of facts. No one is fighting this for the support of "business." They're doing it because they understand how important intermediary liability protections are against undermining how the internet works and how free speech works online. Kristof has a long history of not caring about facts so long as he gets a good story about just how concerned he is about trafficking. We're all concerned about trafficking -- but passing a law that will make the problem worse just to appear like you're a hero is not the solution, Nick.


Posted on Techdirt - 19 September 2017 @ 9:02am

Is There A Single Online Service Not Put At Risk By SESTA?

from the the-risks-are-big dept

Earlier today, I wrote up a list of the many problems with SESTA and how it will be abused. Over and over again, we've seen defenders of the bill -- almost none of whom have much, if any, experience in managing services on the internet -- insist that the bill is "narrowly targeted" and wouldn't create any problems at all for smaller internet services. However, with the way the bill is worded, that seems unlikely. As stated in the last post, by opening up sites to both actions from state Attorneys General and civil lawsuits, SESTA puts almost any site that offers services to the public at risk. The problematic language in the bill is that this is the "standard" for liability:

"The term 'participation in a venture' means knowing conduct by an individual or entity, by any means, that assists, supports, or facilitates a violation...."

And that could apply to just about anyone offering services online. So, let's dig into a few examples of companies and services potentially facing liability thanks to this nuclear bomb-sized hole in CDA 230.

  • Airbnb: I did a whole post on this earlier. But there are multiple reports of how some people have used Airbnb for prostitution. I'm quite sure that Airbnb doesn't want its users to use the platform in this way, but under SESTA, it will face criminal and civil penalties -- and it has no way to prevent it. Prosecutors or civil litigants can easily point to these articles, and note that this demonstrates "knowledge" that Airbnb is "facilitating" sex trafficking, and, voila, no CDA 230 protections.
  • Square: The popular payments processor that has made it easy for small businesses to accept credit cards... can also be used for prostitution/sex trafficking. Square can't specifically watch over what each of its customers is selling, but clearly, it wouldn't be difficult for prosecutors and/or civil litigants to argue that Square is knowingly facilitating trafficking by allowing traffickers to accept payments.
  • Facebook: Obviously, tons of sex traffickers use Facebook to advertise their wares. There have been news stories about this. So clearly Facebook has "knowledge." If it can't magically eradicate it, it may also now have to deal with criminal and civil lawsuits lobbed its way.
  • Snapchat: Last year there was a story of a teenager lured into a sex trafficking ring via Snapchat. So, clearly, lawyers might argue that Snapchat has "facilitated" sex trafficking.
  • Wordpress/Automattic: Automattic hosts a significant portion of the entire internet with its powerful Wordpress.com platform. It's pretty damn likely that at least some sex traffickers use Wordpress to set up sites promoting what they're offering. Thus, it's "facilitating" sex trafficking.
  • Wikipedia: You wouldn't think of Wikipedia as being a hub for sex trafficking, but the site gets hit with spam all the time, and people trying to promote stuff via its webpages -- and even when spam is edited out, it remains viewable in the history tabs. So, if a few links to trafficking advertisements show up, even if edited out, Wikipedia could face liability for facilitating such ads.
  • Google Docs: Did sex traffickers use Google docs to create fliers or manage a spreadsheet? Is that "facilitating" sex trafficking? Well, we might not know until after a court goes through a long and involved process to figure it out.
  • Cloudflare: Tons of websites use Cloudflare as a CDN to provide better uptime. What if one of them involves advertisements for sex trafficking? Is Cloudflare liable? After all, its CDN and anti-DDoS technology helped "facilitate" the service...
  • Rackspace: This popular hosting company hosts millions of websites. If one of them hosts advertising for trafficking, Rackspace can be held liable for facilitating sex trafficking.
  • Amazon: Through its web services arm, Amazon hosts a large portion of the internet. Can you say for certain that Amazon S3 isn't used somewhere by some sex trafficking parties?
  • YouTube: These days, almost anything can be found in videos, and while YouTube has a system to "notify" the company of abuse, that may be used against the company, claiming "knowledge." After all, in a case in Italy where Google execs were found criminally liable, the fact that some users clicked the "notify" button was used as evidence against the execs.
  • Namecheap: Has Namecheap ever registered a domain that was then used by sex traffickers? Well, then it can be argued that it facilitated sex trafficking, and is not protected by CDA 230.
  • Indeed: Basically any "job board" is at risk. There are stories of sex traffickers seeking victims via promises of summer jobs -- and while these stories are highly dubious at best, similar accusations against online job boards under SESTA would be easy to make.
  • Reddit: As an open forum, certainly sex trafficking gets discussed in some corners of the site. Is it possible that some have used it to help facilitate trafficking? Absolutely.
  • Any site that has comments: And that includes sites like Techdirt. Want to get a site in trouble? Why not spam its comments with links to sex trafficking ads? Voila, you've now put those sites at risk of criminal and civil liability...

The point of this is that this list can go on and on and on. Almost any internet service can be used in some way to "facilitate" sex trafficking. And rather than recognizing that the problem is those engaging in sex trafficking, SESTA now lets everyone go after the tools they use. But nearly all those tools are mostly used for perfectly legitimate, legal activity. Yet, under SESTA, all face massive liability and the potential for criminal charges.


Posted on Techdirt - 19 September 2017 @ 6:31am

Why SESTA Is Such A Bad Bill

from the so-much-damage dept

We've been talking quite a bit about SESTA -- the Stop Enabling Sex Traffickers Act -- and why it's so problematic, but with hearings today, I wanted to dig in a bit more closely with the text to explain why it's so problematic. There are a large number of problems with the bill, so let's discuss them one by one.

Undermines the incentives to moderate content and to work with law enforcement:

This remains the biggest issue for me: the fact that the bill is clearly counterproductive to its own stated goals. When people talk about CDA 230, they often (mistakenly) only talk about CDA 230(c)(1) -- which is the part that says sites are immune from liability. This leads many people to (again, mistakenly) claim that the only thing CDA 230 is good for is absolving platforms from doing any moderation at all. But this ignores the equally important part of the same section: CDA 230(c)(2), which explicitly encourages platforms to moderate "objectionable" content by noting that good faith efforts to moderate and police that content have no impact on a platform's protection from liability under part (c)(1).

In other words: as currently written, CDA 230 says that you're encouraged to moderate your platform and take down bad content, because there's no increase in legal liability if you do so. Indeed, it's difficult to find a single internet platform that does zero moderation. Most platforms do quite a bit of moderation, because otherwise their platforms would be overrun by spam. And, if they want people to actually use their platforms, nearly every site (even those like 4chan) tends to do significant moderation out of public pressure to keep certain content off. Yet, under SESTA, you now face liability if you are shown to have any "knowledge" of violations of federal sex trafficking laws. But what do they mean by "knowledge"? It's not at all clear, as the bill just says "knowledge." Thus, if a site, for example, discovers someone using its platform for trafficking and alerts authorities, that's evidence of "knowledge" and can be used against it both in criminal charges and in civil lawsuits.

In other words, somewhat incredibly, the incentive here is for platforms to stop looking for any illegal activity on their sites, out of fear of creating knowledge which would make them liable. How does that help? Indeed, platforms will be incentivized not to do any moderation at all, and that will create a mess on many sites.

The vague "knowledge" standard will be abused:

This is sort of a corollary to the first point. The problematic language in the bill is this:

The term ‘participation in a venture’ means knowing conduct by an individual or entity, by any means, that assists, supports, or facilitates a violation...

But what do they mean by "knowing conduct"? Who the hell knows. We already know that this is going to get litigated probably for decades in court. We have some similar problems in the DMCA's safe harbors, where there have been legal battles going on many years over whether the standard is "general knowledge" v. "specific knowledge" and what is meant by "red flag knowledge." And in SESTA the language is less clear. When people have attempted to pin down SESTA's sponsors on what the standard is for knowledge, they've received wildly varying answers, which just means there is no standard, and we'll be talking about lawsuits for probably decades before it's established what is meant by "knowledge." For companies, again, the best way to deal with this is to not even bother doing any moderation of your platform whatsoever, so you can avoid any claim of knowledge. That doesn't help at all.

The even vaguer "facilitation" language will be massively abused:

In that same definition of "participation in a venture," what may be even more problematic than the vague "knowledge" standard is the even vaguer claim that conduct which, "by any means... assists, supports, or facilitates a violation" of sex trafficking laws, meets the standard of "participation in a venture." All three of those terms have potential problems. Assisting sounds like it requires proactive action -- but how do you define it here? Is correcting typos "assisting"? Is having an automated system suggesting keywords "assisting"? Is autocompleting search "assisting"? Because lots of sites do things like that, and it doesn't give them any actual knowledge of legal violations. How about "supporting"? Again, perfectly benign activities can be seen as "supporting" criminal behavior without the platform being aware of it. Maybe certain features are used in a way that can be seen as supporting. We've pointed out that Airbnb could be a target under SESTA if someone uses an Airbnb for sex trafficking. Would the fact that Airbnb handles payment and reviews be seen as "supporting"?

But the broadest of all is the term "facilitating." That covers basically anything. That's flat out saying "blame the tool for how it's used." Almost any service online can be used to "facilitate" sex trafficking in the hands of sex traffickers. I already discussed Airbnb above, but what about if someone uses Dropbox to host sex trafficking flyers? Or what if a sex trafficker creates advertisements in Google Docs? Or what if a pimp creates a blog on Wordpress? What if they use Skype for phone calls? What if they use Stripe or Square for payments? All of those things can be facilitation under this law, and the companies would have no actual knowledge of what's going on, but would face not only criminal liability but the ability of victims to sue them rather than the actual traffickers.

This is the core problem: this bill targets the tools rather than the law breakers.

Punching a hole in CDA 230 will be abused:

This is one that seems to confuse people who don't spend much time looking at intermediary liability protections, how they work and how they'll be abused. It's completely normal for people in that situation to not recognize how widely intermediary liability is used to stifle perfectly legitimate speech and activity. However, we know damn well from looking at the DMCA, in particular, that when you set up a process by which there might be liability on a platform, it's regularly abused by people angry about content online to demand censorship. Indeed, we've seen people regularly admit that if they see content they dislike, even if there's no legitimate copyright claim, they'll "DMCA it" to get it taken down.

Here, the potential problems are much, much worse. Because at least within the DMCA context, you have relatively limited damages (compared to SESTA at least -- the monetary damages in the DMCA can add up quickly, but at least they're only monetary and limited to a ceiling of $150,000 per work infringed). With SESTA, criminal penalties are much more stringent (obviously), which will create massive incentives for platforms to cave immediately, rather than face the risk of criminal prosecution. Similarly, the civil penalties have no upper bound under the law -- meaning the potential monetary penalty may be significantly higher.

The chilling effects of criminal charges:

Combine all of this and you create massive chilling effects for any online platforms -- big or small. I already explained earlier why the new incentives will not be to help law enforcement or to moderate content at all, for fear of creating "knowledge" but it's even worse than that. Because, for many platforms, the massive potential liability from SESTA will mean they don't create any kind of platform at all. A comment feature on a website would become a huge liability. Any service that might conceivably be used by anyone to "facilitate" sex trafficking creates the potential for serious criminal and civil liability, which should be of great concern. It would likely lead to many platforms not being created at all, just because of the potential liability. For ones that already exist, some may shutter, and others may greatly curtail what the platform allows.

State Attorneys General have a terrible track record on these issues:

In response to the previous point, some may point out (correctly!) that the existing federal law already exempts federal criminal charges -- meaning that the DOJ can go after platforms if it finds that they're actively participating in sex trafficking. But, for as much as we rag on the DOJ, they tend not to be in the business of going after platforms just for the headlines. State AGs, on the other hand, have a fairly long history of doing exactly that -- including directly at the behest of companies looking to strangle competitors.

Back in 2010 we wrote about a fairly stunning and eye-opening account by Topix CEO Chris Tolles about what happened when a group of State Attorneys General decided that Topix was behaving badly. Despite the fact that they had no legal basis for doing so, they completely ran Topix through the wringer, because it got them good headlines. Here's just a snippet:

The call with these guys was actually pretty cordial. We walked them through how we ran feedback at Topix, that how in January 2010, we posted 3.6M comments, had our Artificial Intelligence systems remove 390k worth before they were ever even put up, and how we had over 28k feedback emails and 210k user flags, resulting in over 45k posts being removed from the system. When we went through the various issues with them, we ended up coming to what I thought was a set of offers to resolve the issues at hand. The folks on the phone indicated that these were good steps, and that they would circle back with their respective Attorneys’ General and get back to us.

No good deed goes unpunished

So, after opening the kimono and giving these guys a whole lot of info on how we ran things, how big we were and that we dedicated 20% of our staff on these issues, what was the response. (You could probably see this one coming.)

That’s right. Another press release. This time from 23 states’ Attorney’s General.

This pile-on took much of what we had told them, and turned it against us. We had mentioned that we required three separate people to flag something before we would take action (mainly to prevent individuals from easily spiking things that they didn’t like). That was called out as a particular sin to be cleansed from our site. They also asked us to drop the priority review program in its entirety, drop the time it takes us to review posts from 7 days to 3 and “immediately revamp our AI technology to block more violative posts” amongst other things.

And, remember, this was done when the AGs had no legal leverage against Topix. Imagine what they would do if they could hold the threat of criminal and civil penalties over the company?

Similarly, remember how leaked Sony emails revealed that the MPAA deliberately set up Mississippi Attorney General Jim Hood with the plan to attack Google (with the letter Hood sent actually being written by the MPAA's outside lawyers)? If you don't recall, Hood used claims that, because he was able to find illegal stuff via Google, it meant he could go on a total fishing expedition into how it handled much of its business.

In the Sony leak, it was revealed that the MPAA viewed a NY Times article about the value of lobbying state AGs as a sort of playbook to cultivate "anti-Google" Attorneys General, who it could then use to target and take down companies the MPAA didn't like (remember, this was what the MPAA referred to, unsubtly, as "Project Goliath").

Do we really want to empower that same group of AGs with the ability to drag down lots of other platforms with crazy fishing expeditions, just because some angry Hollywood (or other) companies say so?

Opening up civil lawsuits will be abused over and over again:

One of the big problems with SESTA is that it will open up internet companies to getting sued a lot. We already see a bunch of cases every year where people who are upset about certain content online, target lawsuits at those sites just out of anger. The lawsuits tend to get thrown out, thanks to CDA 230, but lawyers keep trying creative ideas to get around CDA 230, adding in all sorts of frivolous attempts. So, for example, after the decision in the Roommates case -- in which Roommates.com got dinged for activity not protected by CDA 230 (specifically its own actions that violated fair housing laws) -- lots of people cite the Roommates case as an example of why their own argument isn't killed off by CDA 230.

In other words, if you give private litigants a small loophole to get around CDA 230, they try to jump in and expand it to cover everything. So if SESTA becomes law, you can expect lots of these lawsuits, where people will go to great lengths to argue that just about any lawsuit is not protected by 230 because of supposed sex trafficking occurring via the site.

Small companies will be hurt most of all:

There's this weird talking point making the rounds that the only one really resisting SESTA is Google. We've discussed a few times why this is wrong, but let's face it: of all the companies out there, Google is probably best positioned (along with Facebook) to weather any of this. Both Google and Facebook are used to massive moderation on their platforms. Both companies have built very expensive tools for moderating and filtering content, and both have built strong relationships with politicians and law enforcement. That's not true for just about everyone else. That means SESTA would do the most damage to smaller companies and startups, who simply cannot invest the resources to deal with constant monitoring and/or threats from how people use their platform.

Given all of these reasons, it's immensely troubling that SESTA supporters keep running around insisting that the bill is narrowly tailored and won't really impact many sites at all. It suggests either a willful blindness to the actual way the internet works (and how people abuse these systems for censorship) or a fairly scary ignorance level, with little interest in getting educated.


Posted on Techdirt - 18 September 2017 @ 7:10pm

EFF Resigns From W3C After DRM In HTML Is Approved In Secret Vote

from the disappointing dept

This is not a huge surprise, but it's still disappointing to find out that the W3C has officially approved putting DRM into HTML5 in the form of Encrypted Media Extensions (EME). Some will insist that EME is not technically DRM, but it is the standardizing of how DRM will work in HTML going forward. As we've covered for years, there was significant concern over this plan, but when it was made clear that the MPAA (a relatively new W3C member) required DRM in HTML, and Netflix backed it up strongly, the W3C made it fairly clear that there was no real debate to be had on the issue. Recognizing that DRM was unavoidable, EFF proposed a fairly straightforward covenant: that those participating agree not to use the anti-circumvention provisions of the DMCA (DMCA 1201) to go after security researchers who cracked DRM in EME. The W3C already has similar covenants regarding patents, so this didn't seem like a heavy lift. Unfortunately, this proposal was more or less dismissed by the pro-DRM crowd as an attempt to relitigate the question of DRM itself (which was not true).
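
For anyone wondering what EME actually standardizes at the browser level, here is a minimal sketch of the API flow a page goes through to play protected video, using the simplest key system defined by the spec. The codec string and license-server URL are illustrative placeholders; real deployments use proprietary key systems such as Widevine or PlayReady and their own licensing back ends.

```typescript
// Minimal sketch of the Encrypted Media Extensions (EME) flow. The codec
// string and license server URL below are placeholders, not real endpoints.
const video = document.querySelector('video') as HTMLVideoElement;

async function setUpEme(): Promise<void> {
  // Ask the browser for a key system / codec combination it supports.
  const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);

  // Create a MediaKeys object and attach it to the <video> element.
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // When the browser hits encrypted media, it fires 'encrypted' with the
  // initialization data needed to start a license exchange.
  video.addEventListener('encrypted', async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();

    // The browser's content decryption module produces a license request,
    // which the page forwards to a license server it controls.
    session.addEventListener('message', async (msg: MediaKeyMessageEvent) => {
      const response = await fetch('https://license.example.com/acquire', { // hypothetical endpoint
        method: 'POST',
        body: msg.message,
      });
      await session.update(await response.arrayBuffer());
    });

    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```

In real deployments the decryption itself happens inside a proprietary content decryption module, and probing that component is exactly the kind of security research that risks DMCA 1201 liability -- which is what EFF's proposed covenant was meant to address.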

Earlier this year, Tim Berners-Lee, who had the final say on things, officially put his stamp of approval on EME without a covenant, leading the EFF to appeal the decision. That appeal has now failed. Unfortunately, the votes on this were kept entirely secret.

So much for transparency.

In Bryan Lunduke's article about this at Network World, he notes that the W3C says it asked members whether they wanted their votes made public, and that all declined. But Cory Doctorow (representing EFF) says EFF was actually slapped on the wrist for asking W3C members if they would record their votes publicly:

“The W3C did not, to my knowledge as [Advisory Committee] rep, ask members whether they would be OK with having their votes disclosed in this latest poll, and if they had, EFF would certainly have been happy to have its vote in the public record. We feel that this is a minimal step towards transparency in the standards-setting that affects billions of users and will redound for decades to come.”

“By default, all W3C Advisory Committee votes are ‘member-confidential.’ Previously, EFF has secured permission from members to disclose their votes. We have also been censured by the W3C leadership for disclosing even vague sense of a vote (for example, approximate proportions).”

It was eventually revealed that out of 185 members participating in the vote, 108 voted for DRM, 57 voted against, and 20 abstained.

And while the W3C insisted it couldn't reveal who voted for or against the proposal... it had no problem posting "testimonials" from the MPAA, the RIAA, NBCUniversal, Netflix, Microsoft and a few others talking about just how awesome DRM in HTML will be. Incredibly, Netflix even forgot the bullshit talking point that "EME is not DRM" and directly emphasized how "integration of DRM into web browsers delivers improved performance, battery life, reliability, security and privacy." Right, but during this debate we kept getting yelled at by people who said EME is not DRM. So nice of you to admit that was all a lie.

In response to all of this, Cory Doctorow has authored a scathing letter, having the EFF resign from the W3C. It's worth reading.

The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible.

But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history.

In our campaigning on this issue, we have spoken to many, many members' representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn't on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool's errand.

We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures.

This is a disappointing day for the web, and a black mark on Tim Berners-Lee's reputation and legacy of stewardship over it.


Posted on Techdirt - 18 September 2017 @ 11:51am

The Senate Is Close To Undermining The Internet By Pretending To 'Protect' The Children

from the 230-matters dept

Protecting children from harm is a laudable goal. But, as we've noted for many years, grandstanding politicians have a fairly long history of doing a lot of really dangerous stuff by insisting it needs to be done "for the children." That doesn't mean that all "for the children" laws are bad, but they do deserve scrutiny, especially when they appear to be reactive to news events, and rushed out with little understanding or discussion. And that's a big part of our concern with SESTA -- the Stop Enabling Sex Traffickers Act -- a "for the children" bill. With a name like that, it's difficult to oppose, because we're all in favor of stopping sex trafficking. But if you actually look at the bill with any understanding of how the internet works, you quickly realize that it will be tremendously counterproductive and would likely do a lot more to harm trafficking victims by making it much more risky for internet services to moderate their own sites, and to cooperate with law enforcement in nabbing sex traffickers using their platforms.

There's a hearing tomorrow morning about SESTA, and the bill is quickly moving forward, with very few Senators expressing any real concern about the impact it might have on free speech or the internet -- despite the fact that a ton of tech companies and free speech advocates have spoken out about their concerns. Instead, over and over again, we're hearing false claims about how it's just Google that's concerned. Last month, we'd put up a page on our Copia site about the bill with a letter to Congress signed by a few dozen tech companies. Today we're officially announcing a standalone site, 230Matters.com, that explains why CDA 230 is so important, highlighting the many different parties concerned with the bill, from the ACLU and EFF to tech companies to think tanks and more. The site also hosts the letter that we sent to Congress with our concerns about the bill, put together with the group Engine Advocacy and signed by over 40 companies including Kickstarter, Reddit, Tucows, NVCA, Github, Automattic, Cloudflare, Rackspace, Medium and more.

That's not "just Backpage" or "just Google". The letter was signed by internet companies big and small that know just how damaging SESTA will be -- not just to their ability to operate online, but to their own efforts to proactively moderate their own sites, or even to work with law enforcement to help stop trafficking online. In other words, this bill is a double whammy: (1) it will greatly harm innovation and free speech online and (2) do so in a way that is likely to make trafficking worse. Unfortunately, supporters of the bill are falsely claiming that being against this bill is the equivalent of supporting sex trafficking. That's dangerous and leaves no room for actual discussion about why the bill will be so counterproductive.

The letter is still open for more signatures -- so if you represent a company that is concerned about this bill, please consider signing on.

With Congress paying attention to SESTA this week, you can expect more posts from us exploring the problems with the bill and with the arguments in its favor. We already had one post earlier today debunking the attacks on EFF and CDT, and more are forthcoming...


Posted on Techdirt - 15 September 2017 @ 7:39pm

When Godwin's Law Met The Streisand Effect

from the take-it-down,-you-nazi dept

Okay, here's a fun post for a Friday evening: Earlier this week, I was at World Hosting Days, where I gave a keynote speech about the importance of CDA 230 and things like intermediary liability protections -- and why they are so important to protecting free speech online. The emcee of the event was Mike Godwin, who (among his many, many accomplishments over the years as an internet lawyer and philosopher) coined Godwin's Law. The organizers of the event, realizing that they had the guy who coined Godwin's Law and the guy (me!) who coined the Streisand Effect in the same place at the same time, thought it might be fun to have the two of us talk about these two memes.

And, voila. Here's the video of the two of us discussing it. We're also planning to release this as a podcast soon, so if you already listen to the Techdirt podcast and want to wait for that, feel free... But, if you want to skip ahead and watch/listen now, go for it.


Posted on Techdirt - 15 September 2017 @ 10:44am

Moral Muppets At Harvard Cave In To The CIA; Rescind Chelsea Manning's Fellowship

from the fucking-cowards dept

Harvard is one of the most prestigious universities in the world (and its graduates often feel the need to remind you of that). But apparently Harvard is more worried about protecting its reputation from the elite than actually fulfilling its stated mission of "educating the citizens and citizen-leaders for our society." In an act of utter cowardice, it withdrew a Visiting Fellowship that it gave to Chelsea Manning just a couple days after announcing it -- all because the CIA and its friends got upset. Harvard caving in to the CIA is not a good look.

Two days ago, Harvard's Institute of Politics at the Kennedy School announced that Chelsea Manning would be a "Visiting Fellow" for the 2017-2018 school year. She was joining others -- including former Trump press secretary Sean Spicer, former Trump campaign manager Corey Lewandowski and Clinton campaign manager Robby Mook. The Visiting Fellows program is basically a high falutin' way of saying that these people would come give some talks at the school. But the point of the program -- in theory -- is to expose people to a variety of ideas from a variety of different perspectives. Personally, I think honoring Spicer, Lewandowski and Mook is fairly ridiculous, but I respect and support Harvard wishing to bring them -- or anyone -- in to talk about their experience.

But, of course, anything having to do with Manning is controversial to some -- mostly those who have bought into a misleading line of tripe from cable news. And thus people freaked out that Harvard was including her. Among those most triggered by Harvard planning to have Manning come talk to students was the CIA. On Thursday, former CIA deputy director (and former acting director) Michael Morell resigned from his own fellowship (in a different program) at the Kennedy School in protest. His letter is full of debunked bullshit.

Unfortunately, I cannot be part of an organization -- The Kennedy School -- that honors a convicted felon and leaker of classified information, Ms. Chelsea Manning, by inviting her to be a Visiting Fellow at the Kennedy School's Institute of Politics. Ms. Manning was found guilty of 17 serious crimes, including six counts of espionage, for leaking hundreds of thousands of classified documents to Wikileaks, an entity that CIA Director Mike Pompeo says operates like an adversarial foreign intelligence service.

Senior leaders in our military have stated publicly that the leaks by Ms. Manning put the lives of US soldiers at risk. Upon her conviction, then Rep. Mike Rogers and Rep. Dutch Ruppersberger, the top Republican and Democrat on the House Intelligence Committee at the time, praised the verdict, saying "Justice has been served today." They added "Pfc. Manning harmed our national security, violated the public's trust, and now stands convicted of multiple serious crimes."

This statement is hogwash. Yes, she was convicted of various crimes including espionage, but only because the Espionage Act is a complete unconstitutional joke that makes no distinction between leaking to the press and spying for a foreign government -- and under which you're not allowed to share your motives for leaking information. Saying she was "convicted of espionage" without context is misleading bullshit and Morell, of all people, knows that and is exploiting it.

The claim that Pompeo now says that Wikileaks is acting like an "adversarial foreign intelligence service" is bullshit and misleading in two ways. First, Pompeo is not exactly an unbiased observer. He's long been a massive surveillance state cheerleader -- who was one of the biggest supporters of having the NSA illegally spy on nearly every American, and who has a long history of grandstanding against those with the courage to blow the whistle on the unconstitutional activities Pompeo himself has championed (more on him in a moment).

Separately, even if you accept Pompeo's recent statements about how Wikileaks acts today, anyone with any knowledge of the history (which Morell certainly has) knows that Wikileaks was a very different kind of operation back when Manning first leaked the documents to the site. Manning's leaks to Wikileaks were really its first big "government" leak. Earlier leaks had been more targeted at corporate malfeasance, and the site's reputation at the time was as a general home for hosting whistleblowing documents of all kind.

As for Ruppersberger and Rogers' statements, they are in the Pompeo camp as long time defenders of the surveillance state. Ruppersberger's district was where many NSA employees lived, and Rogers' reputation was largely built around acting like a tough guy on "law and order" and surveillance. So, big whoop.

The really obnoxious and bullshit part of Morell's letter, though, is the claim that "our military have stated publicly that the leaks by Ms. Manning put the lives of US soldiers at risk." Note Morell's careful choice of words. He didn't say that she put people's lives at risk. Or that anyone was harmed by Manning's whistleblowing. He says that some in the military publicly stated that lives were put at risk. His careful choice of words is because he knows full well that at Manning's sentencing hearing, those same military officials admitted there was no evidence of any lives harmed as a result of the leaks. It was also admitted that the earlier claims of harm were misleading, in that some of the names that the military had claimed had died... had actually died before the Wikileaks disclosures.

Back to Pompeo. Soon after Morell's letter became public, CIA director Pompeo refused to give a planned speech at Harvard, giving a similarly bullshit statement:

"My conscience and duty to the men and women of the [CIA] will not permit me to betray their trust by appearing to support Harvard's decision with my appearance at tonight's event," Pompeo wrote, referring to the Thursday engagement. "Ms. Manning betrayed her country and was found guilty of 17 serious crimes for leaking classified information to Wikileaks."

"Leaders from both political parties denounced Ms. Manning's actions as traitorous and many intelligence and military officials believe those leaks put the lives of the patriotic men and women at the CIA in danger," Pompeo continued. "And those military and intelligence officials are right."

Again, this is bullshit for all the same reasons that Morell's letter was bullshit.

But Harvard, as an academic institution that supports differences of opinion and free speech, stood up to these CIA spooks, right? Nope, they immediately caved and withdrew the fellowship, but tried to appease people by saying she could still come to speak.

We are withdrawing the invitation to her to serve as a Visiting Fellow — and the perceived honor that it implies to some people — while maintaining the invitation for her to spend a day at the Kennedy School and speak in the Forum.

I apologize to her and to the many concerned people from whom I have heard today for not recognizing upfront the full implications of our original invitation.

What a bullshit, cowardly statement in response to concern trolling from surveillance state supporters with actual blood on their hands. Mike Morell, among his many claims to fame, defended torture and the droning of innocent civilians.

Here's something else: Morell has accepted responsibility and apologized for playing a large role in providing incorrect intelligence that led the US to attack Iraq, leading to the actual deaths of thousands of US soldiers. For Harvard to rescind its offer to Manning, over false claims of putting US soldiers at risk from a guy who has admitted his own decisions led to the deaths of thousands of US soldiers, is a total travesty.

What's more, this comes just a day after it came out that Harvard administrators deliberately overruled a decision to admit a woman who was about to be released from prison for killing her child. The story is heartbreaking in many ways -- it reminds us that prison is supposed to be a place of redemption -- but the cowards at Harvard overruled what some said was "one of the strongest candidates in the country last year, period," over fears of how it would look. One of the quotes from a Harvard professor in the article is quite incredible:

But frankly, we knew that anyone could just punch her crime into Google, and Fox News would probably say that P.C. liberal Harvard gave 200 grand of funding to a child murderer, who also happened to be a minority. I mean, c’mon.

It takes courage to stand up for what's right. It takes courage to stand up for redemption after one has served their time for a crime. Harvard has no courage. Harvard is made up of cowards.

As an aside: last night was the EFF's Pioneer Awards, in which I had the honor and privilege of standing with Chelsea Manning, who gave a truly inspirational speech about redemption and the ability to face adversity with dignity, just minutes before Harvard showed that it had no dignity at all.


Posted on Techdirt - 15 September 2017 @ 9:35am

Trump Administration Says It's Classified If They Can Let The NSA Spy On Americans

from the that's-not-how-this-is-supposed-to-work dept

Senator Ron Wyden, as a member of the Senate Intelligence Committee, spent half a decade trying to get President Obama's Director of National Intelligence, James Clapper, to answer some fairly straightforward questions about NSA surveillance on Americans. As you may recall, this got so bad that Clapper flat out lied to Wyden in an open Senate hearing, which inspired Ed Snowden to leak documents to Glenn Greenwald. With the Trump administration, Dan Coats took over Clapper's job... and Clapper's role of obfuscating in response to important questions from Wyden concerning NSA surveillance. Despite promises to the contrary, Coats (like Clapper before him) has refused to share just how many Americans have their information sucked up under Section 702. Since that program is up for renewal later this year, that kind of information seems quite relevant to the debate.

However, as we noted back in June, Wyden has also been asking a different, and much more specific question of Coats. At a hearing in June, Wyden asked:

Can the government use FISA Act Section 702 to collect communications it knows are entirely domestic?

This seems like a kind of important question. Section 702, on its face, says that it can't be used to target domestic communications. Literally, the law says this: "An acquisition authorized under [this statute]... may not intentionally acquire any communication as to which the sender and all intended recipients are known at the time of the acquisition to be located in the United States."

But, as we've learned, when Senator Wyden asks an "is this happening?" question -- the answer is always "yes." And, once again, it appears that Coats is playing games. Coats responded to that question at the time saying: "Not to my knowledge. It would be against the law." That seems like a pretty clear and definitive answer: "no." Which is as it should be.

But then... something weird happened. The very next day, Coats' office put out a "clarifying" statement (ruh roh...), saying that Coats had "interpreted" Wyden's question to be referring specifically to Section 702(b)(4) (the part that says you can't spy on domestic communications). But, that's not what Wyden had asked. He had asked about the entirety of 702. So this "clarification" certainly seemed to suggest that Coats' original answer was incorrect in regards to the actual question, and instead, his staff was rewriting Wyden's question to make sure he had answered it accurately.

In other words, it appears that Coats put himself in a Clapper-position, of mistakenly claiming that the NSA isn't spying on Americans under a specific authority when it absolutely is -- and the reinterpretation of the question was his retroactive attempt to make his answer "truthy."

Not surprisingly, this didn't please Wyden, who quickly asked Coats to officially answer the original question with a yes or no, and not the reinterpreted question his office claimed he had answered.

Coats has now sent "an answer" but not a good one. He's now claiming that it's classified, and also takes some weird shots at Wyden for asking such a question in the first place:

Dear Senator Wyden:

In response to your letter of July 31, 2017, I would note that I responded to your question publicly both at the Senate Select Committee on Intelligence's open hearing on June 7, as well as in an unclassified letter to you on June 8. However, in further conversations with you and your staff, including at a closed budget hearing on June 15, it became clear that you already had the specific information that you were seeking, but this information was classified. In an effort to be responsive to you, I committed to assessing whether the sources and methods information you were asking for could be publicly released.

After consulting with the relevant intelligence agencies, I concluded that releasing the information you are asking to be made public would cause serious damage to national security. To that end, I provided you a comprehensive classified response to your question on July 24. This response also discussed, at length, why the information is properly classified and cannot be publicly released.

I want to stress that the Intelligence Community takes seriously its obligation to faithfully execute collection under Section 702 consistent with the Constitution and statutory requirements. We also take seriously our obligation to ensure Congress has all the information - both publicly available and classified - it needs to conduct oversight of this program. While I recognize your goal of an unclassified response, given the need to include classified information to fully address your question, the classified response provided on July 24 stands as our response on this matter.

Sincerely,

Daniel R. Coats

Now, for those of you thinking "okay, it makes sense that we can't reveal classified information that might harm national security," let me remind you of the question that Wyden asked:

Can the government use FISA Act Section 702 to collect communications it knows are entirely domestic?

Okay. So please explain how a simple yes or no answer to that can be classified -- especially given the plain language of the law itself? And, of course, this answer -- or, more specifically, the refusal to say "no" -- more or less confirms that the answer is a resounding "YES!": the government believes that it can use Section 702 to collect purely domestic communications, in clear contradiction to the plain language of the law.

Furthermore, if this question is so scary and so dangerous, why didn't anyone -- including Coats himself -- have any problem answering it when it was initially posed back in June? It didn't seem like such a risk to national security then. It's only a risk to national security after Coats' staff realized he misspoke? How, exactly, does that work?

As you might imagine, Senator Wyden is not pleased with this turn of events:

It is hard to view Director Coats' behavior as anything other than an effort to keep Americans in the dark about government surveillance. I asked him a simple, yes-or-no question: Can the government use FISA Act Section 702 to collect communications it knows are entirely domestic?

What happened was almost Orwellian. I asked a question in an open hearing. No one objected to the question at the time. Director Coats answered the question. His answer was not classified. Then, after the fact, his press office told reporters, in effect, Director Coats was answering a different question.

I have asked Director Coats repeatedly to answer the question I actually asked. But now he claims answering the question would be classified, and do serious damage to national security.

The refusal of the DNI to answer this simple yes-no question should set off alarms. How can Congress reauthorize this surveillance when the administration is playing games with basic questions about this program?

This is on top of the administration's recent refusal even to estimate how many Americans’ communications are swept up under this program.

The Trump administration appears to have calculated that hiding from Americans basic information relevant to their privacy is the easiest way to renew this expansive surveillance authority. The executive branch is rejecting a fundamental principle of oversight by refusing to answer a direct question, and saying that Americans don't deserve to know when and how the government watches them.

So, uh, who in the NSA is going to play the role of Snowden this time? Once again, it appears we have a Director of National Intelligence claiming no surveillance on Americans under a specific authority, when everything that Wyden is saying indicates that he damn well knows that's not true. Sooner or later someone's going to leak the fact that the intelligence community is lying to the American public in order to spy on the American public.


Posted on Techdirt - 14 September 2017 @ 10:49am

Lawyer: Without The Monkey's Approval, PETA Can't Settle Monkey Selfie Case

from the has-the-monkey-settled? dept

Ted Frank is a well-respected lawyer who has heroically dedicated much of his career to stopping bad legal practices, including sketchy settlements in class action lawsuits. Now he's taking action in another case involving a sketchy settlement: the monkey selfie case. As we highlighted earlier this week, while it was no surprise that PETA and photographer David Slater worked out a settlement agreement to end the ridiculous lawsuit PETA had filed, it was deeply concerning that part of the settlement involved PETA demanding that the original district court ruling -- the one saying, clearly, that animals don't get copyrights -- should be thrown out.

It took just a few days for Frank, on behalf of CEI, to file a wonderful and hilarious amicus brief with the court. There are a bunch of reasons why vacatur is improper here, but the real beauty of this brief is in pointing out that Naruto -- the monkey -- has been left out of the settlement, and thus not "all parties" have agreed. No, really.

PETA continued to assert that it acted as Naruto’s next friend before this Court, after Dr. Engelhardt voluntarily dismissed her appeal before briefs were filed.... The defendants argued that because Dr. Engelhardt was the only person pleaded to have any relationship with Naruto, PETA could not demonstrate the “significant relationship” required to establish next friend standing.... In response, PETA again asserted in writing and at oral argument that it acts as Naruto’s next friend....

Incredibly, PETA now represents that it entered into settlement with the defendants alone—without Naruto.... The settlement instead “resolves all disputes arising out of this litigation as between PETA and Defendants.”... This statement makes no sense. PETA did not have claims against the defendants. PETA argued repeatedly it was a next friend, a nominal party. For what they're worth, all claims arising out of this litigation belong to the sole plaintiff, Naruto....

The underlying complaint does not plead a case or controversy between PETA and defendants, and this alone bars vacatur. Without standing, PETA may not move for vacatur. It does not matter that the defendants half-heartedly moved for vacatur under their settlement agreement “without joining or taking any position as to the bases for that request.”... The losing party—Naruto—must carry the burden of proving “equitable entitlement to the extraordinary remedy of vacatur.”...

No Naruto, no standing, no vacatur.

No Naruto, no standing, no vacatur. What a world we live in.

PETA’s too-clever-by-half argument simply does not work. PETA cannot claim to be a qualified next friend, then pretend to be unqualified when it suits them for the limited purpose of vacating an unfavorable precedent. Their position is especially untenable because PETA still “contends that it can satisfy the Next Friend requirements, or should be permitted the opportunity to do so before the district court, if the appeal is not dismissed.”

Alternatively, Frank argues that since Naruto is not technically a part of the settlement, perhaps the appeals court should reject the settlement and issue its opinion anyway:

Alternatively, if the Court takes PETA’s argument literally, and if PETA agreed only to stop acting as next friend for Naruto, leaving the monkey without an advocate, such a selfish settlement would not extinguish Naruto’s appeal. A stipulation signed only on behalf of the next friend (a nominal party) cannot moot the underlying controversy with the actual party. To the extent that PETA insists this occurred, they have simply ceased to adequately represent their supposed friend Naruto. If so, PETA’s stipulation should be disregarded.

Frank also takes a stab at PETA's whole "next friend" argument and why it's so silly in a footnote. First, he notes that if the court is concerned that Naruto is now "friendless" at the court, it could appoint a guardian ad litem, with the following footnote mocking PETA's claim to "next friend" status.

The Competitive Enterprise Institute has as much of a personal relationship with Naruto as PETA pleaded (i.e., none), so might plausibly serve the role as well as PETA has. However, any next friend or guardian should have a bona fide personal and non-ideological interest in the incompetent person—putting aside the question of whether animals may be persons under Fed. R. Civ. Proc. 17.

And, of course, who knows if Naruto (or some other "next friend") won't sue again:

In any event, if Naruto’s claims were indeed not settled by PETA, vacatur should be denied because “Naruto” (that is, someone claiming to be his “next friend”) would remain free to file suit again for further acts of alleged infringement.

While this is a bit of a throwaway line, it's actually important -- and it's one that David Slater should pay attention to. Allowing PETA to toss out the lower court ruling might not end his legal troubles over this matter. Anyone else alleging to be Naruto's "next friend" might go right back to court.

Finally, Frank notes that just because the parties have announced a settlement, that doesn't mean the court can't reject it and issue a ruling -- providing guidance to other courts in the circuit on this issue.

In Americana Art, the panel chose to issue an affirming opinion notwithstanding the dismissal because of the “opportunity to provide additional guidance to the district courts.”... PETA previously stated to this Court that the case presents “a question of first impression [and] the issue is not a trivial one.” ... Given the judicial resources already expended at the district-court and appellate level, the Court can rationally conclude, especially given that PETA is attempting to elide the question of whether it is or is not a “next friend,” that, if the Court is already close to a decision in this straightforward case, it should provide “guidance to the district courts” by issuing a decision that would not require much additional expenditure of judicial resources

I would be pleasantly surprised if the 9th Circuit actually keeps the case going and issues an opinion -- but at the very least, it shouldn't ditch the district court ruling.


Posted on Techdirt - 14 September 2017 @ 9:45am

Charles Harder Loses Again: You Can't Just File Defamation Lawsuits In A Random State Because You Like Its Statute Of Limitations

from the sorry-charles dept

As you may know, Charles Harder is the lawyer behind the lawsuit Shiva Ayyadurai filed against us, so feel free to view everything we say here through that prism. Last week, of course, the judge in our case dismissed the case against us, noting that everything we said was clearly protected by the First Amendment. But that wasn't Harder's only loss of the week. Eriq Gardner points out that he also lost a case he filed against The Deal.

That case had been filed a couple months before our lawsuit, in federal court in New Hampshire. It was filed on behalf of Scottsdale Capital Advisors, a company based in Arizona, and one of its execs, the Nevada-based John Hurry, against the Delaware-registered and New York-based "The Deal" and one of its reporters, the California-based William Meagher. Now, you may wonder why this lawsuit was filed in New Hampshire, seeing as none of the states above include "New Hampshire." And, indeed, the court was wondering that too, because it dismissed the case over this bit of weird venue shopping:

Scottsdale has failed to establish that defendants have the minimum contacts with New Hampshire required for this court to exercise personal jurisdiction over them in this action consistent with the Fourteenth Amendment’s due process clause. Specifically, the plaintiffs have not demonstrated that their claims are related to the defendants’ forum-based activities or that the defendants purposefully contacted New Hampshire such that they could expect to answer for their actions here.

So, why file in New Hampshire? Here's a hint:

  • Statute of limitations for defamation lawsuits in New York: One Year
  • Statute of limitations for defamation lawsuits in Delaware: Two Years
  • Statute of limitations for defamation lawsuits in California: One Year
  • Statute of limitations for defamation lawsuits in Arizona: One Year
  • Statute of limitations for defamation lawsuits in Nevada: Two Years
  • Statute of limitations for defamation lawsuits in New Hampshire: Three Years
Which one of those is not like the others?

The articles at issue in the lawsuit were published on December 6, 2013, March 20, 2014 and April 16, 2014. The lawsuit was filed on November 18, 2016. In other words, the lawsuit was filed over two years after the publication of the articles, and thus any state with a one-year or two-year statute of limitations -- including all of the states connected to the various parties -- would not have worked. New Hampshire, however, has that lovely three year statute of limitations.
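
If you want to sanity-check that math yourself, here's a quick back-of-the-envelope sketch (the publication and filing dates are the ones above, the limitations periods come from the list, and the 365.25-day year is an approximation):

```typescript
// Rough check of which forum's statute of limitations had already run by the
// time the suit was filed. Dates are those discussed above.
const filed = new Date("2016-11-18");
const published = ["2013-12-06", "2014-03-20", "2014-04-16"].map((d) => new Date(d));

// Defamation statutes of limitations, in years, per the list above.
const limitations: Record<string, number> = {
  "New York": 1,
  "Delaware": 2,
  "California": 1,
  "Arizona": 1,
  "Nevada": 2,
  "New Hampshire": 3,
};

const msPerYear = 365.25 * 24 * 60 * 60 * 1000; // close enough for this purpose

for (const [state, years] of Object.entries(limitations)) {
  // Is at least one article recent enough to still be actionable in this state?
  const stillActionable = published.some(
    (pub) => filed.getTime() - pub.getTime() <= years * msPerYear,
  );
  console.log(`${state} (${years} yr): ${stillActionable ? "still within the window" : "time-barred"}`);
}
// Only New Hampshire's three-year window is still open in November 2016.
```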

The ruling then details the rather extraordinary effort that Scottsdale's lawyers put into trying to argue that New Hampshire is the right place to hear a case when none of the parties is actually in New Hampshire. In short, the argument appears to be that Dartmouth, a college based in New Hampshire, has a subscription to The Deal, and therefore it's published in New Hampshire. And also, The Deal tried to get Dartmouth to renew its subscription, and thus it "conducted business" in New Hampshire. The court is... not impressed for a variety of reasons.

At the time it published Meagher’s articles, and in the time since, The Deal has had only one subscriber in New Hampshire -- Dartmouth College. According to The Deal’s records, no user accessed these three articles through the Dartmouth subscription. Nor did either of the two users of the Dartmouth subscription who had signed up to receive “The DealFlow Report” at the time the articles were published open the attachments containing links to the March 25 or April 22 articles; and no evidence suggests either opened the attachment containing a link to the December 10 article. Indeed, according to data collected through Google Analytics, not a single user who read these articles through The Deal’s online portal was located in New Hampshire.

Because no evidence suggests that anyone in New Hampshire -- Dartmouth-affiliated or otherwise -- viewed the three allegedly-defamatory articles, the plaintiffs focus on other contacts between The Deal and Dartmouth. For example, The Deal solicited Dartmouth’s subscription, and renewals thereof, through emails and telephone calls specifically directed at Dartmouth. Furthermore, during the time period between January 1, 2013 and June 2017, 81 individuals were registered to use The Deal’s online portal under Dartmouth’s subscription. Approximately 30 to 40 students each year were permitted to access The Deal’s online portal via IP authentication (that is, without entering a log-in name or password). The Deal registered a total of 7,232 “sessions” by Dartmouth users visiting its online portal during this time period. The Deal also communicated directly with between 32 and 48 individuals at Dartmouth by email during this time, including regular circulation of “The DealFlow Report” to the two Dartmouth-affiliated individuals who had signed up for it.

But, the court notes that's not nearly enough to make New Hampshire the proper venue:

First, the circulation of the allegedly-defamatory articles in New Hampshire is negligible.... (“The size of a distribution of offending material helps determine whether a defendant acted intentionally.”). Though some 7,000 members of the Dartmouth community theoretically had access to The Deal Pipeline, the plaintiffs do not dispute the defendants’ representation that only 30 users were signed up to use that subscription to access The Deal’s online portal at the time the articles were published, and that only two users actually received an email newsletter containing active links to the articles. Such “thin distribution may indicate a lack of purposeful contact,” and it appears to do so here....

Regardless of the number of individuals who could have accessed the offending articles through Dartmouth’s subscription to The Deal Pipeline, the evidence presented suggests that none did. Unlike in Keeton and Calder, where New Hampshire residents read the allegedly libelous statements, presumably, damaging the plaintiffs’ reputations, Scottsdale’s reputation in New Hampshire cannot be impacted by the statements allegedly published in New Hampshire if no one in New Hampshire saw the statements.

The court also notes that there's a strong appearance that the lawsuits were filed to create a burden on the reporter, Meagher:

The defendants’ burdens of appearing in New Hampshire and the inconvenience to the plaintiffs weigh somewhat against finding jurisdiction here. While that burden on The Deal, a corporate defendant located in New York, is not heavy, the burden on Meagher, an individual residing in California, may be. Ticketmaster-N.Y., 26 F.3d at 210 (“The burden associated with forcing a California resident to appear in a Massachusetts court is onerous in terms of distance . . . .”);....

“As the First Circuit has explained, however, the ‘burden of appearance’ factor is important primarily because ‘it provides a mechanism through which courts may guard against harassment.’” R&R Auction Co., LLC v. Johnson, 2016 DNH 40, 23 (Barbadoro, J.) (quoting Ticketmaster-N.Y., 26 F.3d at 211). This is not the first action that Scottsdale has brought against the defendants for defamation. In May 2016, Scottsdale sued the defendants in New York, where The Deal is located. It withdrew that action on the eve of the deadline for defendants’ motion to dismiss, forcing the defendants to incur the expense of drafting that motion unnecessarily, and then filed this action in New Hampshire. Scottsdale also sued FINRA in Arizona over its investigations of Scottsdale. The defendants here suggest that “the Plaintiffs’ primary strategic purpose” for bringing both the New York and New Hampshire actions “was to coerce Defendants into revealing the identity of Mr. Meagher’s confidential source in the hopes that this information would bolster their case against FINRA in Arizona.” Scottsdale does not deny -- nor even address -- this allegation in its objection and did not do so at oral argument. This factor, therefore, weighs heavily against the reasonableness of this court finding personal jurisdiction.

This ruling also comes the day after another Charles Harder lawsuit was filed in New York. Again, there, we noted that there were serious statute of limitations questions.

In a profile of Charles Harder published in the Hollywood Reporter last year, it was noted that Harder is well aware of the differences in state laws over defamation:

In his offices, Harder keeps charts mapping the differences in libel and privacy laws throughout the country. He also has become a pro on where to strategically file cases.

It appears that at least some courts are not impressed with the "strategy" behind where Harder files his cases.


Posted on Techdirt - 13 September 2017 @ 12:01pm

The Latest Scam To Protect Sketchy Patents From Patent Office Review: Sell To Native Americans

from the the-patent-system-is-so-damn-broken dept

We've written a bunch over the past few years about the so-called Inter Partes Review (IPR) process at the US Patent Office. In short, this is a process that was implemented in the patent reform bill back in 2011 allowing people and companies to ask a special "review board" -- the Patent Trial and Appeal Board (PTAB) -- at the Patent Office to review a patent to determine if it was valid. This was necessary because so many absolutely terrible patents were being granted, and then being used to shake down tons of companies and hold entire industries hostage. So, rather than fix the patent review process, Congress created an interesting work-around: at least make it easier for the Patent Office to go back and check to see if it got it right the first time.

Last year, part of this process was challenged at the Supreme Court and upheld as valid. However, the whole IPR is still very much under attack. There's another big Supreme Court case on the docket right now which argues that IPR is unconstitutional (the short argument is that you can already challenge patents in court, and by taking them to an administrative board, it creates an unconstitutional taking of property without a jury). There are also some attempts at killing the IPR in Congress.

While those play out, however, never underestimate the ability of sketchy lawyers to find loopholes and dive through them in ways that are clearly sticking a giant middle finger up at the law. Such is the case with the pharmaceutical company Allergan, which just "sold" some of its patents for the dry-eye drug Restasis to the St. Regis Mohawk Tribe based in upstate New York. There are currently challenges against the Restasis patents both in court and via the IPR at the PTAB -- and the PTAB has indicated that Allergan is likely to lose its patents. But Allergan has basically short-circuited the process just days before the PTAB was set to hear arguments over the patent, and will now tell the PTAB it can't review these patents because of (no joke) the sovereign immunity of the Mohawk tribe.

The reasoning goes back, first, to a ruling at the beginning of this year where the PTAB dismissed some reviews of patents held by the University of Florida after the University -- a part of the state of Florida -- made a claim of sovereign immunity, saying it's exempted under the 11th Amendment of the Constitution. While there are some arguments against this, the PTAB agreed. The lawyers representing the University of Florida in this case apparently saw this as an opportunity. They're the same lawyers representing Allergan in this "sale."

Of course, it's a sale in name only. The only reason for the sale is to be able to avoid the IPR process. In all other ways, Allergan appears to retain control. From the NY Times article on the deal:

Under the deal, which involves the dry-eye drug Restasis, Allergan will pay the tribe $13.75 million. In exchange, the tribe will claim sovereign immunity as grounds to dismiss a patent challenge through a unit of the United States Patent and Trademark Office. The tribe will lease the patents back to Allergan, and will receive $15 million in annual royalties as long as the patents remain valid.

So, yeah. This is an insanely blatant attempt at avoiding a process put in place under the law, and where this pharmaceutical company is basically paying off a Native American tribe for the right to avoid a process that might invalidate some patents. As a side note, the tribe's quote on this to the NY Times is pretty ridiculous:

“The tribe has many unmet needs,” Dale White, the tribe’s general counsel, said in an interview. “We want to be self-reliant.”

Being "self-reliant" means doing something of actual value yourself. It doesn't mean abusing an already questionable loophole in patent law to help giant pharma companies keep their dubious patents and limit the ability of more affordable medicine to get on the market. And, of course, lots of people are predicting that there will be more deals like this in the near future.

Either way this is a big deal. Law professor Rachel Sachs has already pointed out that this could go way beyond just the IPR process and could impact claims in federal court as well. And you can be sure that if that's true it will be exploited. There is no "legitimate" reason for this patent sale and license-back other than to avoid having the patent reviewed. It's a sickeningly blatant attempt to avoid the law and to keep a patent from possible invalidation. Even those who support the patent system should be concerned when obvious games like this are played to abuse the system. It doesn't make the system look any stronger. It just shows how desperate some companies are to avoid having their patents looked at closely.

Of course, there is some more history on this issue going back quite a while. Almost exactly 10 years ago, we wrote about the ridiculousness of letting state universities claim sovereign immunity to avoid being sued for patent infringement (even while asserting patents against other entities). And, back in 2011, we saw a similar issue pop up with a Native American nation (in that case, the Quapaw Tribe in Oklahoma) able to have a patent infringement case dropped entirely by using sovereign immunity. At the time, we wondered if this might enable the creation of patent-free autonomous zones -- but that didn't really happen. Instead, we get something much, much worse: patent-holding giants totally abusing the system to make sure that bad patents can be used to inflate prices and limit competition, even in the field of important life-saving drugs.


Posted on Free Speech - 13 September 2017 @ 9:45am

Dear Government Employees: Asking Questions - Even Dumb Ones - Is Not A Criminal Offense

from the a-story-in-three-acts dept

What is it with federal government officials and their weird belief that being questioned by the public -- even with dumb questions -- is a criminal offense? Does it take three stories to make a trend? Perhaps. Let's do these one at a time.


Scene 1: Guy faces criminal charges for asking Senator if his daughter was kidnapped

I'm sure that in some recesses of Simon Radecki's mind, the following stunt was a good idea. I'm sure, when he came up with it, it seemed like a clever way to create a feeling of panic within a Senator's mind that might -- just maybe -- make him reconsider the panic his policies might be causing millions of people. And yet, still... this seems like a really dumb stunt:

After thanking Mr. Toomey for appearing, Mr. Radecki said, “We’ve been here for a while. You probably haven’t seen the news. Can you confirm whether or not your daughter Bridget has been kidnapped?”

The ensuing four-second pause was punctuated by Mr. Toomey uttering “uhhhh,” before Mr. Radecki added, “The reason I ask is because that’s the reality of families that suffer deportation …”

See? You can totally see the thought process that would lead to such questioning, even if most of us would also quickly realize what a dumb line of questioning it was and would never let it out of our heads. But dumb questions aren't illegal. But... that hasn't stopped the police from going after Radecki and charging him with "disorderly conduct." Toomey's staffers didn't help matters by saying that the question was "inherently threatening." Except, that's not even remotely true under the law. And there's a fair bit of First Amendment law on what counts as a "true threat." And a hypothetical to make a point is not considered a true threat.

Scene 2: Charges dropped against reporter for asking Health Secretary questions too loudly

The questioning in this case happened back in May and got some attention. Dan Heyman, a West Virginia reporter for the Public News Service, was arrested and charged with "willful disruption of governmental process" for the truly audacious act of yelling questions at Health and Human Services Secretary Tom Price. Of course, that's kind of his job as a reporter. For the past four months Heyman has been dealing with a set of completely bogus charges because he was doing his job, asking questions of public officials.

Thankfully, now, common sense has prevailed, as prosecutors have dropped all charges and admitted that Heyman was just doing "aggressive journalism" that "was not unlawful and did not violate the law with which he was charged." It's just unfortunate that he still had to be arrested and have criminal charges hanging over his head for four months.

Scene 3: White House lawyer promises to send the Secret Service after aggressive questioner

Sensing a pattern yet? The recently hired lawyer in the White House, Ty Cobb (note: not the dead baseball player) appeared to threaten a questioner with a Secret Service visit for asking pointed questions. Here's the exchange, as posted over at Business Insider:

"How are you sleeping at night? You’re a monster," Jetton wrote to Cobb's White House email account on Tuesday night.

"Like a baby ... " wrote Cobb, who was brought in to the White House to oversee Trump's legal and media response to the ongoing Russia investigation.

The conversation escalated quickly, with Jetton attacking "the havoc" Cobb and his "ilk are causing."

"I, like many others, lay awake, restless, my mind dissecting countless scenarios of how bad this could get, what new thing you have dreamt up to pull us down a pathway to hell," Jetton wrote. "You remind me less of a grumpy baseball player and more of that horrid clown from the Stephen King novel."

Cobb replied: "Enjoy talking to the Secret Service. Hope you are you less than nine years old as you seem to be ... "

As an aside: Cobb appears to have difficulty not responding to any random email that comes his way -- having also been completely and totally fooled by a guy who literally used the domain emailprankster.co.uk to send emails pretending to be other White House officials, eventually leading Cobb to threaten the prankster with possible felony charges.

Either way, absolutely nothing in the exchange above deserves (or is likely to get) a Secret Service visit.

Look, this isn't that hard. Being a government official -- whether elected or appointed -- is not a fun gig. You have lots of people questioning you and second guessing you all the time. And some of those people are mean. Possibly really mean. But, that's kinda part of the territory when you live and work in a mostly open democracy, rather than an authoritarian dictatorship. People get to ask questions -- even stupid, annoying or scary ones. And we don't arrest them and throw them in jail.


Posted on Techdirt - 12 September 2017 @ 10:37am

Monkey Selfie Case Reaches Settlement -- But The Parties Want To Delete Ruling Saying Monkeys Can't Hold Copyright

from the this-is-bad dept

For many years now, we've been covering the sometimes odd/sometimes dopey case of the monkey selfie and the various disputes over who holds the copyright (the pretty clear answer: no one owns the copyright, because the law only applies to humans). David Slater, the photographer whose camera the monkey used, has always claimed that he holds the copyright (and has, in the past, tried to blame us at Techdirt for pointing out that the law disagrees). A few years back, PETA, the publicity-hungry animal rights group, hired big time lawyers at Irell & Manella to argue (1) the monkey holds the copyright, not Slater, (2) PETA somehow magically can stand in for the monkey in court -- and sued Slater over it. Slater and I disagree over whether he holds the copyright, but on this we actually do agree: the monkey most certainly does not hold the copyright.

The district court ruled correctly that works created by monkeys are in the public domain and that PETA had no case. PETA appealed. Last month, we wrote that the case was likely to settle, because both sides were highly motivated to get it out of court. On Slater's side, he had told some reporters that the legal fight had left him broke (which bizarrely led to a bunch of people blaming me, which still makes no sense), while PETA desperately wanted to settle because the hearing in the case made it abundantly clear that the appeals court was not buying its argument. Indeed, it appears that the judges hearing the case could barely contain laughter at the bananas argument made by PETA's lawyers.

So it comes as little surprise that the parties have released a joint statement saying they've settled the case and asking the court to dismiss the appeal. Part of the agreement is that Slater says he'll donate 25% of any future proceeds from the monkey selfie pictures to organizations that protect the habitat of macaque monkeys in Indonesia, which seems like a good cause.

But... there is a pretty clear problem with the proposed settlement. Not only are they asking the court to dismiss the case due to the settlement, the parties have also agreed to ask the court to vacate the district court's ruling saying that animals cannot copyright works they create. Basically, PETA and its high-priced lawyers lost really badly on a fundamental issue of copyright... and now they want to erase that precedent so they or others can try again. PETA is arguing, incredibly, that if the original ruling stands, it will unfairly bind the monkey Naruto:

Here, the settlement is between PETA and Defendants. Accordingly, under Bonner Mall, PETA maintains that Naruto should not be “forced to acquiesce” to the district court’s judgment that he lacks standing under the Copyright Act where the appeal will be mooted by an agreement by PETA and PETA’s Next Friend status is contested and undecided. Rather, PETA maintains that it would be just and proper to vacate the judgment of the district court.

Wait. So PETA doesn't want Naruto -- the monkey that it claims to represent on no real basis, and who has absolutely no clue any of this is actually happening -- to be "forced to acquiesce" to the ruling? That's utter bullshit.

Of course, it's almost certainly not the real motivation here. The more likely reason is simply that PETA doesn't want that precedent on the books and there will likely be other cases in the very near future on other non-human created works. PETA's lawyers, Irell & Manella, may very well be trying to position themselves as the go-to lawyers on issues like who holds the copyright on AI-created works (answer again: no one), and having this ruling on the books, even at the district court level, would be inconvenient.

Hopefully the court will see through this and leave the ruling as is. Otherwise it seems likely that we'll be seeing a lot more of these kinds of cases. In the meantime, PETA also put a silly statement on its blog calling the case "groundbreaking." It was not groundbreaking. It was a stupid, nonsensical argument that was clearly not correct, and was basically laughed out of court. PETA says that this "sparked a massive international discussion about the need to extend fundamental rights to animals...." Except it did nothing of the sort.

Most of the press coverage you'll see about the case just sort of laughs it off -- saying "oh that silly monkey selfie case has settled." But very few of those reports mention the request to vacate the lower court ruling. It's a bad idea and hopefully the court does not allow it to happen.


Posted on Techdirt - 12 September 2017 @ 9:32am

FTC Advice On How To Deal With Equifax Hack: Er... Race The Hackers To Filing Your Taxes Before They Do

from the what-the-actual-fuck dept

So, yes, by now you know all about the whole Equifax hack and how really, really terrible it is. Lots of sites have been posting various stories about what you should do about it, when the truth is you really can't do much. A lot of people are likely going to deal with an awful lot of bad stuff almost entirely because of this leak by Equifax. Not surprisingly, the FTC has weighed in with some suggestions, most of which won't actually help very much. Most of them are the standard suggestions everyone's giving -- including checking your credit reports, putting a credit freeze on your files and basically watching very closely to see if you're fucked over by whoever has access to these files.

But the FTC's very last suggestion is the one I wanted to focus on today. It's basically "um, well, maybe try to file your tax returns early next year, so you beat hackers trying to do the same?"

File your taxes early — as soon as you have the tax information you need, before a scammer can. Tax identity theft happens when someone uses your Social Security number to get a tax refund or a job. Respond right away to letters from the IRS.

Having been the victim of someone filing fake tax returns to try to grab my refund, I can tell you it's a really shitty process to go through. The problem here, though, is the whole setup of our tax system, which makes it pretty damn easy for someone to fake your tax returns -- now made even easier thanks to this breach. If the FTC really wanted to help, it should be pushing for a complete overhaul of how tax filing works, such that merely knowing your Social Security Number and address isn't enough to file tax returns in your name. Among the many problems here, it starts with the idiotic idea that we use SSNs as an identity tool -- but there's also the fact that we continue to have the IRS force every American to play a guessing game with their taxes just to keep tax prep companies like Intuit and H&R Block happy.

I recognize that the FTC isn't directly in a position to fix this, but the fact that its best suggestion is "race the hackers to filing your tax returns and hope you get there first" should highlight just how totally fucked up our income tax system is in the US.


Posted on Techdirt - 11 September 2017 @ 1:33pm

Tesla Remotely Extended The Range Of Drivers In Florida For Free... And That's NOT A Good Thing

from the think-about-the-implications dept

In the lead up to Hurricane Irma hitting Florida over the weekend, Tesla did something kind of interesting: it gave a "free" upgrade to a bunch of Tesla drivers in Florida, extending the range of those vehicles, to make it easier for them to evacuate the state. Now, as an initial response, this may seem praiseworthy. The company did something (at no cost to car-owners) to help them evacuate from a serious danger zone. In a complete vacuum, that sounds like a good idea. But there are a variety of problems with it when put back into context.

The first thing you need to understand is that while Tesla sells different versions of its Model S, with different ranges, the range is actually entirely software-dependent. That is, it uses the same batteries in different cars -- it just limits how much they'll charge via software. Thus, spend more on a "nicer" model and more of the battery is used. So all that happened here was that Tesla "upgraded" these cars with an over-the-air update. In some ways, this feels kind of neat -- it means that a Tesla owner could "purchase" an upgrade to extend the range of the car. But it should also be somewhat terrifying.
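
To make the "software-defined range" point concrete, here's a toy sketch of the general pattern -- emphatically not Tesla's actual code, just an illustration of what it means for usable capacity to be a config value the vendor can rewrite over the air:

```typescript
// Toy illustration of software-limited hardware. The physical pack is the
// same in every car; a config value pushed over the air decides how much of
// it the owner actually gets to use.
interface VehicleConfig {
  physicalPackKwh: number;   // what's actually installed in the car
  usablePackKwh: number;     // what the software lets the driver charge to
}

// A "60" trim and a "75" trim might ship with the identical physical pack.
const car: VehicleConfig = { physicalPackKwh: 75, usablePackKwh: 60 };

// An over-the-air "upgrade" is just the vendor rewriting that one number.
function applyOtaUpdate(config: VehicleConfig, newUsableKwh: number): VehicleConfig {
  return { ...config, usablePackKwh: Math.min(newUsableKwh, config.physicalPackKwh) };
}

const beforeStorm = applyOtaUpdate(car, 75); // temporary range bump
const afterStorm = applyOtaUpdate(car, 60);  // ...which can just as easily be taken back
console.log(beforeStorm.usablePackKwh, afterStorm.usablePackKwh); // 75 60
```

The same switch that grants extra range before a hurricane can revoke it afterwards -- which is exactly the ownership question raised below.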

In some areas, this has led to discussions about the possibility of hacking the software on the cheaper version to unlock the greater battery power -- and I, for one, can't wait to see the CFAA lawsuit that eventually comes out of that should it ever happen (at least some people are hacking into Tesla's battery management system, but just to determine how much capacity is really available).

But this brings us back to the same old discussion of whether or not you really own what you've bought. When a company can automagically update the physical product you bought from it, it at least raises some serious questions. Yes, in this case, it's being used for a good purpose: to hopefully make it easier for Tesla owners to get the hell out of Florida. But it works the other way too, as law professor Elizabeth Joh points out:

And, of course, there's the possibility that one of these over-the-air updates goes wrong in disastrous ways:

So, yes, without any context, merely upgrading the cars' range sure sounds like a good thing. But when you begin to think about it in the context of who actually owns the car you bought, it gets a lot scarier.


Posted on Techdirt - 11 September 2017 @ 11:58am

Lawyers Overcome First Challenge In Showing 'We Shall Overcome' Is In The Public Domain

from the sing-it! dept

A year and a half ago, we wrote about how the same team of lawyers who successfully got "Happy Birthday" recognized as being in the public domain (despite decades of Warner Chappell claiming otherwise, and making boatloads of money) had set their sights on a similar fight over the copyright status of the song "We Shall Overcome." There were a lot of details in the original lawsuit that we wrote about -- all suggesting very strongly that the song "We Shall Overcome" was way older than the copyright holder claimed, and was almost certainly in the public domain.

There's been some back and forth in the case, but a new ruling on summary judgment motions effectively says key parts of the song are not under copyright. Specifically at issue is whether or not the first and fifth verses of the song (which are identical) are "sufficiently original" to qualify for copyright. And here, Judge Denise Cote says "nope." The verse in question is probably the part of the song you know:

We shall overcome,
We shall overcome
We shall overcome some day
Oh deep in my heart I do believe
We shall overcome some day.

Here, basically no one denies that there are extraordinarily similar songs that predate the 1960 and 1963 copyrights. The real question is whether there was some sort of substantial difference in the newly copyrighted versions compared to the original -- enough to grant a new copyright. There's a LOT of history that the ruling digs into, and I'm not going to repeat it all here. Suffice it to say, it appears that those registering the copyright were well aware that they were registering the copyright on a song that had been around for ages. Pete Seeger, who is on the copyright -- but apparently asked to have his name taken off later (which never happened, and it's now revealed that others hoped he would "forget" he asked about it) -- has said many times that the song was much older. The admission is that they filed for the copyright to prevent the song from being commercialized -- which is, in some ways, kind of the opposite of the purpose of copyright. And that's copyfraud: filing a registration for that reason is not what copyright is for, and it's not supposed to be allowed.

Here, the court doesn't reach a decision on whether or not the registration was fraud on the Copyright Office -- that issue may move on to trial. However, the judge does make it clear that the copyright here doesn't seem legit. Specifically, the question being decided at this stage is who bears the burden of proof. The holders of the copyright wanted to force the plaintiffs to prove that the copyright is invalid, arguing a "presumption of validity" in their registered copyright. But the court notes that enough evidence has been presented to raise serious questions about the legitimacy of that copyright that the burden falls on the defendants to prove that the copyright (specifically on those two identical verses) is legit:

Without a sufficiently original contribution to Verse 1/5, the Song’s Verse 1/5 does not qualify for copyright protection as a derivative work. This similarity, coupled with the failure to clearly identify the PSI Version of the Song as the Song’s antecedent is also sufficient to rebut the presumption of validity. Therefore, the Defendants may not rest on a presumption that their copyrights are valid and they bear the ultimate burden of showing the validity of those copyrights without the weight added by that presumption.

So that's not a complete "this is in the public domain." But... it's a pretty strong indication of where we're heading.

On a separate note, I'm pleased to see the following discussion of how copyright is not (as some try to argue) some sort of "natural right" or one that "confers absolute ownership":

The Constitution provides that “Congress shall have Power . . . [t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries”. U.S. Const. art. I, § 8, cl. 8. This constitutional grant of authority to create a copyright is given in express recognition of the primacy of the public interest. See TCA Television Corp. v. McCollum, 839 F.3d 168, 177 (2d Cir. 2016). “[T]he primary purpose of copyright is not to reward the author, but is rather to secure ‘the general benefits derived by the public from the labors of authors.’” New York Times v. Tasini, 533 U.S. 483, 519 (2001) (Stevens, J., dissenting) (citation omitted). “[T]he authorization to grant to individual authors the limited monopoly of copyright is predicated upon the dual premises that the public benefits from the creative activities of authors, and that the copyright monopoly is a necessary condition to the full realization of such creative activities.” Melville B. Nimmer & David Nimmer, 1 Nimmer on Copyright § 1.03[A] [hereinafter “Nimmer”]; Barton Beebe, Bleistein, the Problem of Aesthetic Progress, and the Making of American Copyright Law, 117 Colum. L. Rev. 319, 341 (2017) (“The Framers likely included the Progress Clause both to justify and to limit in some way the extraordinary grant of monopoly rights provided for by the Exclusive Rights Clause.”). As the Honorable Pierre Leval has explained, “[t]he copyright is not an inevitable, divine, or natural right that confers on authors the absolute ownership of their creations. It is designed rather to stimulate activity and progress in the arts for the intellectual enrichment of the public.” Pierre N. Leval, Toward a Fair Use Standard, 103 Harv. L. Rev. 1105, 1107 (1990).

That's not necessarily a key point in the ruling, but I think it's important to remind some people of this fact, since it's one that's frequently confused by copyright system supporters.

Either way, it's worth reading the full ruling. This is not a complete victory, but it's a good start. In the long run, it certainly seems likely that the barrier of a fake copyright on "We Shall Overcome"... shall be overcome.


Posted on Techdirt - 11 September 2017 @ 9:40am

It Doesn't Matter How Much Of An Asshole You Think Someone Is, That's No Excuse To DMCA

from the that's-not-how-copyright-law-works dept

We've pointed out time and time again that one of the problems with setting up any rules that allow for content to be taken down online is just how widely they will be abused. This is one of the reasons why we think that CDA 230's immunity is much better than the DMCA 512 safe harbors. Under CDA 230, if a platform receives a takedown demand over content that is, say, defamatory, it gets to decide how best to act, without any change in its own legal liability. It can take the content down, or it can leave it up, but there's no greater legal risk in either decision. With the DMCA, it's different. If you, as a platform, refuse to take down the content, you then risk much greater legal liability. And, because of this, we regularly see the DMCA abused by anyone who wants to make certain content disappear -- even if the complaint has nothing to do with copyright.

Take this latest example of game developer Sean Vanaman, who has promised to issue DMCA takedown notices for YouTube star PewDiePie's (Felix Kjellberg) videos featuring Vanaman's video game, Firewatch:

The issue is, more or less, that PewDiePie is, well, kind of a jackass and possibly a bigot (there's some dispute over whether he's really a bigot or just "proving a point," but I'm going with Popehat's famous Goatfucker Rule on this one). And PewDiePie did one of his awful, insensitive PewDiePie things, which has reasonably pissed off some people.

One of those people is Vanaman, who is pointing directly to this episode as the reason why he's going to issue DMCA takedowns and is urging other game developers to do the same:

And, look, it's completely reasonable to dislike PewDiePie. And it's completely reasonable to be upset that someone you dislike and believe is toxic has done videos showing your games. But what's not reasonable and also not allowed under the law is to abuse the DMCA to take down content, just because you don't like how someone's using it. PewDiePie's videos are almost certainly fair use. While we've seen some debate over "Let's Play" videos like PewDiePie's over the years, in general most copyright experts who've discussed the matter seem to feel that the standard Let's Play video is very likely to be protected by fair use.

Having seen some of PewDiePie's Firewatch Let's Play video, I'd say it definitely appears to be protected by fair use. The fact that Vanaman directly and publicly admits that he's not taking the video down for any valid copyright reason, but rather because he thinks PewDiePie is "a propagator of despicable garbage," doesn't help Vanaman's case at all. Rather, it gives PewDiePie a lot more leverage to claim that any such takedown would be abusive, and possibly even a violation of the DMCA's 512(f) prohibition on misrepresentations.

But the larger point remains: no matter what you think of PewDiePie or Vanaman, the issue here is that when we create laws that give people the power to take down content, it will be abused for a variety of reasons. Often -- as is the case here -- those reasons will have absolutely nothing to do with copyright. Vanaman spouting off about his non-copyright reasons for wanting to issue a takedown only makes that so much clearer in this case.


Posted on Techdirt - 8 September 2017 @ 7:39pm

Equifax Security Breach Is A Complete Disaster... And Will Almost Certainly Get Worse

from the hang-on... dept

Okay, chances are you've already heard about the massive security breach at Equifax, which leaked a ton of important data on potentially 143 million people in the US (basically the majority of adults in America). If you haven't, you need to pay more attention to the news. I won't get into all the details of what happened here, but I want to follow a few threads:

First, Equifax had been sitting on the knowledge of this breach since July. There is some dispute over how quickly companies should disclose breaches, and it makes sense to give companies at least some time to get everything in order before going public. But it's not clear what Equifax actually did with that time. The company has seemed almost comically unprepared for this announcement in so many ways. Most incredibly, the site that Equifax set up for checking whether your data had been compromised (short answer: yeah, it almost certainly was...) was on a consumer hosting plan, using a free shared SSL certificate, a funky domain and an anonymous Whois record. And, on top of that, it asked you for most of your Social Security Number. In short, it was set up in a nearly identical manner to a typical phishing site. Oh, and it left exposed the fact that the site had only one user -- "Edelman" -- the name of a big PR firm.

Not surprisingly, it didn't take long for various security tools to warn that the site wasn't safe.

And, when Equifax pushed people to its own "TrustedID" program to supposedly check to see if you were a victim of its own failures... it just started telling everyone yes no matter what info they put in:

So, yeah, what the hell did Equifax do during those six weeks it had to prepare? Oh, well, a few of its top execs used the delay to sell off stock, which may put them in even more hot water (of the criminal variety). Also, just days before it revealed the breach, and long after it knew of it, the company was talking up how admired its CEO is. This is literally the last tweet from Equifax prior to tweeting about the breach (screenshotted, because who knows how long it'll last):

I can't see any scenario under which CEO Richard Smith keeps his job. And it seems likely that many other execs are going to be in trouble as well. Beyond the possible insider trading mentioned above, there's already scrutiny on its corporate VP and Chief Legal Officer, John J. Kelley, who made $2.8 million last year and runs the company's "security, compliance, and privacy" efforts.

And despite six weeks to prepare for this, the following was Equifax's non-apology:

We apologize to our consumers and business customers for the concern and frustration this causes.

That's a classic non-apology. It's not apologizing for its own actions. It's not apologizing for the total mess it's created. It's just apologizing if you're "concerned and frustrated."

Oh, and did we mention that on the very morning of the day Equifax announced the breach, it tweeted out a newsletter it published about how "safeguarding valuable customer data is critical"? Really (again, screenshotted in case this disappears):

What the fuck, Equifax? Should we even mention that Equifax has been a key lobbying force against data breach bills? Those bills have some problems... but, really, it's not a good look following all of this.

And while there was some concern that signing up to check whether you were a victim (again: look, you probably were...) would force you to give up your right to join any class action lawsuit, the terms have since been "clarified" so that they don't apply to any class action lawsuits over the breach. And you better believe that the company is going to be facing one heck of a class action lawsuit (a bunch are being filed, but they'll likely be consolidated).

That's all background, of course. What I really wanted to discuss is how this will almost certainly get worse before it gets better. More than twelve years ago, I wrote that every major data breach is later revealed to be worse than initially reported. This has held true for years and years. The initial analysis almost always underplays how serious the leak is or how much data was exposed. Stay tuned, because there's a very high likelihood we'll find out that either more people were impacted or that more sensitive information is out there.

And that should be a major concern, because what we already know here is stunning. As Michael Hiltzik at the LA Times noted, this is the mother lode of data if you want to commit all sorts of fraud:

The data now at large includes names, Social Security numbers, birthdates, addresses and driver’s license numbers, all of which can be used fraudulently to validate the identity of someone trying to open a bank or credit account in another person’s name.

In some cases, Equifax says, the security questions and answers used on some websites to verify users’ identity may also have been exposed. Having that information in hand would allow hackers to change their targets’ passwords and other account settings.

Other data breaches may have been bigger in terms of total accounts impacted, but it's hard to see how any data breach could have been this damaging. For over a decade, we've pointed out that credit bureaus like Equifax are collecting way too much data, with zero transparency. In fact, back in 2005, we wrote about Equifax itself saying that it was "unconstitutional and un-American" to let people know what kind of information Equifax had on them. The amount of data that Equifax and the other credit bureaus hold is staggering -- and as this event shows, they don't seem to have much of a clue about how to actually secure it.

At some point, we need to rethink why we've given Equifax, Experian and TransUnion so much power over so much of our everyday lives. You can't opt out. They collect most of their data in secret, without your knowledge or consent. You can't avoid them. And now we know that at least one of them doesn't know how to secure that data.

