Mike Masnick’s Techdirt Profile


About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of the Copia Institute and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick


Posted on Free Speech - 17 May 2021 @ 9:35am

UK Now Calling Its 'Online Harms Bill' The 'Online Safety Bill' But A Simple Name Change Won't Fix Its Myriad Problems

from the this-is-not-how-this-should-work dept

We've talked a bit about the UK's long-running process to basically blame internet companies for all of society's ills. What was originally called the "online harms" bill has now officially morphed into the Online Safety Bill, which was recently released in draft form.

Despite the UK government insisting that it spent the past few years talking to various stakeholders, the resulting bill is a disaster for the open internet. Heather Burns, from the UK's Open Rights Group (loosely the UK's version of the EFF), has a short thread about the bill, explaining why it's so problematic. Here's the key bit:

You're going to read a lot today about the government's plans for the Online Safety Bill on #onlineharms, a regulatory process which has eaten up much of the past two years of my professional work. I suppose if I had a hot take to offer after two years, it's this:

  1. If you see the bill being presented as being about "social media" "tech giants" "big tech" etc, that's bullshit. It impacts *all services of all sizes, based in the UK or not. Even yours.* Bonus: take a drink every time a journo or MP says the law is about reining in Facebook.
  2. If you see the Bill being presented as being about children's safety, that's bullshit. It's about government compelling private companies to police the legal speech and behaviour of everyone who says or does anything online. Children are being exploited here as the excuse.
  3. So as you read the Bill, consider how altruistic any government initiative must be if it requires two layers of A/B tested messaging disinformation.

A week earlier, Burns, who's been deeply engaged in the process in the UK, wrote up a long blog post explaining all the problems with the fundamental approach embraced in the bill: it basically outsources all of the roles of the government to internet companies, and then threatens to punish them if they get it wrong. Here's just one important bit:

The first and most immediate impact of the imposition of senior management liability will be a chilling effect on free speech. This is always a consequence of content moderation laws which are overly prescriptive and rigid, or conversely, overly vague and sweeping.

When everything falls into a legally ambiguous middle ground, but the law says that legally ambiguous content must be dealt with, then service providers find themselves backed into a corner. What they do in response is take down vast swathes of user-generated content, the majority of which is perfectly legal and perhaps subjectively harmful, rather than run the risk of getting it wrong.

This phenomenon, known as “collateral censorship” – with your content being the collateral – has an immediate effect on the right to freedom of expression.

Now add the risk of management liability to the mix, and the notion that tech sector workers might face personal sanctions and criminal charges for getting it wrong, and you create an environment where collateral censorship, and the systematic takedowns of any content which might cause someone to feel subjectively offended, becomes a tool for personal as well as professional survival.

In response to this chilling effect, anyone who is creating any kind of public-facing content whatsoever – be that a social media update, a video, or a blog post – will feel the need to self-censor their personal opinions, and their legal speech, rather than face the risk of their content being taken down by a senior manager who does not want to get arrested for violating a “duty of care”.

The general consensus among tons of experts is that this bill is a dumpster fire of epic proportions. Big Brother Watch notes that it would introduce "state-backed censorship and monitoring on a scale *never seen before* in a liberal democracy." The scariest part is that it will require companies to remove lawful speech. The bill refers to such content as "lawful but still harmful" (which some have taken to calling "lawful but awful" speech). But as noted above, that creates tremendous incentives for excessive censorship and suppression of all sorts of speech, just to avoid falling on the wrong side of the line.

Indeed, this is the very model used by the Great Firewall of China. For years, rather than instructing internet companies exactly what to block, the Chinese government would often just send vague messages about what kinds of content it was "concerned" about, along with threats that if the companies didn't magically block all of that content, they (and their executives) would face liability. The end result is clearly significant over-blocking. If you only get punished for under-blocking, the natural result is going to be over-blocking.

Among the many other problems with this, the UK's approach will only lead the Chinese to insist that this shows their Great Firewall approach is the only proper way to regulate the internet. They've certainly done that before.

It really is quite incredible how closely the bill mimics the Great Firewall approach, but with the UK regulator Ofcom stepping into the role of the Chinese government.

There are a few attempts in the draft bill to put in place language that looks supportive of free speech, but most of these are purely fig leaves -- the kind of thing officials can point to in order to say "see, we support free speech, no censorship here, no siree" but which fail to take into account how these provisions will work in practice.

Specifically, there's a section saying that websites (and executives), which will now face liability if they leave up too much "lawful but harmful" content, must make sure not to take down "democratically important" content. What does that mean? And who decides? Dunno. There's also a weird carveout for "journalists," but again, that's problematic once you realize that merely defining who is and who is not a journalist is itself a big free speech issue. And the bill does note that "citizen journalists will have the same protections as professional journalists." Does... that mean every UK citizen has to declare themselves a "citizen journalist" now? How does that even work?

The whole thing is not just a complete disaster, it's a complete disaster that tons of smart people have been warning the UK government about for the past two years without getting anywhere at all. I'm sure we'll have a lot more to say about it in the near future, but for now it really looks like the UK approach to "online harms"... er... "online safety" is to replicate the Chinese Great Firewall. And that's quite stunning.


Posted on Free Speech - 14 May 2021 @ 12:17pm

Michigan Legislator With No Understanding Of The 1st Amendment Wants To Fine Fact Checkers For Pointing Out His Lies

from the incredible dept

Michigan State Rep. Matt Maddock has quite a reputation for lying:

Michigan state Rep. Matt Maddock, who has repeatedly spread lies about election fraud and falsely said COVID-19 “is less lethal than the flu,” wants to make it harder for fact-checkers to challenge unsubstantiated claims by politicians.

That's really only the start of a much longer list of problematic statements by the elected official:

Maddock made several attempts to overturn the election. In late December, he and Daire Rendon, R-Lake City, joined a federal lawsuit filed by Trump supporters to challenge the results of the election. The suit asked a judge to allow lawmakers to certify states' election results, a move that would enable the Republican-led Michigan Legislature to reject Biden's victory. But a judge turned down the suit, calling their arguments "flat-out wrong" and "a fundamental and obvious misreading of the Constitution."

It appears that Maddock's "fundamental and obvious misreading of the Constitution" extends to the 1st Amendment as well. He's now introduced an astoundingly unconstitutional bill that seeks to "register" and then fine fact checkers who fact check his lies. You can read his Fact Checker Registration Act (which somehow has eight other unserious co-sponsors) and just marvel at the blatant unconstitutionality of it all. I mean, beyond all of the big problems with it, there's the fact that it literally calls out the Poynter Institute's International Fact-Checking Network as requiring registration under his scheme.

Under the bill, any "fact checker" has to post a $1 million bond, and then any "affected party" can sue any fact checker for the money in that bond if they can show the fact checker engaged in "wrongful conduct that is a violation of the laws of this state."

What does that even mean? Well, Maddock made the unconstitutional intent of his bill abundantly clear in a Facebook post about it:

Social Media companies deplatform people, politicians, and businesses on the basis of "Fact Checkers" who relish their role punishing those whom they deem 'false'. Many believe this enormous economic and social power is being abused. Who are these Fact Checkers? We're going to find out. My legislation will put Fact Checkers on notice: don't be wrong, don't be sloppy, and you better be right.

I mean, if we applied the same rules to him, he'd be paying out a ton of money, since he's so often wrong about things. But this is also pretty obviously unconstitutional in multiple ways. First, forcing fact checkers to register with the state is highly questionable on its own, because the setup is designed to intimidate fact checking -- a core form of protected speech, and some of the most important speech protected by the 1st Amendment. If, as is obviously the case, the registration (and bond) requirement is designed to intimidate fact checkers, then it's clearly unconstitutional.

Second, courts have ruled over and over again that merely being "sloppy," and even making mistakes, is not grounds for speech to be deemed a violation of the law. This is why cases like NY Times v. Sullivan and United States v. Alvarez are so important. They recognize that the 1st Amendment means the government can't willy-nilly try to shut down speech, even if it's false.

Finally, the structure of the bill is just... weird. It says that a fact-checking organization can get fined for "wrongful conduct that is a violation of the laws of this state." And while he claims that this includes being "sloppy," any law that says being sloppy with your fact check is illegal in Michigan would, separately, violate the 1st Amendment.

This whole thing is just more victim-playing by the modern GOP, which seems to feel that anyone calling them on their bullshit, disinformation, and lies is somehow violating their rights, while having no qualms at all about stamping out the rights of those who actually tell the truth.

Of course, after lots of people started pointing out what an attack on free speech and a free press this bill was, Maddock tried to defend the bill, but only ended up producing a word salad of nonsense.

“This isn’t about journalists or free speech,” he said. “It’s about the fact checkers who have been injected into our First Amendment right to be wrong if we want to. If a fact-check entity is bankrupting businesses and cancelling people with lies, they should be held accountable. If they have high standards and are doing good fact checking, they have nothing to worry about.”

Fact checkers are journalists, dude. And what does "injected into our First Amendment right to be wrong" mean here -- especially given that he's trying to fine them if they're wrong?

Elect better people, Michigan voters.


Posted on Techdirt - 14 May 2021 @ 9:28am

The Flopping Of Trump's Blog Proves That It's Not Free Speech He's Upset About, But Free Reach

from the it's-the-audience dept

A week ago, we wrote about Trump's new blog, which was designed to look vaguely tweet-like, noting that this proved that he never needed Twitter or Facebook to speak freely. He's always been able to speak on his own website. NBC News has an interesting story now, suggesting that the blog just isn't getting that much attention.

A week since the unveiling, social media data suggests things are not going well.

The ex-president’s blog has drawn a considerably smaller audience than his once-powerful social media accounts, according to engagement data compiled with BuzzSumo, a social media analytics company. The data offers a hint that while Trump remains a political force, his online footprint is still dependent on returning to Facebook, Twitter and YouTube.

The Desk of Donald J. Trump is limited — users can’t comment or engage with the actual posts beyond sharing them to other platforms, an action few people do, according to the data.

Some have been using this to argue that Twitter and Facebook's bans on the former president were attacks on his "free speech." But it actually demonstrates something different -- and important. The people complaining about the removal of Trump's account are not actually mad about the "free speech" part of it. They're really mad about the "free reach." (Hat tip to Renee DiResta for making this point years ago.)

Being kicked off these platforms by the platforms (as opposed to, say, the government) is not an attack on your ability to speak. There are lots of places to do that. It is, instead, an attack on having easy access to an audience on those platforms. And, as far as I can tell, there is no right to having as large an audience as possible. Thus, in the same sense that I can't demand a million followers on any of these platforms, the former president similarly can't demand that they supply him with the audience of their users.


Posted on Techdirt - 13 May 2021 @ 1:36pm

Dartmouth's Insane Paranoia Over 'Cheating' Leads To Ridiculous Surveillance Scandal

from the this-is-dumber-than-it-looks dept

The NY Times had an incredible story a few days ago about an apparent "cheating scandal" at Dartmouth's medical school. The problem is, it doesn't seem like there was any actual cheating. Instead, it looks like a ton of insane paranoia and an overreliance on surveillance technology by an administration that shouldn't be in the business of educating kindergarteners, let alone med students. We've had a few posts about the rise of surveillance technology in schools and its many downsides -- and those really ramped up during the pandemic, as students were often taking exams from home.

So much of the paranoia is based on the silly belief that if you don't have everything crammed totally into your head, you haven't actually learned anything. Out here in the real world, a more sensible realization is that if you teach people how to look up the necessary details when they need them, you've probably done a good job. Yes, there may be some exceptions and some scenarios where full knowledge is important. But for most things, knowing how to find the right answer is a lot more important than making sure trivial details are all memorized and can be regurgitated on an exam. Indeed, studies have shown repeatedly that trying to cram details into your head for an exam often means they don't stick in long-term memory.

In short, this type of insane test-taking tests people on exactly the wrong thing, and instead encourages the kind of behavior that leads to worse outcomes in the long run.

But the situation at Dartmouth is -- believe it or not -- even dumber. 17 Dartmouth medical students have been accused of cheating -- but those accusations were based on a tool that was never designed to spot cheating: Canvas, a popular platform through which professors post assignments and students submit homework. And here's what happened, according to the NY Times:

To hinder online cheating, Geisel requires students to turn on ExamSoft — a separate tool that prevents them from looking up study materials during tests — on the laptop or tablet on which they take exams. The school also requires students to keep a backup device nearby. The faculty member’s report made administrators concerned that some students may have used their backup device to look at course material on Canvas while taking tests on their primary device.

Geisel’s Committee on Student Performance and Conduct, a faculty group with student members that investigates academic integrity cases, then asked the school’s technology staff to audit Canvas activity during 18 remote exams that all first- and second-year students had taken during the academic year. The review looked at more than 3,000 exams since last fall.

The tech staff then developed a system to recognize online activity patterns that might signal cheating, said Sean McNamara, Dartmouth’s senior director of information security. The pattern typically showed activity on a Canvas course home page — on, say, neurology — during an exam followed by activity on a Canvas study page, like a practice quiz, related to the test question.

“You see that pattern of essentially a human reading the content and selecting where they’re going on the page,” Mr. McNamara said. “The data is very clear in describing that behavior.”

The audit identified 38 potential cheating cases. But the committee quickly eliminated some of those because one professor had directed students to use Canvas, Dr. Compton said.

In emails sent in mid-March, the committee told the 17 accused students that an analysis showed they had been active on relevant Canvas pages during one or more exams. The emails contained spreadsheets with the exam’s name, the test question number, time stamps and the names of Canvas pages that showed online activity.

If you just read that, it might sound like at least some evidence that those students were doing something they weren't supposed to be doing (even if you think the rules are dumb). But even that seems not to be accurate. There are some of us (and I am guilty of this) who rarely, if ever, close tabs that have important tools or information for our work. Plenty of students are the same, and likely leave Canvas open all the time. And that's what many of the students have claimed.

Geisel students said they often had dozens of course pages open on Canvas, which they rarely logged out of. Those pages can automatically generate activity data even when no one is looking at them, according to The Times’s analysis and technology experts.

School officials said that their analysis, which they hired a legal consulting firm to validate, discounted automated activity and that accused students had been given all necessary data in their cases.

But at least two students told the committee in March that the audit had misinterpreted automated Canvas activity as human cheating. The committee dismissed the charges against them.

In another case, a professor notified the committee that the Canvas pages used as evidence contained no information related to the exam questions his student was accused of cheating on, according to an analysis submitted to the committee. The student has appealed.

The school's paranoia over this went further. When it confronted the 17 students, it more or less pressured them into pleading guilty rather than fighting their case:

Dartmouth had reviewed Mr. Zhang’s online activity on Canvas, its learning management system, during three remote exams, the email said. The data indicated that he had looked up course material related to one question during each test, honor code violations that could lead to expulsion, the email said.

Mr. Zhang, 22, said he had not cheated. But when the school’s student affairs office suggested he would have a better outcome if he expressed remorse and pleaded guilty, he said he felt he had little choice but to agree. Now he faces suspension and a misconduct mark on his academic record that could derail his dream of becoming a pediatrician.

“What has happened to me in the last month, despite not cheating, has resulted in one of the most terrifying, isolating experiences of my life,” said Mr. Zhang, who has filed an appeal.

The article notes other students were told they had 48 hours to respond to charges -- and that they weren't provided the evidence the school supposedly had on them, while also being pressured to admit guilt:

They said they had less than 48 hours to respond to the charges, were not provided complete data logs for the exams, were advised to plead guilty though they denied cheating or were given just two minutes to make their case in online hearings, according to six of the students and a review of documents.

There are just layers upon layers of ridiculousness here. Not only is it bad pedagogically to teach this way, it's dangerous to engage in this kind of surveillance (in the middle of a pandemic, no less), and to just build up an entire atmosphere of mistrust.

EFF did a long and detailed post on this, in which they note that the data in question could not have shown cheating and argue that these students have been denied basic due process.

But after reviewing the logs that were sent to EFF by a student advocate, it is clear to us that there is no way to determine whether this traffic happened intentionally, or instead automatically, as background requests from student devices, such as cell phones, that were logged into Canvas but not in use. In other words, rather than the files being deliberately accessed during exams, the logs could have easily been generated by the automated syncing of course material to devices logged into Canvas but not used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. Most of us don’t log out of every app, service, or webpage on our smartphones when we’re not using them.
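
To make the EFF's point concrete, here's a minimal sketch, in Python, of why a naive log audit can't tell cheating apart from an idle open tab. Everything in it is hypothetical: the log fields, the page paths, and the "flag any activity during the exam window" rule are illustrative assumptions, not Canvas's actual data model or Dartmouth's actual audit code.

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical, simplified access-log record. Real Canvas analytics
    # data has more fields, but a server-side log records only that a
    # request arrived -- never the intent behind it.
    @dataclass
    class LogRow:
        user: str
        page: str
        timestamp: datetime

    # Two events that look identical on the server:
    # 1. A student deliberately opens a practice quiz mid-exam.
    deliberate = LogRow("student_a", "/courses/neuro/practice-quiz",
                        datetime(2021, 3, 2, 10, 14))
    # 2. A phone in a backpack, still logged in, auto-syncs the same page.
    background = LogRow("student_b", "/courses/neuro/practice-quiz",
                        datetime(2021, 3, 2, 10, 14))

    def flagged(row: LogRow, exam_start: datetime, exam_end: datetime) -> bool:
        """The naive audit rule: any activity on a course page during
        the exam window counts as potential cheating."""
        return exam_start <= row.timestamp <= exam_end

    exam_start = datetime(2021, 3, 2, 10, 0)
    exam_end = datetime(2021, 3, 2, 11, 0)

    # Both rows trip the rule, though only one reflects a human action.
    for row in (deliberate, background):
        print(row.user, flagged(row, exam_start, exam_end))  # True for both

Nothing in the log rows themselves marks one request as human and the other as automated, which is exactly why the logs alone can't prove intentional access.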

Meanwhile, the student free speech advocacy organization FIRE has been demanding answers from Dartmouth as well. To make matters worse, FIRE noticed that Dartmouth recently hid its "due process policies" from public view (convenient!):

We also asked why the college appears to have recently password-protected many of its due process policies. Of course, doing so conveniently hides them from the scrutiny of the public and prospective students who might be curious whether they will have rights — and what those rights might be — if they matriculate at Dartmouth.

And, of course, all of this could have been avoided if Dartmouth weren't so paranoid about the idea that medical students might (gasp!) be able to look up relevant information. When I go to a medical professional, I don't necessarily need them to have perfect recall of every possible symptom or treatment. What I hope they're able to do is use their knowledge, combined with their ability to reference the proper materials, to figure out the best solution. Perhaps I should avoid doctors who graduated from Dartmouth if I want that.


Posted on Techdirt - 13 May 2021 @ 10:48am

Why Is Wired So Focused On Misrepresenting Section 230?

from the it's-bizarre dept

We've already highlighted our concerns with Wired's big cover story on Section 230 (twice!). The very same day that came out, Wired UK published a piece by Prof. Danielle Citron entitled Fix Section 230 and hold tech companies to account. Citron's proposal was already highlighted in the cover story and now gets this separate venue. For what it's worth, Citron also spent a lot of energy insisting that the Wired cover story was the "definitive" article on 230 despite all of its flaws, and cheered on and liked tweets by people who mocked my arguments for why the article is just not very accurate.

Over the last few years, we've also responded multiple times to Citron's ideas, which are premised on a completely false narrative: that without a legal cudgel, websites have no incentive to keep their sites clean. That's clearly not true at all. If a company doesn't moderate, its site turns into a garbage dump of spam, harassment, and abuse. It loses users. It loses advertisers. It's just not good business. There are plenty of incentives to deal with bad stuff online -- though Citron never seems to recognize any of that, and insists, instead, that every site is a free-for-all because 230 shields them from legal liability. This latest piece in Wired is more of the same.

American lawmakers sided with the new inventors, young men (yup, all men) who made assurances that they could be trusted with our safety and privacy. In 1996, US Congress passed Section 230 of the Communications Decency Act, which secured a legal shield for online service providers that under- or over-filtered third-party content (so long as aggressive filtering was done in good faith). It meant that tech companies were immune to lawsuits when they removed, or didn’t remove, something a third party posted on their platforms.

That's only partially accurate. The "good faith" part only applies to one subsection of the law, and is not the key reason why sites can be "aggressive" in filtering. That's the 1st Amendment. The immunity part is a procedural benefit that prevents abusive litigation by those seeking to waste the courts' time with frivolous and expensive lawsuits -- or, more likely, threatening to file them if websites won't remove perfectly legal content they just don't like.

But, thanks to overbroad court rulings, Section 230 ended up creating a law-free zone. The US has the ignominious distinction of being a safe haven for firms hosting illegality.

This is just wrong. And Citron knows it's wrong, and it's getting to be embarrassing that she (and Wired) would repeat it. First, Section 230 has no impact on federal criminal law, so anything that violates federal criminal law is not safe. Second, there are almost no websites that want "illegal" content on their site. Most have teams of people who deal with such reports or court orders. Indeed, the willingness of websites to quickly remove any content deemed illegal has been abused by reputation management firms to get content removed via faked judicial orders, or through a convoluted scheme involving fake defendants who "settle" lawsuits just to get a court order out of it.

This isn’t just an American pathology: Because the dominant social media companies are global, illegality they host impacts people worldwide. Indeed, safety ministers in South Korea and Australia tell me that they can help their citizens only so much, since abuse is often hosted on American platforms. Section 230 is to social media companies what the Cayman Islands has long been to the banking industry.

Over and over again we've seen the exact opposite of this, in two separate but important ways. First, many of these companies are still more than willing to geoblock content if it's found to violate the law in a certain country. However, much more importantly, the ability of US-based websites to keep content up means that threatened, marginalized, and oppressed people are actually able to get their messages out. Oppressive governments around the world, including in places like Turkey and India, have sought to force websites to take down content that merely criticizes those governments.

Any reasonable discussion of demands that "illegal" content be automatically taken down needs to take that into account. And when weighed against the fact that most companies don't want to host truly illegal and problematic content anyway, much of the content likely to be removed without those protections is exactly the kind of authoritarians-suppressing-speech content that we're concerned about.

Tech companies amplify damaging lies, violent conspiracies and privacy invasions because they generate copious ad revenue from the likes, clicks and shares. For them, the only risk is bad PR, which can be swiftly dispatched with removals, bans and apologies.

This is stated without anything backing it up, and it's garbage. It's just not true. All of the big companies have policies in place against this content, and they (unlike Citron) recognize that it doesn't "generate copious ad revenue from the likes, clicks and shares" (likes, clicks and shares don't directly generate ad revenue...). These companies know that the long-term health of their platforms is actually important, and that losing advertisers and users because of garbage is a problem. This is why Facebook, Twitter, and YouTube all have teams working on these issues and trying to keep their platforms in better shape. They're certainly not perfect at it, but part of that is the insane scale of these platforms and the ever-changing nature of the problematic content on them.

I know that among a certain set it's taken on complete faith that no one at these companies cares, because they just "want clicks" and "clicks mean money." But that shows an astounding disconnect from what the people at these companies -- the ones setting and enforcing these policies -- actually think. It's just ivory tower nonsense, completely disconnected from reality.

For individuals and society, the costs are steep. Lies about mask wearing during the Covid-19 pandemic led to a public health disaster and death.

Which spread via cable news more than on social media, and included statements from the President of the United States of America. That's not a Section 230 problem. It's also not something that changing Section 230 fixes. Most of those lies are still Constitutionally protected. Citron's problem seems to be with the 1st Amendment, not Section 230. And changing Section 230 doesn't change the 1st Amendment.

Plans hatched on social media led to an assault on the US Capitol. Online abuse, which disproportionately targets women and minorities, silences victims and upends careers and lives.

These are both true, but it's an incredible stretch to say that Section 230 was to blame for either of these things. The largest platforms -- again, Facebook, YouTube, Twitter, etc. -- all have policies against this stuff. Did they do a bad job enforcing them? Perhaps! And we can talk about why that was, but I can assure you it's not because "230 lets us ignore this stuff." It's because it's not possible to magically make the internet perfect.

Social media companies generally have speech policies, but content moderation is often a shell game. Companies don’t explain in detail what their content policies mean, and accountability for their decisions isn’t really a thing. Safety and privacy aren’t profitable: taking down content and removing individuals deprives them of monetizable eyes and ears (and their data). Yes, that federal law gave us social media, but it came with a heavy price.

This is the only point at which Citron even comes close to acknowledging that the companies actually do make an effort to deal with this stuff, but she immediately undermines it by pretending they don't really care. Which is just wrong. At best, it could be argued that the platforms didn't care enough about it in 2010. But that was a century ago in internet years, and it's just wrong now. And "taking down content and removing individuals deprives them of monetizable eyes and ears (and their data)" only if those particular eyes and ears aren't scaring many more people off the platform -- and every platform now recognizes that the trolls and problem-makers do exactly that. Citron, incorrectly again, completely misses that these companies now recognize that not all users are equal, and that trolls and bad actors do more damage to the platform than they're worth in "data" and "ad revenue."

It feels like Citron's analysis is stuck in the 2010 internet. Things have changed. And part of the reason they've changed is that Section 230 has allowed companies to freely experiment with a variety of remedies and solutions to best deal with these problems.

Are there some websites that focus on and cater to the worst of the worst? There sure are. And if she wanted to focus on just those, that would be an interesting discussion. Instead, she points to the big guys -- who are not acting the way she claims they do -- demands they do... what they already do, and insists we need to change the law to make that happen, while ignoring all of the actual consequences of such a legal change.

The time for having stars in our eyes about online connectivity is long over. Tech companies no longer need a subsidy to ensure future technological progress.

Properly applying legal liability to the actual problematic parties is not a subsidy. It's a way of saving the judicial system from a ton of frivolous lawsuits, and of preventing censorship by proxy, in which aggrieved individuals silence critics with mere threats of litigation against third-party platforms.

If anything, that subsidy has impaired technological developments that are good for companies and society.

Uh, no. 230's flexibility has allowed a wide range of different platforms to try a variety of different approaches, and to seek out the best approach for each kind of community. Wikipedia's approach is different from Facebook's, which is different from Reddit's, which is different from Ravelry's, which is different from Github's. That's because we have 230, which allows for these different approaches. And all of those companies are trying to come up with solutions that are "good for society," because if they don't, their sites turn into garbage dumps and people will seek out alternatives.

We should keep Section 230 – it provides an incentive for companies to engage in monitoring – but condition it on reasonable content moderation practices that address illegality causing harm. Companies would design their services and practices knowing that they might have to defend against lawsuits unless they could show that they earned the federal legal shield.

The issue with this is that if you have to first prove "reasonableness," you end up with a bunch of problems, especially for smaller sites. First, you massively increase the costs of getting sued (and, as such, you vastly increase the ability of threats to have their intended effect of taking down content that is perfectly legal). Second, in order to prove "reasonableness," many, many, many lawyers are going to say "just do what the biggest companies do," because that will have been shown in court to be reasonable. So, instead of getting more "technological developments that are good for companies and society," you get homogenization. You lose out on the innovation. You lose out on the experimentation for better models, because any new model is just a model that hasn't been tested in court yet, and leaves you open to liability.

For the worst of the worst actors (such as sites devoted to nonconsensual porn or illegal gun sales), escaping liability would be tough. It’s hard to show that you have engaged in reasonable content moderation practices if hosting illegality is your business model.

This is... already true? Various nonconsensual porn sites have been taken down via both civil lawsuits and criminal prosecutions over the years. Companies engaged entirely in illegal practices still face federal criminal prosecution, from which 230 offers no protection. On top of that, courts themselves have increasingly interpreted 230 not to shield those worst-of-the-worst actors.

Over time, courts would rule on cases to show what reasonableness means, just as courts do in other areas of the law, from tort and data security to criminal procedure.

Right. And then anyone with a better idea on how to build a better community online would never dare to risk the liability that came with having to first prove it "reasonable" in court.

In the near future, we would see social media companies adopt speech policies and practices that sideline, deemphasize or remove illegality rather than optimise to spread it.

Again, no mainstream site wants "illegality" on its service. This entire article is premised on a lie, backed up with misdirection and a historical myth.

There wouldn’t be thousands of sites devoted to nonconsensual porn, deepfake sex videos and illegal gun sales. That world would be far safer and freer for women and minorities.

Except there's literally no evidence to support this argument. We know what happened in the copyright space, which doesn't have 230-like protections, and which does require "reasonable" policies for dealing with infringement. Infringement didn't go away. It remained. As for "women and minorities," it's hard to see how they're better protected in such a world. The entire #MeToo movement came about because people could tell their stories on social media. Under Citron's own proposal here, websites would face massive threats of liability should a bunch of people start posting #MeToo-type stories. We've already seen astounding efforts by the jackasses who were exposed during #MeToo to silence their accusers. Citron's proposal would hand them another massive weapon.

The bigger issue here is that Citron refuses to recognize how (and how frequently) those in power abuse tools of content suppression to silence voices they don't want to hear. She's not wrong that there's a problem with a few narrow areas of content. And if she just focused on how to deal with those sites, her argument would be a lot more worth engaging with. Instead, she's mixing up different ideas, supported by a fantasy version of what she seems to think Facebook does, and then insisting that if they just moderated the way she wanted them to, it would all be unicorns and rainbows. That's not how it works.


Posted on Techdirt - 12 May 2021 @ 9:44am

Bad Section 230 Bills Come From Both Sides Of The Aisle: Schakowsky/Castor Bill Would Be A Disaster For The Open Internet

from the that's-not-how-any-of-this-works dept

It truly is stunning how every single bill that attempts to reform Section 230 appears to be written without any intention of ever understanding how the internet or content moderation works in actual practice. We've highlighted tons of Republican-led bills that tend to try to force websites to host more content, not realizing (1) how unconstitutional that is and (2) how it will turn the internet into a giant garbage fire. On the Democratic side, the focus seems to be much more on forcing companies to take down constitutionally protected speech, which similarly (1) raises serious constitutional issues and (2) will lead to massive over-censorship of perfectly legal speech just to avoid liability.

The latest bill of the latter kind comes from Reps. Jan Schakowsky and Kathy Castor. Schakowsky has been saying for a while now that she was going to introduce this kind of bill to browbeat internet companies into being a lot more proactive in taking down speech she dislikes. The bill, called the Online Consumer Protection Act, has now been introduced, and it seems clear that it was written without ever conferring with anyone with any experience in running a website. It's the kind of thing you write when you've just come across a problem, but don't think it's worth talking to anyone to understand how things really work. It's also very much a "something must be done, this is something, we should do this" kind of bill, of the sort that shows up way too often these days.

The premise of the bill is that websites "don't have accountability to consumers" for the content posted by users, and that they need to be forced to have more accountability. Of course, this leaves out the basic fact that if "consumers" are treated badly, they will go elsewhere -- so every website already has some accountability to consumers: if they're bad at serving them, they will lose users, advertisers, sellers, buyers, whatever. But, apparently, that's not good enough for the "we must do something" crowd.

At best, the Online Consumer Protection Act will create a massive amount of silly busywork and paperwork for basically any website. At worst, it will create a liability deathtrap for many sites. In some ways it's modeled on the idiotic approach we already take to privacy policies. Almost exactly a decade ago we explained why the entire idea of a privacy policy is dumb. Various laws require websites to post privacy policies, which no one reads, in part because it would be impossible to read them all. The only way a site gets in trouble is by not following its own privacy policy. Thus, the incentive is to craft a very broad privacy policy that gives the site leeway -- meaning sites have less incentive to actually create more stringent privacy protections.

The OCPA takes the same approach, but... for "content moderation" policies. It requires basically every website to post one:

Each social media platform or online marketplace shall establish, maintain, and make publicly available at all times and in a machine-readable format, terms of service in a manner that is clear, easily understood, and written in plain and concise language.

Those terms of service must include a bunch of pointless things, among them a "consumer protection policy" that has to include the following:

FOR SOCIAL MEDIA PLATFORMS.—For social media platforms, the consumer protection policy required by subsection (a) shall include—

(A) a description of the content and behavior permitted or prohibited on its service both by the platform and by users;
(B) whether content may be blocked, removed, or modified, or if service to users may be terminated and the grounds upon which such actions will be taken;
(C) whether a person can request that content be blocked, removed, or modified, or that a user’s service be terminated, and how to make such a request;
(D) a description of how a user will be notified of and can respond to a request that his or her content be blocked, removed, or modified, or service be terminated, if such actions are taken;
(E) how a person can appeal a decision to block, remove, or modify content, allow content to remain, or terminate or not terminate service to a user, if such actions are taken; and
(F) any other topic the Commission deems appropriate.

It's difficult to look at that list and not laugh -- and wonder if whoever came up with it has ever been anywhere near a content moderation or trust & safety team, because that's not how any of this works. Trust & safety is an ongoing effort that constantly needs to adjust and change with the times, and there is no possible policy that can cover all cases. Can whoever wrote this bill listen to the excellent Radiolab episode about content moderation and think through how that process would have played out under this bill? If every time you change your policies to cover a new case you have to publicly update your already ridiculously complex policies -- while the new requirement is that those same policies remain "clear, easily understood, and written in plain and concise language" -- you've created an impossible demand.

Hell, someone should turn this around and push it back on Congress first. Hey, Congress, can you restate the US civil and criminal code such that it is "clear, easily understood, and written in plain and concise language"? How about we try that first, before demanding that private companies do the same for their ever-changing policies?

Honestly, requiring all of this to be in a policy is just begging angry Trumpists to sue websites, claiming they didn't live up to the promises made in their policies. We see those lawsuits today, but they're kicked out of court under Section 230... except Schakowsky's bill says that this part is now exempted from 230. It's bizarre to see a Democratic bill that will lead to more lawsuits from pissed off Trumpists who have been removed, but that's what this bill will do.

Also, what "problem" does this bill actually solve? From the way the bill is framed, it seems like Schakowsky wants to make it easier for people to complain about content and to get the site to review it. But every social media company already does that. How does this help, other than putting sites at risk of liability for slipping up somewhere?

The bill then has separate requirements for "online marketplaces," which again suggest literally zero knowledge of or experience with that space:

FOR ONLINE MARKETPLACES.—For online marketplaces, the consumer protection policy required by subsection (a) shall include—

(A) a description of the products, product descriptions, and marketing material, allowed or disallowed on the marketplace;
(B) whether a product, product descriptions, and marketing material may be blocked, removed, or modified, or if service to a user may be terminated and the grounds upon which such actions will be taken;
(C) whether users will be notified of products that have been recalled or are dangerous, and how they will be notified;
(D) for users—

(i) whether a user can report suspected fraud, deception, dangerous products, or violations of the online marketplace’s terms of service, and how to make such report;
(ii) whether a user who submitted a report will be notified of whether action was taken as a result of the report, the action that was taken and the reason why action was taken or not taken, and how the user will be notified;
(iii) how to appeal the result of a report; and
(iv) under what circumstances a user is entitled to refund, repair, or other remedy and the remedy to which the user may be entitled, how the user will be notified of such entitlement, and how the user may claim such remedy; and

(E) for sellers—

(i) how sellers are notified of a report by a user or a violation of the terms of service or consumer protection policy;
(ii) how to contest a report by a user;
(iii) how a seller who is the subject of a report will be notified of what action will be or must be taken as a result of the report and the justification for such action;
(iv) how to appeal a decision of the online marketplace to take an action in response to a user report or for a violation of the terms of service or consumer protection policy; and
(v) the policy regarding refunds, repairs, replacements, or other remedies as a result of a user report or a violation of the terms of service or consumer protection policy.

Honestly, this reminds me a lot of Josh Hawley's bills, in that it seems that both Hawley and Schakowsky want to appoint themselves product manager for the internet. All of the things listed above are the kinds of things that most companies do already because you need to do it that way. But it's also the kind of thing that has evolved over time as new and different challenges arise, and locking the specifics into law does not take into account that very basic reality. It also doesn't take into account that different companies might not fit into this exact paradigm, but under this bill will be required to act like they do. I can't see how that's at all helpful.

And, it gets worse. It will create a kind of politburo for how all internet websites must be run:

Not later than 180 days after the date of the enactment of this Act, the Commission shall conduct a study to determine the most effective method of communicating common consumer protection practices in short-form consumer disclosure statements or graphic icons that disclose the consumer protection and content moderation practices of social media platforms and online marketplaces. The Commission shall submit a report to the Committee on Energy and Commerce of the House of Representatives and the Committee on Commerce, Science, and Transportation of the Senate with the results of the study. The report shall also be made publicly available on the website of the Commission.

Yeah, because nothing works so well as having a government commission jump in and determine the "best" way to do things in a rapidly evolving market.

Also, um, if the government needs to create a commission to tell it what those best practices are, why is it regulating how companies have to act before the commission has even done its job?

There are a bunch more requirements in the bill, but all of them are nitty-gritty details about how companies create policies and implement them -- something that companies are constantly changing, because the world (and the threats and attacks!) is constantly changing as well. This bill is written by people who seem to think that the internet -- and bad actors on the internet -- are static phenomena. And that's just wrong.

Also, there's a ton of paperwork for nearly every company with a website, including idiotic and pointless requirements that are busywork, with the threat of legal liability attached! Fun!

FILING REQUIREMENTS.—Each social media platform or online marketplace that either has annual revenue in excess of $250,000 in the prior year or that has more than 10,000 monthly active users on average in the prior year, shall be required to submit to the Commission, on an annual basis, a filing that includes—

(A) a detailed and granular description of each of the requirements in section 2 and this section;
(B) the name and contact information of the consumer protection officer required under subsection (b)(4); and
(C) a description of any material changes in the consumer protection program or the terms of service since the most recent prior disclosure to the Commission

(2) OFFICER CERTIFICATION.—For each entity that submits an annual filing under paragraph (1), the entity’s principal executive officer and the consumer protection officer required under subsection (b)(4), shall be required to certify in each such annual filing that—

(A) the signing officer has reviewed the filing;
(B) based on such officer’s knowledge, the filing does not contain any untrue statement of a material fact or omit to state a material fact necessary to make the statements, in light of the circumstances under which such statements were made, not misleading;
(C) based on such officer’s knowledge, the filing fairly presents in all material respects the consumer protection practices of the social media platform or online marketplace; and
(D) the signing consumer protection officer—

(i) is responsible for establishing and maintaining safeguards and controls to protect consumers and administer the consumer protection program; and
(ii) has provided all material conclusions about the effectiveness of such safeguards and controls.

So... uh, I need to hire a "consumer protection officer" for Techdirt now? And spend a few thousand dollars every year to have lawyers (and, most likely, a new bunch of "compliance consultants") review this totally pointless statement I'll need to sign each year? For what purpose?

The bill also makes sure that our courts are flooded with bogus claims from "wronged" individuals, thanks to its private right of action. It also, on top of everything else, exempts various state consumer protection laws from Section 230. That's buried in the bill, but it's a huge fucking deal. We've talked about this for years, as various state attorneys general have been demanding it. But that's because those state AGs have a very long history of abusing state "consumer protection" laws to effectively shake down companies. A decade ago we saw the definitive version of this, watching dozens of state attorneys general attack Topix, with no legal basis, because they didn't like how the company moderated its site. They were blocked from doing anything serious because of Section 230.

Under this bill, that will change.

And we've seen just how dangerous that can be. Remember how Mississippi Attorney General Jim Hood demanded all sorts of information from Google, claiming that the company was responsible for anything bad found online? It later came out (via the Sony Pictures hack) that the entire episode was actually funded by the MPAA, with Hood's legal demands written by the MPAA's lawyers, as part of Hollywood's explicit plan to saddle Google with extra legal costs.

Schakowsky's bill would make that kind of corruption an everyday occurrence.

And, again, the big companies can handle this. They already do almost everything listed anyway. All this really does is saddle tons of tiny companies (earning more than $250k a year?!?) with ridiculous and overly burdensome compliance costs, which open them up to not just the FTC going after them, but any state attorney general, or any individual who feels wronged by the rules.

The definitions in the bill are so broad that it would cover a ton of websites. Under my reading, it's possible that Techdirt itself qualifies as a "social media platform" because we have comments. This is yet another garbage bill from someone who appears to have no knowledge or experience of how any of this works in practice, but who is quite sure that if everyone just did things the way she wanted, magically good stuff would happen. It's ridiculous.


Posted on Free Speech - 11 May 2021 @ 12:07pm

Texas Attorney General Unblocks Twitter Users Who Sued Him; Still Blocking Others

from the that's-not-how-this-works dept

It seems by now that public officials should know that they cannot block critics on social media if they are using their social media accounts for official business. This was thoroughly established in the Knight v. Trump case, where the court made it clear that if (1) a public official is (2) using social media (3) for official purposes (4) to create a space of open dialogue (and all four of those factors are met), then they cannot block people from following them based on the views those users express, as doing so violates the 1st Amendment. Yet over and over again, elected officials seem to ignore this.

Alexandria Ocasio-Cortez was sued over this, as was Marjorie Taylor Greene (both of them eventually settled and agreed to unblock people).

Last month, controversy-prone Texas Attorney General Ken Paxton was sued over the same thing (again by the Knight First Amendment Institute). As the lawsuit notes, many of the people Paxton blocked found themselves in that situation after they replied to Paxton by reminding him of the still-ongoing criminal charges he's been facing his entire time in office. Basically, if you reminded Paxton of the fact that he's facing criminal charges, you had a decent shot at getting blocked.

However, last week, Paxton unblocked the 9 users who sued him, perhaps realizing he was clearly going to lose this case. Of course, it looks like he only removed the blocks on those 9 individuals and kept up the blocks on others. Law professor Steve Vladeck (who is at the University of Texas Law School) noted that he's still blocked, even if the plaintiffs in the lawsuit are not.

Vladeck is (of course) correct. The whole point of this is that public officials cannot block anyone from their official accounts like this. If Paxton has just unblocked the people who sued him, that means anyone else who is blocked will have to go through the costly and time-consuming process of suing to get unblocked, and that's not how it's supposed to work either.

It seems pretty clear that the lawyers in the case recognize that Paxton isn't really doing what he is required to do under the 1st Amendment:

“We’re pleased that Attorney General Paxton has agreed to unblock our plaintiffs in this lawsuit and are hopeful that he will do the same for anyone else he has blocked from his Twitter account simply because he doesn’t like what they have to say,” said Katie Fallow, a senior staff attorney at the Knight First Amendment Institute.

Anyone taking bets on how many of those other people are going to need to sue first?


Posted on Techdirt - 11 May 2021 @ 9:33am

Disgraced Yale Law Professor Now Defending Anti-Vaxxers In Court With His Nonsense Section 230 Ideas

from the that's-not-how-any-of-this-works dept

Back in January, we wrote about a bizarrely bad Wall Street Journal op-ed co-written by disgraced and suspended Yale Law professor Jed Rubenfeld, arguing that Section 230 somehow magically makes social media companies state actors, controlled by the 1st Amendment. This is, to put it mildly, wrong. His argument is convoluted and not at all convincing. He takes the correct idea that government officials threatening private companies with government retaliation if they do not remove speech creates 1st Amendment issues, and then tries to extend it by saying that because 230 gives companies more freedom to remove content, that magically makes them state actors.

As we noted at the time, that's not how any of this works. Companies' ability to moderate content is itself protected by the 1st Amendment. Section 230 gives them procedural benefits in court to get dumb cases kicked out earlier, but it most certainly does not magically make them an arm of the government. This wacky idea that social media is magically a state actor was rightly shut down by Supreme Court Justice Brett Kavanaugh (who, ironically, is part of another scandal involving Rubenfeld) in the Halleck case, in which the Court stated clearly that you don't just magically make companies state actors. There are rules, man. From the ruling written by Kavanaugh:

By contrast, when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum. This Court so ruled in its 1976 decision in Hudgens v. NLRB. There, the Court held that a shopping center owner is not a state actor subject to First Amendment requirements such as the public forum doctrine....

The Hudgens decision reflects a commonsense principle: Providing some kind of forum for speech is not an activity that only governmental entities have traditionally performed. Therefore, a private entity who provides a forum for speech is not transformed by that fact alone into a state actor. After all, private property owners and private lessees often open their property for speech. Grocery stores put up community bulletin boards. Comedy clubs host open mic nights. As Judge Jacobs persuasively explained, it “is not at all a near-exclusive function of the state to provide the forums for public expression, politics, information, or entertainment.”

However, it appears that Rubenfeld is not only making these arguments in laughably wrong WSJ pieces, but also in court, where he is now representing anti-vaxxers who insist that Facebook's decision to put warning labels on the bogus information they were posting somehow violated their 1st Amendment rights.

We had written about this case last summer, noting that it was so stupid and so wrong that I had difficulty writing it up. And that was before Rubenfeld joined CHD's legal team. At issue was that Robert F. Kennedy Jr.'s blatant anti-vax misinformation propaganda shop, "Children's Health Defense," sued Facebook, claiming that it had "teamed up" with the US government to censor the group's speech. The reasoning was that Rep. Adam Schiff had (stupidly) threatened to remove Facebook's 230 protections if the company didn't do a better job dealing with misinformation.

As we noted at the time, there is perhaps a weak case they might have against Schiff, but not against Facebook.

Yet, the case goes on. Facebook has rightly moved to have it dismissed, and that motion is worth a read if only because the exasperation of Facebook's lawyers at WilmerHale comes through quite clearly. There's a lot in there, but the summary covers it pretty thoroughly:

CHD claims that Facebook’s fact-checking program violated its First Amendment rights, restrained it from competing in the marketplace of vaccine “messages,” ... and constituted a RICO enterprise. Those claims turn the First Amendment on its head. The First Amendment is a shield from government action—not a sword to be used in private litigation. It is therefore unsurprising that the SAC contains numerous independent and incurable defects.

First, the SAC does not state a Bivens claim because it does not allege federal action. Facebook and Mr. Zuckerberg are private actors. Facebook exercised its own editorial discretion to reduce the visibility of posts identified by independent fact-checkers as containing false or partially false information. None of the challenged conduct is attributable to the federal government.

Second, far from violating the First Amendment, Facebook’s decisions to label and limit the visibility of CHD’s content are themselves protected by the First Amendment. This Court may not hold Facebook or Mr. Zuckerberg liable for exercising editorial discretion with respect to matters of public concern. And even if the First Amendment did not fully bar CHD’s claims, it requires that CHD, at minimum, plausibly allege that Facebook acted with actual malice. The SAC fails to do so, even though Defendants’ motions to dismiss unquestionably put CHD on notice of this defect.

Third, Section 230 of the Communications Decency Act (“CDA”) shields Facebook from liability for publishing third-party fact checks or restricting access to CHD’s content. None of the SAC’s allegations concerning the relationship between Facebook and third-party fact-checkers strip Facebook or Mr. Zuckerberg of that protection.

Fourth, the Lanham Act claim fails because CHD has not identified a commercial injury that gives it standing under the Act. The Lanham Act protects those engaged in commerce against unfair competition. Because CHD’s alleged injuries are to its interests as a consumer of Facebook’s free service, not as a competitor, they are not cognizable under the Act. And CHD’s allegations do not establish that the purportedly false statements are “promotional statements” covered by the Act.

Fifth, CHD has not stated a civil RICO claim because it has failed, even on its third bite at the pleading apple, to identify any predicate acts of wire fraud. And CHD has alleged neither a sufficiently “direct” injury to confer statutory standing nor a cognizable civil RICO “pattern.”

Sixth, the SAC additionally does not state a claim against Mr. Zuckerberg because it does not allege that he was personally involved in any of the allegedly unlawful conduct. Nor has CHD pleaded the necessary prerequisites for any theory of agency liability.

Seventh, though the SAC contains many paragraphs describing CHD’s views on 5G, CHD nowhere connects those views to an actionable theory of liability.

Apparently, Rubenfeld has joined forces with RFK Jr. and shown up in court to defend this idiocy before what would appear to be an appropriately skeptical judge, alongside lawyer Roger Teich (who originally filed the complaint with RFK Jr.).

In a virtual hearing on Facebook’s motion to dismiss the lawsuit Wednesday, Judge Illston asked if the government can ever take steps to counter misinformation without running afoul of the First Amendment.

“Let’s say there was something on the internet that says, ‘If you take a Covid vaccine, you’re going to grow a third head.’ That’s clearly not true. Is it OK to not let that be published?” Illston asked.

CHD attorney Roger Teich replied, “I don’t think it’s OK if the government is calling the shot.”

Illston pressed: “You think it’s inappropriate for the government to say generally, ‘We’d really like it if all these private social media outlets didn’t publish lies about the Covid vaccine?’ That’s not alright to say that?”

Teich answered that it was the CDC’s “underhandedness” in using Facebook to restrict speech that violates the Constitution.

That, of course, is not how any of this works. And someone with Rubenfeld's pedigree should know that. But, instead, he's out there defending this utter and complete nonsense:

“State action must be found whenever government officials are coercing, inducing or encouraging private parties to do what they themselves cannot constitutionally do,” CHD attorney Jed Rubenfeld said.

Sure, if there's actual coercion, then a discussion can be had. But CHD has no evidence of any of that. And it seems to ignore Facebook's own 1st Amendment rights. And when the judge pointed all this out to Rubenfeld, he tried to cook up a wacky theory that because members of Congress or the CDC said something, and then Facebook took action, that magically makes Facebook a state actor.

CHD argued that U.S. Magistrate Judge Virginia DeMarchi in San Jose got it wrong when she dismissed Daniels v. Alphabet Inc. on March 31. The plaintiff in that suit argued Schiff and House Speaker Nancy Pelosi had coerced YouTube, owned by Google’s parent Alphabet, into removing objectionable content. DeMarchi dismissed the suit with leave to amend, finding the plaintiff did “not plead any facts suggesting that Speaker Pelosi or Rep. Schiff were personally involved in or directed the removal” of videos.

CHD attorney Jed Rubenfeld said DeMarchi “was not informed of the precedent” when she issued that ruling.

“What matters is if they gave the private party the standard of decision,” Rubenfeld said. “The CDC gives Facebook the standard of decision.”

“And does it matter if what the CDC said is true?” Illston asked.

Rubenfeld replied by insisting the information his client has posted about vaccines is true, but even if the speech was false, “it would still be constitutionally protected.”

Um. Again, even if this were true (and it's making a lot out of an incredibly weak chain of events), wouldn't CHD's actual cause of action be against the government officials and not Facebook, which retains its own 1st Amendment rights to label nonsense nonsense, or to take down content?

Everything about this case is dumb, and the fact that the disgraced and suspended Rubenfeld is using it to further his nutty legal theories is just the icing on the nonsense cake. Hopefully the judge does the expected thing and dismisses the case with a thorough benchslap for wasting the court's time.


Posted on Techdirt - 7 May 2021 @ 10:45am

Wired's Big 230 Piece Has A Narrative To Tell

from the not-great,-bob dept

I remember when Wired was the key magazine for understanding the potential of innovation. I subscribed all the way back in 1993 (not from the first issue, but soon afterward, after a friend gave me a copy of their launch issue). Over the years, the magazine has gone through many changes, but I'm surprised at how much its outlook has changed. The latest example is a big cover story by reporter Gilad Edelman, basically arguing that people who support Section 230 are "wrong" and hold the law up as a "false idol." The piece is behind a paywall, because of course it is.

I should note that, while I have disagreed with Edelman in the past (specifically regarding his reporting on 230, which I have long felt misrepresented the debate), I think he's a very good reporter, usually quite thorough and careful. That's part of the reason I'm disappointed with this particular piece. I will also note that my first read of the article left me with a worse impression than subsequent reads did -- though, in some ways, those more careful reads highlighted the problems even more. While presented as a news piece with thorough reporting and fact checking, it is clearly narrative-driven. It reads as though it were written with a story in mind, and then Edelman went in search of quotes to support that narrative -- even setting up strawmen (including myself and Cathy Gellis) to knock down, while not applying any significant scrutiny to those whose views agree with his own. It's fine (if misleading) as an opinion piece you'd see on a blog somewhere. But as a feature article in Wired that was supposedly fact checked (though I am quoted in it, and no one checked with me to see if the quote was accurately presented), it fails on multiple grounds.

The framing of the article is that "everything you've heard about Section 230 is wrong" (that's literally the title), but that's not how the article actually goes. Instead, it comes across as "everyone who supports 230 is wrong." It starts off by talking about "the Big Lie" and the fact that Trumpist cable news -- namely Newsmax, One America, and Fox News -- repeatedly presented blatantly false information regarding voting technology made by Dominion Voting Systems and Smartmatic. It notes that the voting companies sued the news channels, and all of them have been much more circumspect since then about repeating those lies. Edelman then contrasts that with the world of social media:

As some commentators noted, one group was conspicuously absent from the cast of defendants accused of amplifying the voting machine myth: social media companies. Unlike traditional publishers and broadcasters, which can be sued for publishing a defamatory claim, neither Facebook nor YouTube nor Parler nor Gab had to fear any legal jeopardy for their role in helping the lie spread. For that, they have one law to thank: Section 230 of the Communications Decency Act.

This statement is inaccurate on multiple levels. First of all, it's comparing apples to oranges. Traditional publishers and broadcasters face liability because they choose what limited content to publish. Note that while you can sue Fox News for defamation, no one is suing, say, Dish Network for carrying Fox News. That's because liability should fall on those responsible for the speech. With Fox News, it's Fox News: they choose what goes on the air. Social media platforms don't; they're more like the Dish Network in this scenario. The liability falls not on them, but on the speakers. If Dominion and Smartmatic wanted, they could have gone after the actual speakers on those social media networks for defamation, just as they chose to go after Fox and not Dish.

It's all about the proper application of liability to those actually doing the speaking. But you wouldn't get that message if you read this article.

Even the final line of that quote, saying that the platforms have 230 to thank, is not entirely accurate. Even without 230, it's hard to see how a Dominion or Smartmatic could possibly hold Facebook liable for defamatory content on its network. The main difference is that 230 gets any such case dismissed earlier and more cheaply, which makes websites more willing to host user-generated content without fearing the crippling costs of extended litigation.

That's all very important nuance. Nuance that is not adequately presented in laying out Edelman's argument.

The article bends over backwards to present those of us who support Section 230 as unwilling to admit that there are problems on the internet, and as treating Section 230 like apple pie and ice cream.

According to its admirers, Section 230 is the wellspring from which everything good about the modern internet emerged—a protector of free speech, a boon to innovation, and a cornerstone of the American economy. The oft-quoted title of a book by the lawyer Jeff Kosseff captures this line of thinking well. It refers to the law’s main provision as "the 26 words that created the internet."

At best, that's an exaggeration and a strawman that's easy to knock down. Kosseff himself has noted that this framing suggests his book is a one-sided hagiography of 230.

But, of course, Edelman's representation is not a fair account of how any of us 230 supporters actually feel. We don't say that 230 is perfect and ideal. We regularly highlight the challenging, even impossible, trade-offs inherent in an internet where many companies host third-party speech. Jeff's book goes deep into things he doesn't like about the way the internet has developed, partly because of 230. It details many of the reasoned criticisms of 230.

The issue all of us keep pointing out is not that 230 is perfect, but that every suggestion for changing it would create all sorts of problems that make the internet much worse. I've written a few times about this, and about the fact that content moderation at scale is impossible to do well. The good thing about Section 230 is not that it makes the internet perfect. It does not, and I've never claimed otherwise. It's that it allows for the necessary experimentation to continually change and improve, and to react to new forms and techniques of bad behavior. So far, every other proposed approach acts as if content moderation is a "solvable" problem, and as if magically forcing companies into a particular paradigm will work.

This suggestion that supporters of 230 are Pollyannas of the web is a strawman. We are not. We are focused on the different trade-offs and nuances of every approach, and we defend Section 230 because it remains the best approach that we've seen for dealing with a very messy internet in which there are no good solutions, but a long list of very bad ones.

The article then suggests that we supporters of 230 believe all critics don't know what they're talking about. It actually references an event that I put together (though it doesn't mention that), which Edelman attended, where I interviewed the authors of Section 230, Senator Ron Wyden and former Representative Chris Cox. If you'd like to hear that interview for yourself, you can listen to the whole thing on our podcast. Oddly, Edelman names only three of the ten sponsors we had for that event (Amazon, Twitter, and Yelp), as if it were put together solely by the big internet companies. It does not name the other seven sponsors, which included organizations like the Internet Society and the Filecoin Foundation (which is helping to create a new internet that undermines the big social media companies).

Another article of faith among Section 230’s champions? That people who criticize the law have no clue what they’re talking about. Section 230 recently turned 25 years old, and the occasion was celebrated by a virtual event whose sponsors included Twitter, Amazon, and Yelp. Senator Ron Wyden and former congressman Chris Cox, the authors of the statute, fielded questions from the audience, typed into a chat window. The most upvoted question was, “How best can we get folks to properly understand Sec 230? Particularly when it seems that many are either reluctant to realize they don’t understand or, even worse, they don’t want to understand?”

Note that Edelman's assertion here -- that it's an "article of faith" among 230 supporters that "people who criticize the law have no clue what they're talking about" -- is not actually supported by the highest-voted question during the Q&A portion of the session we held. It's simply a factual statement that many people talking about 230 don't understand it. And in the context of the conversation, that question was referring to people like former President Trump and Senator Josh Hawley, who think that Section 230 is why websites can remove policy-violating users -- something that is just demonstrably wrong. So the question, in context, was not suggesting that everyone criticizing 230 has "no clue what they're talking about," but trying to deal with the fact that many people talking about 230 demonstrably do not understand it and seem to have no interest in doing so.

So this line may fit Edelman's preset narrative, but in context it does not say what he wants it to say. It's cherry-picked. Edelman does say that Trump's (and Biden's) view of the law is not "terribly coherent," more or less admitting that the question from our event was accurate. But within the context of his article, it's presented as if we're unwilling to dig in and recognize that the internet is not perfect, and believe everyone who pushes back on 230 is doing so in bad faith.

Of course, that's false. The issue is that there are many bad faith attacks on 230. However, when there are good faith criticisms of Section 230, we're perfectly happy to address them as such, and highlight why those approaches -- even if meant in good faith -- might backfire. That is not how we are presented in this article. Instead, we're presented as one side of a black-and-white battle against the realists who recognize the problems of the law.

This is repeated later, when Edelman briefly quotes me as another out-of-context strawman to blow over:

Other guardians of 230 sound even more apocalyptic notes when the law comes up for debate. After a group of Democratic senators proposed a bill to limit the law’s protections in early February, Mike Masnick, founder of the venerable policy blog TechDirt, wrote that the changes could force him to shut down not just the comments section but his entire website. Section 230 coauthor Ron Wyden, now a US senator, said the bill would “devastate every part of the open internet.”

And I did say that we would likely have to shut down if the SAFE TECH Act became law, but that was about that particular bill. We have not said that about every possible change to the law. And we said it about the SAFE TECH Act because of just how poorly drafted it is. My article on the SAFE TECH Act (a bill Edelman praised effusively, while also complaining that it didn't go far enough) goes into great detail on the problems with the specific approach it laid out. But in Edelman's view, it seems, because we said this one bill would likely force us to shut down, we're apocalyptic about any situation "when the law comes up for debate." That's just blatantly inaccurate. I've already explained why we're happy to engage with those looking to make changes in good faith, to understand their issues and explore solutions. I've happily talked to many Congressional staffers and other government officials about their ideas for this very reason.

But the point we keep raising is just how much detail and nuance there is in these items, which few of the critics seem willing to get into. Instead, the focus is on painting "internet bad!" with a broad brush, and that's the trap much of Edelman's article falls into.

It does the same thing with another aspect of our own advocacy, calling out our amicus brief in the Armslist case, written by Cathy Gellis. Here's how Edelman frames that:

In fact, a lot of the most passionate pro-230 discourse makes more sense when you recognize it as a species of garden-variety libertarianism—a worldview that, to caricature it only slightly, sees any government regulation as a presumptive assault on both economic efficiency and individual freedom, which in this account are pretty much the same thing to begin with. That spirit animated Section 230 when it was written, and it animates defenses of the law today. So you have Cathy Gellis, a lawyer who blogs ardently for TechDirt in support of Section 230’s immunity, filing an amicus brief in the Armslist case insisting that a post listing a gun for sale is speech that must be protected.

That's... well, quite something. Considering that neither Cathy nor I are "garden-variety libertarians," and neither of us sees "any government regulation as a presumptive assault on both economic efficiency and individual freedom," it's already misrepresenting our views. It also completely misrepresents the nuances, context, and framing of our advocacy in the Armslist case. Our argument correctly notes that advertisements are a form of speech. Edelman may not like that, but it's a factual statement -- not some crazy utopian libertarian idea. Indeed, Cathy's opening to the brief details just how difficult cases like this are, and how they force us to challenge many of our assumptions.

Tragic events like the one at the heart of this case can often challenge the proper adjudication of litigation brought against Internet platforms. Justice would seem to call for a remedy, and if it appears that some twenty-year old federal statute is all that stands between a worthy plaintiff and a remedy, it can be tempting for courts to ignore it in order to find a way to grant that relief.

The problem is, as in cases like this one, there is more at stake than just the plaintiff’s interest. This case may look like a domestic violence case, a gun policy case, or even a negligence case, but it is actually a speech case. Laws that protect speech, such as the one at issue in this appeal, are on the books for good reason. They are ignored at our peril, because doing so imperils all the important expression they are designed to protect.

You would not get that from Edelman's piece at all. Instead, it suggests that we argued there's no issue here since this is just speech. That's not an accurate portrayal by any basic reading of what we wrote. Cathy's brief highlighted the challenging issues in the case, and brought them back to the key point behind 230: that it's about putting liability on the actual responsible party, rather than dumping it on the most easily targeted party -- the platform hosting problematic third-party speech.

The article also goes after Professor Eric Goldman, who is one of the top scholars on Section 230 -- first quoting a regular critic of his giving an extremely one-sided description of Goldman, and then again presenting a strawman of Goldman's views, focusing on his important paper about why 230 is better than the 1st Amendment. Yes, the title of that piece is provocative, but in the Edelman article it's presented as some sort of evidence of how extreme Goldman's views are:

But Goldman is not only Section 230’s most up-to-speed observer; he may also be its biggest fan. When reporters call him for an expert quote, they get a very particular perspective—one capably summarized in the title of his 2019 paper, “Why Section 230 Is Better Than the First Amendment.” In Goldman’s view, the rise of platforms featuring user-generated content has been an incredible boon both to free speech and to America’s economic prosperity. The #MeToo movement; the more than $2 trillion combined market cap of Facebook and Alphabet; blogs, customer reviews, online marketplaces: We enjoy all of this thanks to Section 230, Goldman argues, and any reduction in the immunity the law provides could cause the entire fortress to crumble. No domain of user-generated content would be safe. If the law were repealed, he recently told the Committee to Protect Journalists, “comments sections for newspapers would easily go.”

Edelman makes little effort to engage with why Goldman says any of this, or even to explore the details of Goldman's "230 is better than the 1st Amendment" paper until much later in the article, when he no longer presents the argument as connected to that paper. Instead, Edelman presents the title of Goldman's paper without the proper context -- context he only obliquely raises elsewhere in the article. What that paper actually says is important, and not nearly as radical or extreme as Edelman makes it sound. The paper goes into great detail about a wonky legal argument: that 230 has procedural benefits that help both companies and users deal with the kind of heckler's veto that would occur if we had to rely on the 1st Amendment alone. The argument is that 230, as a procedural tool, kicks these cases out early. If we had to rely on the 1st Amendment, we'd be talking about a much more expensive legal process, turning an issue that could be disposed of for tens of thousands of dollars into one that requires hundreds of thousands.

That is perhaps deep in the legal wonkery weeds, but it's a legitimate point. Much later in the article, Edelman does finally quote Goldman directly making this point (the only supporter of 230 he appears to have interviewed, though it looks as though he interviewed and quoted at least three fierce critics of Section 230 -- without ever critiquing any of their arguments), but it's so far separated from the framing that Edelman used above that no one who hasn't been deeply engaged in this debate will recognize it:

You might think, for example, that something like Citron’s proposed “reasonableness” standard would be widely seen as a commonsense, compromise reform. In fact, even this suggestion draws fierce opposition. Eric Goldman, the influential law professor, told me it would be tantamount to repealing the entire law.

“A key part of 230’s secret sauce comes in its procedural advantages,” he said. Today, the law doesn’t just help companies defeat lawsuits; it helps them win fast, at the earliest possible step, without having to rack up legal bills on discovery, depositions, and pretrial filings. Forcing defendants to prove that they meet some standard of care would make litigation more complicated. The company would have to submit and gather evidence. That would require more attention and, most importantly, money.

Perhaps the biggest companies could handle this, Goldman said, but the burden would crush smaller upstarts. Tweaking Section 230 this way, in other words, would actually benefit monopolies while stifling competition and innovation. Faced with a deluge of defamation lawsuits, the large platforms would err on the side of caution and become horribly censorious. Smaller platforms or would-be challengers would meanwhile be obliterated by expensive legal assaults. As Ron Wyden, Section 230’s coauthor, puts it, Citron’s proposal, though “thoughtful,” would “inevitably benefit Facebook, Google and Amazon, which have the size and legal muscle to ride out any lawsuits.”

And... all of that is true. But rather than deal with that fact, and highlight that this is the point all of Section 230's supporters are trying to make, Edelman brushes it off as typical anti-regulation nonsense.

The thing about this argument is that a version of it gets trotted out to oppose absolutely any form of proposed corporate regulation. It was made against the post-recession Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010, which the conservative Heritage Foundation declares “did far more to protect billionaires and entrenched incumbent firms than it did to protect the little guy.” Federal food safety rules, fuel economy standards, campaign spending limits: Pick a regulation and a free-market advocate can explain why it kills competition and protects the already powerful.

This is incredibly unfair. And it paints Goldman, Gellis, and myself as regular fighters against any corporate regulation, which is simply not true (I mean, hell, look at our net neutrality coverage). Also, it's odd that this article came out on the same day that the Heritage Foundation -- according to Edelman, exactly the kind of free market entity that fights back against any kind of regulation -- said that 230 must be reformed or repealed. To lump us in with them, as if we're all just "free market libertarians," is just weird. Especially when the "free market" groups he names... are on the other side of this issue.

There is no attempt to seriously deal with the critiques that we raise about the various proposals to reform 230 and our explanations of why they would be problematic. They're just brushed off as anti-regulation.

On the other hand, the Section 230 critics Edelman spoke to have their views presented without qualification or critique. It's as if Edelman has decided they are correct, and thus he does not need to test their theories, and that we are wrong, so our theories can be blithely dismissed.

Separately, it's worth addressing one key argument the article raises, which I've seen many others raise before: that 230 must not be necessary for an open internet because other countries don't have it and everything is "fine" there. In this article, the comparison offered is... Canada.

Maybe, as Lunney suggests, the common law would have developed something similar to the immunity provided by Section 230. But courts also could have come up with rules to take into account the troubling scenarios: bad Samaritan websites that intentionally, rather than passively, host illegal or defamatory content; platforms that refuse to take down libel, threats, or revenge porn, even after being notified. They might have realized that the publisher-distributor binary doesn’t capture social media platforms and might have crafted new standards to fit the new medium. Section 230, with its broad, absolute language, prevented this timeline from unfolding.

This hypothetical scenario isn’t even all that hypothetical. The United States is the only country with a Section 230, but it’s not the only country with both a common law tradition and the internet. Canada, for example, has nothing analogous to Section 230. Its libel law, meanwhile, is more pro-plaintiff, because it doesn’t have the strong protections of the First Amendment. Despite all that, user-generated content is alive and well north of the border. News sites have comments sections; ecommerce sites display user reviews. Neutral providers of hosting or cloud storage are not hauled into court for selling their services to bad guys.

Both of these paragraphs are worth addressing on their own, but it's important to see the two combined to highlight the issues with this argument. It is possible that the common law would have developed to create a 230-like situation. Indeed, as some will remember, in the early 2000s I had said that I didn't think 230 was necessary, since it seemed obvious that a website shouldn't be held liable for third-party content, and I hoped that courts would easily recognize this. However, history has made it clear that my belief was wrong. Over and over again we've seen individuals (and even a few courts) get this wrong, and assume that hosting third-party content should lead to liability. Section 230's purpose was to avoid the headache of having to go through this over and over again.

That's a key part of what Goldman is talking about regarding the procedural benefits of 230.

But the second paragraph is one that has made some people nod in agreement. Unfortunately, it elides many important details. First, it says that Canada is "alive and well" with third-party content, but that leaves out a lot of context, such as the nature of litigation in each country. According to a Harvard study on litigation rates of different countries, the US is way more litigious, with 5,806 lawsuits filed per 100,000 people, compared to just 1,450 per 100,000 in Canada. For better or worse, the US is a much more litigious society. That makes a difference.

And there are all sorts of differences in the Canadian litigation context as well, including that Canada (like much of the rest of the non-US world) follows a loser-pays rule, under which the loser usually pays at least a portion of the winner's legal fees, deterring a significant amount of frivolous litigation. The US doesn't have that except in a few very limited circumstances (anti-SLAPP laws, certain copyright cases, extremely vexatious litigation). That explains a huge part of why abusive litigation is so much more popular in the US. Plaintiffs often don't care if they win or lose, because the goal is just to hurt the defendant. In Canada, that's much harder to accomplish.

Second, it leaves out the actual impact on speech in Canada, and simply rolls it all up as "alive and well." Except that's not quite true. While it does admit that libel law is "more plaintiff friendly," it leaves out how that works in practice, which shows why the first paragraph above is misleading as well. A perfect example is the saga of Jon Newton and Wayne Crookes, which we discussed on Techdirt. At issue was that Newton, the operator of P2PNet, had simply linked to an article that Crookes, a Green Party official, believed was defamatory. In the US, such a case would have been kicked out of court quickly under 230. In Canada, the case, which began in 2007, dragged through years of appeals and didn't end until late 2011, when Canada's Supreme Court finally ruled that merely linking was not defamatory.

Literally two months after that case concluded -- even though Newton won -- he announced that he was done with the site.

That story alone highlights the issues with the "it's fine in Canada" approach. It's not fine. And for a small site, it required years spent fighting a draining lawsuit that, while it eventually resulted in a win, meant that the site in question was basically done. And we've seen this in lots of other countries as well, including Argentina and India. While some other countries have eventually had 230-like rules established through the courts, it's often a long and arduous process for sites, and in the meantime makes them much quicker about pulling down any speech that might get them in trouble.

Even worse, the idea that "Canada is fine, just a bit more pro-plaintiff" fails to take into account other realities of Canadian intermediary liability jurisprudence, including the infamous Equustek decision, which held that a Canadian court could order Google (a non-party to the underlying case) to block a website from being accessed not just in Canada, but around the globe. That kind of decision should raise serious questions about Canada's actual commitment to free speech, and whether such content is truly "alive and well" up north.

Indeed, Edelman then acknowledges that maybe Canada's internet isn't really that open, giving the example of a media site that removes a bunch of comments, partly for legal reasons and the risk of being dragged into court. Bizarrely, he spins this as evidence that we don't need 230.

Yes, websites with user-generated content do have to be more careful. Jeff Elgie, the founder of Village Media, a network of local news sites in Canada, told me that the possibility of getting sued was one thing the company had to take into account when building its comments system, which combines AI with human moderation. But it’s hardly the extinction-level threat that Section 230 diehards warn about. (Elgie said that, overall, only around 5 to 10 percent of comments get blocked on Village Media sites, and only a small subset of those are for legal reasons.) It is simply not true that “the internet” relies on Section 230 for its continued existence.

Except no one says that the internet would go away completely. We just say that it would be a very different kind of internet -- one in which marginalized voices are less able to get through, stories like #MeToo get stifled in their crib, and smaller sites like mine are unable to exist. Indeed, there's plenty of empirical evidence of over-blocking, especially in countries without 230-like protections. Edelman doesn't address that beyond saying that Canada is fine. And, sure, it's "fine" -- because we can't point to all the content no one can see, the content that was never posted, or that didn't stay up for long, thanks to over-blocking out of fear of legal liability. Edelman, a top journalist working for one of the largest media publishers in the world, may not care much about how that impacts the less fortunate and the marginalized. But we do.

Finally, a point that we've made in the past regarding this "other countries" argument is that if you look around, you don't see those other countries producing many successful internet companies that rely on third-party content. That's certainly true of Canada. There's... Wattpad? Who else? Edelman dismisses this argument as "a pivot" (though it's not a pivot; it's the very important nuance we're trying to explain) and then waves it away, saying it's not clear 230 really matters here.

In response to this observation, staunch supporters of Section 230 generally pivot. They concede that other countries have blogs and comments sections but point out that these countries haven’t produced user-generated content juggernauts like Facebook and YouTube. (Set aside China, which has a totally different legal system, a closed internet, and private companies that are more obedient to the state.) Section 230 might not be responsible for the internet’s literal existence, they say, but it is necessary for the internet as we know it.

There are a few ways to respond to this. One is that it’s hard to prove Section 230 is the reason for the success of American social media giants. The internet was invented in the US, which gave its tech sector an enormous head start. America’s biggest tech successes include corporate titans whose core businesses don’t depend on user-generated content: Microsoft, Apple, Amazon. Tesla didn’t become the world’s most valuable car company because of Section 230.

This isn't a particularly compelling response. After all, while the US may have pioneered the internet, the biggest user-generated content (social media) companies were started at a time when the internet was truly global and widely adopted. Facebook launched in 2004. YouTube in 2005. Twitter in 2006. That's well past the time when the internet was new and US-only. Furthermore, you can look at other evidence to tease out some of the differences -- as we did in our Don't Shoot the Message Board report in 2019. In that report, we looked at a wide variety of intermediary liability regimes and how they impacted startup creation and investment. One key finding was that the US didn't have nearly as much success with startups in the copyright space as it did in other areas, and some of that can be explained by the fact that the DMCA is much more limiting than 230. In the music world, there are lots of examples of successful companies coming out of Europe -- such as Spotify, SoundCloud, Deezer, and more. In other words, when we have more restrictive intermediary liability law, the evidence shows less successful US company creation.

Edelman's final response to this argument is... just pure speculation.

Another response is that even if Facebook does owe its wild success to Section 230, perhaps that’s not a reason to pop champagne. The reason we’re talking about reforming tech laws in the first place is that “the internet as we know it” often seems optimized less for users than for the shareholders of the largest corporations. Section 230’s defenders may be right that without it, Facebook and Google would not be the world-devouring behemoths they are today. If the law had developed slowly, if they faced potential liability for user behavior, the impossibility of careful moderation at scale might have kept them from growing as quickly as they did and spreading as far. What would we have gotten in their place? Perhaps smaller, more differentiated platforms, an ecosystem in which more conversations took place within intentional communities rather than in a public square full of billions of people, many of them behaving like lunatics.

And, I mean... sure? Maybe? And maybe it wouldn't have happened that way, and we'd have something a lot worse that enabled a lot less free expression. Perhaps we would have had an internet where it was much harder to call out the rich and powerful for sexual assault or casual bigotry. That "maybe" seems like a difficult one to hang your "it'll be okay to change 230" hat on. And, indeed, just as Edelman points to Canada as his "proof" that the internet is fine without 230, we can point to the rest of the globe to say that his speculation here does not seem to be proven either. Indeed, what we've seen (as noted above) is much more aggressive suppression of speech, which is a big part of what we're concerned with.

Also, if we look again at the copyright context, where no 230 exists, but rather the much more restrictive DMCA, we do not see the utopian, better internet that Edelman speculates might have emerged absent 230. We don't see "smaller, more differentiated platforms." Instead, we see the opposite. In the copyright realm, we see a few giant companies -- the ones that have been able to hire large legal teams and negotiate expensive licensing deals. The smaller, more innovative startups were mostly driven out of the market by lawsuits, and quickly, even when they had strong legal claims. The actual evidence in the US context is that increasing legal liability doesn't lead to more "intentional communities," but simply fewer communities, and a very tiny number of giant companies with no real alternatives (unlike in the social media space, where there remain tons of alternatives).

All in all, the article is still worth reading -- and Edelman does present a thorough look at much of the 230 debate. It's just pretty clear what he believes. And that's fine for an opinion blog, where the goal is to make your own views clear. But Wired presents this as a fact-checked feature cover story about how 230's supporters are wrong, and it doesn't actually establish that. It sets up strawmen, ignores nuance and context, and tells a predetermined story with cherry-picked, non-representative examples.

And that seems like a wasted opportunity.


Posted on Techdirt - 6 May 2021 @ 3:46pm

Devin Nunes' Favorite Lawyer On The Hook For Over $20k In Sanctions

from the pay-up-biss dept

Last month we wrote that Rep. Devin Nunes' favorite lawyer, Steven Biss, who has been filing frivolous, vexatious SLAPP suit after frivolous, vexatious SLAPP suit, was finally facing some sanctions. The specific case did not directly involve Nunes, but rather one of his aides, Derek Harvey, who had filed a ridiculous SLAPP suit against CNN. As we wrote last month, the court had easily tossed the original lawsuit and warned Biss not to file an amended complaint unless he had a credible legal theory. Biss did not have a credible legal theory, but he still filed an amended complaint. And thus, the court issued sanctions, saying that Harvey, Biss and other lawyers would be on the hook for CNN's legal fees.

The latest filing in the case is the bill coming due. Harvey and Biss need to pay CNN $21,437.50 in legal fees (and an additional $52.26 in costs and expenses). That might not seem like much in the grand scheme of things (especially for a lawyer who has claimed his client, Devin Nunes, is owed over a billion dollars for defamation), but it is still real money that someone is going to need to pay -- though it remains an open question as to who will actually pay it.

There's not much to see in the ruling itself, as it basically says that the fees CNN's lawyers outlined are within the standards that the court's local rules say are "presumptively reasonable." The lawyers admit that they're actually asking for less than they normally charge in order to keep them "reasonable" in the Court's eyes, and the Court basically says "sounds good."

It does often seem that lawyers who file tons of frivolous and vexatious lawsuits are able to get away with it for a while, with courts giving them many, many chances and being extremely reluctant to issue sanctions. And, even when sanctions are issued, they tend to be relatively low. However, with such repeat offenders, we've often seen that courts across the country take notice, and once one court has sanctioned this kind of behavior, it can open the floodgates. We'll see what happens in other Biss lawsuits.


Posted on Techdirt - 6 May 2021 @ 12:07pm

Why Is A Congressional Staffer Teaming Up With A Hollywood Lobbyist To Celebrate Expansion Of Criminal Copyright Laws?

from the this-just-seems-blatantly-corrupt dept

Late last year, we wrote about how bizarre it was that Senator Thom Tillis was trying to force through a felony streaming bill by attaching it to an end-of-the-year appropriations bill. There were so, so many problems with this, both in terms of what the bill would do and in the procedural way it was done. First, Tillis got it attached to the "must pass" appropriations bill before he'd even introduced it. That meant there was no debate and no direct vote on his bill.

You can kinda maybe (but not really?) see where that might make sense for uncontroversial bills, but the felony streaming bill... was not that. Longtime readers of Techdirt will know that Hollywood has been pushing for a felony streaming bill for over a decade; it was originally set to be attached to the infamous SOPA/PIPA bills, until the internet rose up and made it clear that it would not accept Congress passing such a dangerous law. Given that, you'd think that anyone who had an honest reason for pushing such a bill would open it up to debate, rather than hide it away in a giant bill. That should give you one giant hint as to why Tillis pushed it the way he did.

Second, there have been multiple reports about just how much Hollywood has invested in Senator Tillis. And we've heard from multiple people now that Tillis bristles at the idea that he's somehow owned and operated by Hollywood lobbyists. Of course, it would help if he didn't repeat their talking points at every turn, or turn around and introduce massive copyright reform that was basically an early Christmas gift for Hollywood.

But if Tillis wants to claim that he's not just doing Hollywood's bidding, you'd think he would not have allowed this to happen: his chief staffer working on these copyright bills, Brad Watts, teamed up with Fox's chief DC lobbyist, Gail Slater, to write an article in which the two pat each other on the back for getting the felony streaming bill passed.

I've spoken to multiple DC policy folks, both inside and outside of Congress, and literally none can think of any other example of a Congressional staffer and a top corporate lobbyist teaming up to write an op-ed together. It's unprecedented. More than one person I spoke to expressed complete bewilderment that this op-ed even came to be. "How did no one in Tillis' office not realize that this was a bad idea?" said a staffer in another Senate office. "It's shocking."

But even worse than this out-and-out admission that Tillis does what Hollywood asks of him is the content of the article itself, which is not just revisionist history, but an outright celebration of the sneaky way in which Watts (and apparently Slater!) got this bill through.

Some public policy issues are solutions in search of a problem, but unlawful streaming of copyrighted content is emphatically not one of those issues. U.S. Senators Thom Tillis (R-N.C.) and Patrick Leahy’s (D-Vt.) Protecting Lawful Streaming Act of 2020 (PLSA) became law in December 2020 as part of the Consolidated Appropriations Act, 2021. The importance of this law cannot be overstated. Not only did the PLSA modernize criminal copyright law in a long-overdue and positive direction, but it may also signal a new model for legislating digital copyright law going forward.

First of all, I call bullshit on the claim that this was "long overdue," or that "the importance of this law cannot be overstated." The article notes, rightly, that legal streaming has become more common, but takes it on faith that "illegal streaming" somehow "costs the U.S. economy nearly $30 billion per year." The support for that claim is... a link to a CNN article quoting Tillis. So Tillis' staffer, who is in charge of all of his copyright efforts, is citing his boss making a claim that this same staffer almost certainly told his boss to make in the first place. Nifty.

The COVID-19 pandemic further exacerbated the harm from unlawful streaming as worldwide lockdowns led to a surge in online streaming. Not surprisingly, this surge in streaming included an aggressive uptick in unlawful streaming. According to analytics firm Muso, the unlawful streaming of films alone increased by 33 percent globally during lockdowns. The rise was even higher in the United States at an eye-popping 41 percent increase in unlawful streaming during lockdowns.

I mean, I don't want to make too many assumptions here, but maybe (just maybe) the reason for the uptick in illegal streaming was that millions of people lost their jobs, have no money because Senator Thom Tillis tried to block stimulus packages, and are stuck at home because there's a global freaking pandemic going on. So maybe those people don't have the spare cash to sign up for authorized streaming services at the moment, and it's not exactly a priority given everything else going on.

The article goes on to falsely claim that streaming not being a felony was "a loophole." It was not. As was discussed when this first came up a decade ago, there were legitimate reasons why Congress chose not to make infringing streaming a felony. Indeed, there are strong arguments that copyright infringement should solely be a civil matter, and never a criminal one. Making it criminal basically turns US law enforcement into Hollywood's private tort enforcer, which represents a massive subsidy to those industries: they no longer have to get their own hands dirty (or spend their own money) taking infringers to court.

Then the article engages in some incredible historical revisionism regarding the original attempt at making streaming a felony, and what happened with SOPA/PIPA.

Despite careful crafting by the legislation’s sponsors, PIPA and SOPA were met with opposition from a range of legitimate stakeholders representing internet and consumer equities. Their advocacy against PIPA/SOPA culminated in over 5 thousand petitions per minute to the U.S. Congress, about 4 million tweets on the legislation, and petitions submitted to Congress containing 8 million signatures.

Concerns about the felony streaming provisions in PIPA/SOPA centered on the perception that, as drafted, it could lead to criminal prosecution of individual artists who regularly used platforms such as YouTube to upload their performances.

Ultimately, the sheer intensity of the opposition to PIPA/SOPA culminated in the legislation being withdrawn from consideration. This opposition took creative content industries and legislators by surprise and resulted in an unwillingness, for many years, to address what was perceived as such a controversial, complicated, and even unfixable issue. 

I mean, the very idea that SOPA/PIPA were crafted "carefully" is laughable to anyone who knows the real story, in which Lamar Smith pulled a Leroy Jenkins, yanking the bill away from Rep. Bob Goodlatte (who had tried to write a more carefully constructed bill) and lighting it up like a Tillis-style Christmas tree for Hollywood.

Then there's this fun bit of nonsense:

So, What Changed? Why Now? In the years since PIPA/SOPA, the entire internet and digital copyright ecosystem has changed. Simultaneously, traditional lines dividing content creator industries and tech-heavy startups have blurred, creating more shared interests and equities. Several internet platforms have evolved their business models and are now original content creators themselves.

No, what changed this time was that you refused to introduce it through the normal process, and kept it hidden until it was already lumped into the must-pass appropriations bill -- which was being contentiously debated for other reasons between Congress and a lame-duck President -- in the middle of a pandemic (and an insane propaganda campaign to undermine the results of an election). That's what changed.

Senator Tillis and Leahy’s bill evaded the criticisms that the felony streaming provision in PIPA/SOPA received and does not capture individual internet users or legitimate businesses and content creators, including, likely to some people’s disappointment, Justin Bieber.

Members of Congress and copyright stakeholders across the board were invited to the negotiating table on an equal footing. Negotiations proceeded in good faith and no stone was left unturned as stakeholders gamed out the real-world implications of the draft legislative text.

No, this is not what happened. At all. I spoke to stakeholders from consumer rights groups and internet platforms, and they said that they were just as blindsided by this bill as we were. Again, if this was all about getting all the stakeholders together and coming up with a workable bill for everyone, why didn't Tillis just release it as normal? Why did he get it stuffed into the appropriations bill, without even releasing the text until it was clear there would never be an up-or-down vote on the bill itself?

And that's also why this bill "evaded criticism": because it was pushed through in a way, and at a time, when so much other stuff was going on.

That's only underlined by the fact that Tillis' top copyright staffer felt he could reveal "the sausage-making process" in combination with one of Hollywood's top lobbyists, without anyone blinking an eye. The fix was in, and that fix sure looks corrupt. At the very least, this is the kind of "soft corruption" we've talked about before. Even if everything was legitimate, the mere fact that Watts and Slater know they can co-author an article about how they got this controversial bill approved gives the public the impression of corruption, and supports the idea that Tillis is completely in the tank for Hollywood.

It damages public trust in government, as it underlines the idea that Senators like Tillis are there to serve the desires of their funders, and not the public they were elected to represent.


Posted on Tech & COVID - 5 May 2021 @ 7:23pm

Huge News: US Gov't Agrees To Support Intellectual Property Waiver To Help Fight COVID

from the devil-in-the-details dept

Earlier this week we wrote about the absolutely ridiculous coalition of folks who were lobbying against the US supporting a TRIPS intellectual property waiver to support fighting COVID. As we noted, it was totally expected that Big Pharma would object to it, but the surprising thing was seeing Hollywood and the legacy entertainment industry -- an industry that needs COVID to go away to get back to normal -- coming out strongly against the waiver as well. They claimed they had to do so since the waiver would apply to copyright as well, but that's nonsense. The waiver (1) explicitly excludes entertainment products and (2) is expressly limited to "prevention, containment or treatment of COVID-19."

On top of that, the waiver process was built into the TRIPS agreement, and if a full-on global pandemic that has already killed over 3 million people (and counting) isn't the time to use the waiver, then the waiver is effectively meaningless.

Thankfully, the US has now announced that it will be supporting a waiver, with USTR Katherine Tai making the announcement. Her statement:

“This is a global health crisis, and the extraordinary circumstances of the COVID-19 pandemic call for extraordinary measures. The Administration believes strongly in intellectual property protections, but in service of ending this pandemic, supports the waiver of those protections for COVID-19 vaccines. We will actively participate in text-based negotiations at the World Trade Organization (WTO) needed to make that happen. Those negotiations will take time given the consensus-based nature of the institution and the complexity of the issues involved.

“The Administration’s aim is to get as many safe and effective vaccines to as many people as fast as possible. As our vaccine supply for the American people is secured, the Administration will continue to ramp up its efforts – working with the private sector and all possible partners – to expand vaccine manufacturing and distribution. It will also work to increase the raw materials needed to produce those vaccines.”

Of course, the details here matter. Tai says the US will support a waiver for vaccines... but did not definitively say whether it will support the waiver that South Africa and India have actually proposed. It would be just like the US to say it supports the waiver to get everyone who supports the effort to cheer... and then go into negotiations and push for a much, much narrower (and potentially effectively meaningless) waiver. Hopefully that's not the case.

Still, just getting the USTR to support any waiver was a big step. This was far from the most likely outcome. The pharma industry is incredibly powerful at the lobbying game, and when you add Hollywood's muscle to it as well, many people felt that the US would refuse to support the waiver. Hell, earlier this week they even got Dr. Fauci to come out leaning against it, saying he was agnostic on the actual waiver, but thought there were better ways to fight COVID (Fauci may be an expert in infectious diseases, but his expertise in intellectual property is... that he holds a few patents of his own). And, of course, Biden has always had a close relationship with Hollywood and has long been a copyright maximalist.

And, while Fauci may be correct that this isn't the most important tool for fighting COVID, no one is saying it's the only one. This is just one item on a long list, and it will undoubtedly help deal with restrictions in some areas that are costing people lives.

In the end, this came down to a simple question: is the best way to protect the global economy to protect the monopoly interests of a few giant companies, or to use knowledge, information, and expertise to help spread better treatments and vaccines faster? The US chose the latter, and it was the only moral choice.


Posted on Techdirt - 5 May 2021 @ 12:10pm

What If The Media And Politicians Tried To Hold A Techlash... And No One Joined Them

from the oh,-look-at-that dept

There's been plenty of talk lately about the "Techlash," which has become a popular term among the media and politicians. However, what if the general public feels quite differently? Vox, which is not exactly known for carrying water for the tech industry, has released a new poll showing that the public is overwhelmingly optimistic about technology, and thinks that technology has been a force for good in the world. This holds across the board for Democrats, Republicans, and independents.

Seventy-one percent of likely voters agreed with the tech-optimist statement: “Technology is generally a force for good. Large tech companies have provided innovations like vaccines, electric vehicles, bringing down the cost of batteries that store green energy, vegetarian meat options, and other ways that have improved our quality of life.” Only 19 percent agreed with the tech-pessimist statement: “Technology is generally a force for bad. Large tech companies are bad for workers, inequality, and democracy. The technological innovations they produce are not worth the cost.”

When put into chart form, the results are really, really striking:

Obviously, "technology" covers a lot more than the big internet companies -- and the messaging that Vox tested highlights mostly non-internet innovations. But, still. The fact that the "control" group -- the one that didn't even receive the specific messaging -- felt even more strongly about the good technology does in the world than those who were first primed with statements about other kinds of technology is really something.

There are plenty of examples, certainly, of tech gone awry, but it really seems that the general public recognizes all of the good that innovation and technology have done for the world, and feels optimistic about it. Of course, none of that will stop "the narrative" of the techlash, because it's just too useful to those pushing it.


Posted on Free Speech - 4 May 2021 @ 5:35pm

Trump Shows Why He Doesn't Need Twitter Or Facebook, As He Launches His Own Twitter-Like Microblog

from the you-know-what-they-say-about-blog-sizes dept

In a few hours, the Oversight Board will announce its ruling on Facebook's decision to ban Donald Trump from its platform. As we noted back when Trump was removed from Twitter and Facebook, Trump does not lack ways to be heard. Indeed, we suggested that he could very, very easily set up his own website with tweet-like statements, and that those would likely be shared widely.

And... as we wait for the Oversight Board ruling, it looks like Trump has done exactly that. He's launched a new blog site that has short Tweet-style posts, and includes simple sharing buttons so people can post the text to both Twitter and Facebook:

It's not hard to see how that... looks quite like his Twitter feed. For what it's worth, a friend notes that while you can "like" Trump's new missives, you cannot unlike them once you've done so (this is a metaphor for something, I'm sure).

The messages on the site go back to March 24, even though the site was just launched today, so it makes you wonder if this is the product of the infamously rumored practice of Trump writing down the "insults and observations" he would have posted to Twitter if he still had an account.

In a video he currently has posted to the top of the site, announcing the site, Trump says that it will be "a beacon of freedom" and "a place to speak freely and safely" (whatever that means). It's unclear if they just mean for Trump himself, or if this is the rumored first pass of his own social network.

Either way, if he doesn't let anyone else post to the site, under his own definition of censorship, wouldn't that mean that he's censoring everyone but himself? Or, if he does allow others to post, it will be absolutely fascinating to see what content moderation policies he ends up putting in place. The existing terms of service on the site make it clear that he wants to be able to moderate everything:

Although Save America has no obligation to do so, it reserves the right, and has absolute discretion, to remove, screen or edit any User Content posted or stored on the Sites at any time and for any reason without notice, and you are solely responsible for creating backup copies of and replacing any User Content you post or store on the Sites at your sole cost and expense. Any use of the Interactive Areas or other portions of the Sites in violation of the foregoing violates these Terms of Service and may result in, among other things, termination, or suspension of your rights to use the Interactive Areas and/or the Sites.

It also notes:

Save America takes no responsibility and assumes no liability for any User Content posted, stored or uploaded by you or any third party, or for any loss or damage thereto, nor is Save America liable for any mistakes, defamation, slander, libel, omissions, falsehoods, obscenity, profanity or other objectionable content you may encounter.... As a provider of interactive services, Save America is not liable for any statements, representations, or User Content provided by its users in any Interactive Area.

The site also has a long list of content you're not allowed to publish -- much of which is perfectly legal under the 1st Amendment (even as Trump's friends have been pushing rules that say only unprotected speech can be taken down):

The Sites may include interactive areas or services ("Interactive Areas"), such as forums, blogs, chat rooms or message boards, or other areas or services in which you or other users may create, post, share or store content, messages, materials, data, information, text, graphics, audio, video, or other items or materials on the Sites ("User Content"). You are solely responsible for your use of such Interactive Areas and use them at your own risk. By using any Interactive Areas, you agree not to post, upload, transmit, distribute, store, create, or otherwise publish to or through the Sites any of the following:

  • User Content that is unlawful, libelous, defamatory, obscene, pornographic, indecent, lewd, suggestive, harassing, discriminatory, threatening, invasive of privacy or publicity rights, abusive, inflammatory, fraudulent, deceptive or misleading;
  • User Content that would constitute, encourage or provide instructions for a criminal offense, violate the rights of any party, or that would otherwise create liability or violate any local, state, national or international law;
  • User Content that may infringe any patent, trademark, trade secret, copyright or other intellectual or proprietary right of any party;
  • User Content that impersonates any person or entity or otherwise misrepresents your affiliation with a person or entity;
  • Unsolicited promotions, advertising, or solicitations;
  • Private or personally identifying information of any third party, including, without limitation, addresses, phone numbers, email addresses, Social Security numbers and credit card numbers;
  • Viruses, corrupted data or other harmful, disruptive or destructive files; and
  • User Content which violates the terms of any Save America guidelines, policies or rules posted on the Site or otherwise provided to you; and
  • User Content that, in the sole judgment of Save America, is objectionable or which restricts or inhibits any other person from using or enjoying the Interactive Areas or the Sites, or which may expose Save America or its users to any harm or liability of any type.

On the whole, though, this is a good thing. I'm glad that Trump has set up his own site (no matter what happens with Facebook). More people should do the same, and recognize that you then get to set your own moderation rules and your own system, rather than having to worry about violating someone else's rules. But it also shows how Facebook and Twitter removing him wasn't censorship -- it was just them saying he needs to find somewhere else to speak.


Posted on Techdirt - 4 May 2021 @ 12:04pm

Oversight Board Tells Facebook It Needs To Shape Up And Be More Careful About Silencing Minorities Seeking To Criticize The Powerful

from the pay-attention-to-this dept

Tomorrow, the Oversight Board is set to reveal its opinion on whether Facebook made the right decision in banning former President Trump. And that will get tons of attention. But the Board came out with an interesting decision last week regarding a content takedown in India that got almost no attention at all.

Just last week, we wrote about an ongoing issue in India, where the government of Prime Minister Narendra Modi has failed in almost every way possible in dealing with the COVID pandemic, but has decided the best thing to focus on right now is silencing critics on Twitter. That backdrop is pretty important considering that the very next day, the Oversight Board scolded Facebook for taking down content criticizing Modi's government.

That takedown was somewhat different, and the context was very different as well. Also, it should be noted that as soon as the Oversight Board agreed to take the case, Facebook admitted it had made a mistake and reinstated the content. However, this case demonstrates something important that often gets lost in all of the evidence-free hand-wringing about "anti-conservative bias" from people who wrongly insist that Facebook and Twitter only moderate the accounts of their friends. The truth is that content all across the board gets moderated -- and often the impact is strongest on the least powerful groups. But, of course, part of their lack of power is that they're unable to rush onto Fox News and whine about how they're being "censored."

The details here are worth understanding, not because there was some difficult decision to make. Indeed, as noted already, Facebook realized it made a mistake almost immediately after the Oversight Board decided to look into this, and when asked why the content was taken down, basically admitted that it had no idea and that it was a complete and total mistake. Here was the content, as described by the Oversight Board ruling:

The content touched on allegations of discrimination against minorities and silencing of the opposition in India by “Rashtriya Swayamsevak Sangh” (RSS) and the Bharatiya Janata Party (BJP). RSS is a Hindu nationalist organization that has allegedly been involved in violence against religious minorities in India. “BJP” is India’s ruling party to which the current Indian Prime Minister Narendra Modi belongs, and has close ties with RSS.

In November 2020, a user shared a video post from Punjabi-language online media Global Punjab TV and an accompanying text. The post featured a 17-minute interview with Professor Manjit Singh, described as “a social activist and supporter of the Punjabi culture.” In its post, Global Punjab TV included the caption “RSS is the new threat. Ram Naam Satya Hai. The BJP moved towards extremism.” The media company also included an additional description “New Threat. Ram Naam Satya Hai! The BJP has moved towards extremism. Scholars directly challenge Modi!” The content was posted during India’s mass farmer protests and briefly touched on the reasons behind the protests and praised them.

The user added accompanying text when sharing Global Punjab TV’s post in which they stated that the CIA designated the RSS a “fanatic Hindu terrorist organization” and that Indian Prime Minister Narendra Modi was once its president. The user wrote that the RSS was threatening to kill Sikhs, a minority religious group in India, and to repeat the “deadly saga” of 1984 when Hindu mobs attacked Sikhs. They stated that “The RSS used the Death Phrase ‘Ram naam sat hai’.” The Board understands the phrase "Ram Naam Satya Hai" to be a funeral chant that has allegedly been used as a threat by some Hindu nationalists. The user alleged that Prime Minister Modi himself is formulating the threat of “Genocide of the Sikhs” on advice of the RSS President, Mohan Bhagwat. The accompanying text ends with a claim that Sikhs in India should be on high alert and that Sikh regiments in the army have warned Prime Minister Modi of their willingness to die to protect the Sikh farmers and their land in Punjab.

The post was up for 14 days and viewed fewer than 500 times before it was reported by another user for “terrorism.” A human reviewer determined that the post violated the Community Standard on Dangerous Individuals and Organizations and took down the content, which also triggered an automatic restriction on the use of the account for a fixed period of time. In its notification to the user, Facebook noted that its decision was final and could not be reviewed due to a temporary reduction in its review capacity due to COVID-19. For this reason, the user appealed to the Oversight Board.

So, you had a member of a religious minority -- one that had been attacked in the past -- warning about those currently in power. And Facebook took it down and refused to review the appeal... until the Oversight Board turned its eye on it, at which point Facebook admitted it was a mistake, and basically threw its hands in the air and said it had no idea why the content had been taken down in the first place.

According to Facebook, following a single report against the post, the person who reviewed the content wrongly found a violation of the Dangerous Individuals and Organizations Community Standard. Facebook informed the Board that the user’s post included no reference to individuals or organizations designated as dangerous. It followed that the post contained no violating praise.

Facebook explained that the error was due to the length of the video (17 minutes), the number of speakers (two), the complexity of the content, and its claims about various political groups. The company added that content reviewers look at thousands of pieces of content every day and mistakes happen during that process. Due to the volume of content, Facebook stated that content reviewers are not always able to watch videos in full. Facebook was unable to specify the part of the content the reviewer found to violate the company’s rules.

Got that? Facebook is basically saying "yeah, it was a mistake, but that was because it was a long video, and we just had one person reviewing who probably didn't watch the whole video."

Here's the thing that the "oh no, Facebook is censoring people" crowd doesn't get. This happens all the time. And none of us hear about it, because the people it happens to are often unable to make themselves heard. They don't get to run to Fox News or Parler or some other place and yell and scream. And this kind of "accidental" moderation especially happens to the marginalized. Reviewers may not fully understand what's going on, or not really understand the overall context, and may take the "report" claim at face value, rather than having the ability or time to fully investigate.

In the end, the Oversight Board told Facebook to put back the content, which was a no-brainer since Facebook had already done so. However, more interesting were its policy recommendations (which, again, are not binding on Facebook, but which the company promises to respond to). Here, the Oversight Board said that Facebook should make its community standards much more accessible and understandable, including translating the rules into more languages.

However, the more interesting bit was that it said that Facebook "should restore human review and access to a human appeals process to pre-pandemic levels as soon as possible while fully protecting the health of Facebook’s staff and contractors." There were some concerns, early in the pandemic, about how well content moderation teams could work from home, since a lot of that job involves looking at fairly sensitive material. So, there may be reasons this is not really doable just yet.

Still, this case demonstrates a key point that we've tried to raise about the impossibility of doing content moderation at scale. So much of it is not about biases, or incompetence, or bad policies, or not wanting to do what's right. A hell of a lot of it is just... when you're trying to keep a website used by half the world operating, mistakes are going to be made.


Posted on Techdirt - 4 May 2021 @ 9:32am

Salesforce Asks Appeals Court To Say It's Protected Under 230, After Its Own CEO Said We Should Get Rid Of 230

from the incredible dept

We were quite perplexed in late 2019 when Salesforce.com founder and CEO Marc Benioff (never one to shy away from expressing his opinions on anything at all) announced that Section 230 should be abolished. It seemed like an extremely poorly thought-out statement from a CEO who was wholly unfamiliar with the issues, but who has sort of relished tweaking the noses of the big consumer internet companies over the past few years (after spending the first decade or so of Salesforce.com's existence tweaking the noses of enterprise software companies). As we wrote at the time, Benioff didn't seem to understand 230 at all, and seemed just angry at Facebook.

Of course, this is coming back to bite him hard. Just a few months later, lawyer Annie McAdams, who seems to have made it her life's mission to file blatantly silly attacks on Section 230 in court, sued Salesforce.com (and not for the first time!), claiming that because Backpage.com had used Salesforce as its CRM system, Salesforce was somehow magically liable for any sex trafficking that happened on the platform. In the complaint, McAdams cited Benioff's comments:

Salesforce will claim that no matter what role Salesforce played in the development and amplification of Backpage’s business model, they should be completely shielded and not have to answer any questions or be held accountable in any manner by asking the Court to dismiss the case at the initial stage.

The distortion and use of the Communications Decency Act as a sword by technology companies such as Salesforce is an outright distortion of the intent of Congress in regard to the development of the internet.

The Communications Decency Act (“CDA”) was never intended to protect technology companies from being held accountable for unlawful conduct or sex trafficking.

Salesforce’s own CEO, Marc Benioff, on October 16, 2019, has demanded Section 230 of the CDA be abolished with the need for “standards and practices be decided by law”

And, in late March, she actually prevailed. In a somewhat terrible ruling that goes against pretty much all 230 precedent on the books, federal judge Andrew Hanen refused to grant Salesforce's motion to dismiss, saying that Section 230 did not let Salesforce off the hook:

... the Court cannot hold as a matter of law that CDA 230's protections apply to Salesforce. In particular, the Court is not persuaded that Salesforce is a provider of "an interactive computer service" entitled to protection.

WHAT?!? I mean, every other court has recognized that any website is considered an interactive computer service. I'm honestly having trouble recalling another case where this definition was even at issue.

Moreover, it is unclear to the Court whether CDA 230 is even relevant, because Plaintiff has alleged that Salesforce directly and "knowingly benefitted" from providing services to facilitate sex trafficking. That allegation, if true, would elevate Salesforce's role beyond that of a mere publisher, which is the touchstone of CDA 230(c)(1).

This is also... just wrong. Having knowledge does not, in any way, elevate a website's role "beyond that of a mere publisher." There are multiple cases that say so, and nothing in the law says that knowledge changes anything. The whole thing is bizarre.

For what it's worth, the court did reject a bunch of McAdams' other claims regarding negligence and civil conspiracy, recognizing that it's a stretch to argue that providing a CRM tool to a service that provided tools to other third parties, some of whom used it for trafficking, somehow makes Salesforce liable. But, without the 230 immunity, the case still has to go forward under Texas' anti-sex-trafficking law.

Now, Salesforce is in the position of asking the 5th Circuit appeals court to fix this awful ruling. Its first move is to ask the district court to pause the case so that the 5th Circuit can take a look. And it's leaning hard on Section 230, the same law its CEO says should be abolished.

Both sides would benefit from resolving sooner rather than later the threshold, potentially dispositive issue whether section 230 of the Communications Decency Act applies to Salesforce and bars this lawsuit in its entirety.

In laying out the argument for why the 5th Circuit should get to review the case at this stage, Salesforce's lawyers point out that if such review is not allowed, it destroys the whole reason 230 immunity exists in the first place -- to shield defendants from these kinds of mistargeted lawsuits, not just from ultimate liability:

The section 230 issue presents a controlling question of law—particularly given that the Fifth Circuit considers section 230(c)(1) an “immunity provision[]” and regards its applicability as a threshold legal issue to resolve at the outset of litigation. MySpace, 528 F.3d at 418; accord Diez v. Google, Inc., 831 F. App’x 723, 724 (5th Cir. 2020) (per curiam). That necessity is driven “not because of the expense of litigation but because of the irretrievable loss of immunity from suit.” McSurely v. McClellan, 697 F.2d 309, 317 n.13 (D.C. Cir. 1982) (per curiam). So courts “aim to resolve the question of [section] 230 immunity at the earliest possible stage of the case because that immunity protects [providers] not only from ultimate liability,” but also from litigation itself. Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d 250, 255 (4th Cir. 2009) (citation omitted).

As a result, an interlocutory appeal is warranted to ensure meaningful appellate review of whether section 230 bars plaintiffs’ suit against Salesforce. If the litigation continues, even though Salesforce may eventually be protected “from ultimate liability,” it will have lost—irretrievably—section 230’s protection against being sued in the first place and “having to fight costly and protracted legal battles.” Nemet Chevrolet, 591 F.3d at 255 (citation omitted). Thus, as “resolution on appeal . . . would impact the course of litigation and could terminate the suit in this Court,” La. State Conf. of NAACP v. Louisiana, --- F. Supp. 3d. ---, No. 19-479-JWD-SDJ, 2020 WL 6130747, at *9 (M.D. La. Oct. 19, 2020), the section 230 issue is a controlling question of law that warrants immediate review.

As Salesforce points out, Judge Hanen's suggestion that Salesforce might not even be an "interactive computer service" goes against lots of other courts, including those that have specifically reviewed whether Salesforce itself qualifies:

The Court’s initial hesitation as to whether Salesforce is an interactive computer service provider covered by section 230 is contrary to at least two other courts that have decided that issue as to Salesforce specifically. And decisions from courts across the country magnify the substantial ground for disagreement on when an interactive computer service provider is “treated” as a “publisher” under section 230.

To start, two other courts have very recently held that Salesforce meets section 230’s definition of an interactive computer service provider in cases involving virtually identical allegations. A California state court determined that “[t]he term ‘interactive computer service’ . . . applies to software providers such as [Salesforce].” Does #1 through #90 v. Salesforce.com, Inc., No. CGC-19-574770, slip op. at 4 (S.F. Super. Oct. 3, 2019).

That court explained that Salesforce’s “customer relationship management . . . software” supplies “enabling tools” that users can access through the internet. Does #1 through #90, at 4 (quoting 47 U.S.C. § 230(f)(4)). Those software tools, the court concluded, put Salesforce well within the “broad statutory definition[]” of an “access software provider,” because Salesforce provides “software . . . or enabling tools” that “transmit,” “receive,” “cache,” “search,” and “organize” data. Id. at 4–5 (quoting 47 U.S.C. § 230(f)(4)). And because “multiple users” can “access” the “computer server” with those tools, 47 U.S.C. § 230(f)(2), Salesforce is a “provider . . . of an interactive computer service” under section 230(c)(1), see Does #1 through #90, at 4.

The court also ruled that the plaintiffs’ claims impermissibly “treat[ed] [Salesforce] as the publisher of [third-party] information.” Does #1 through #90, at 5. The court explained that the plaintiffs’ claims relied on third-party content to establish Salesforce’s liability. Simply put, Salesforce could “only be liable if . . . linked to the[ ] advertisements” that were used to traffic the plaintiffs. Id. Because the plaintiffs alleged that Salesforce was linked to the ads for the reason that “its ‘platform and CRM’ [software] enabled Backpage to publish and disseminate” them, the court determined that the plaintiffs’ claims necessarily implicated Salesforce “as a publisher.” Id. The court therefore concluded that the claims were barred by section 230(c)(1)....

A Texas state court has held the same—namely, that Salesforce falls within section 230’s definition of an interactive computer service and that materially identical claims were barred under section 230. In Re: Jane Doe Cases, Tex. MDL Cause, No. 2020-28545, slip op. at 1–2 (Aug. 28, 2020). Thus, whether Salesforce is entitled to section 230 immunity may depend on whether the parties are in state or federal court in Texas—creating a risk of forum shopping between Texas courts that heightens the need for interlocutory review.

Then they note that there are lots of other cases regarding non-Salesforce defendants showing how broad the definition of an ICS truly is.

Relying on the plain statutory text, courts have held that many different types of providers are covered under the statute’s capacious definitions of an “interactive computer service” and “access software provider.” 47 U.S.C. § 230(f)(2) & (4); see, e.g., Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169, 1175–76 (9th Cir. 2009) (provider of anti-malware software); Davis v. Motiva Enterprises, L.L.C., No. 09-14-00434-CV, 2015 WL 1535694, at *1, *3–4 (Tex. App.—Beaumont Apr. 2, 2015) (employer whose employee used the company’s “technology and facilities”); GoDaddy.com, LLC v. Toups, 429 S.W.3d 752, 758–59 (Tex. App.—Beaumont 2014) (website host).

Other courts might therefore conclude (and indeed have concluded) that Salesforce—a provider of software that “enables . . . access by multiple users to a computer server,” 47 U.S.C. § 230(f)(2), where those users can use various tools to “transmit,” “receive,” “cache,” “search,” and “organize” customer data, id. § 230(f)(4)— falls within these definitional provisions, too.

As such, they note that plenty of courts -- including the 5th Circuit, in which this district court sits -- have held that 230 immunizes websites from all kinds of civil liability whenever that liability is premised on third-party content:

Likewise, the Fifth Circuit and a host of other courts have held that the “plain text” of section 230 immunizes interactive computer service providers from “any cause of action” that treats them as a “publisher” by seeking to hold them “liable for information originating with a third-party user of the service.” Diez, 831 F. App’x at 724; accord MySpace, 528 F.3d at 418 (negligence claims); see O’Kroley v. Fastcase, Inc., 831 F.3d 352, 354 (6th Cir. 2016) (invasion of privacy and other tort claims); Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 19–20 (1st Cir. 2016) (human trafficking claims); Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1101–02 (9th Cir. 2009) (collecting citations).

Hopefully they succeed, and the 5th Circuit overturns the initial ruling.

However, the bigger question may be whether Benioff has learned his lesson, and now actually understands why Section 230 is so important to the wider internet, including his own company.


Posted on Techdirt - 3 May 2021 @ 12:06pm

Rep. Lauren Boebert Decides To Streisand Parody Site Making Fun Of Her, Threatens To Take Legal Action Against It

from the supporting-the-1st-Amendment dept

Rep. Lauren Boebert is one of the new crew of elected Republicans who claims to be "pro-Constitution" and "pro-freedom" but when you get down into the details, it seems that the only part of the Constitution that matters to her is the 2nd Amendment. The website for her campaign proudly states that she's "Standing for Freedom" and is "Pro-Freedom, Pro-Guns, Pro-Constitution."

You do have to wonder if she skipped over the 1st Amendment in her rush to defend the 2nd, however. This morning, her press secretary Jake Settle (who came to her office after working on Mike Pence's communications team) sent quite a fascinating threat email to the operator of a Lauren Boebert parody site, TheLaurenBoebert.com.

The operator of that site, comedy writer Toby Morton, tweeted an image of the letter this morning:

I have since seen the original email that does, indeed, appear to come from Jake Settle. I have emailed Jake to confirm his side of the story, and asked him to answer a few questions as well. At the time of writing he has not responded. The email says the following:

To whom it may concern,

This website (https://www.thelaurenboebert.com/) needs to be taken down since the photos on here are copyrighted property of the U.S. Federal Government. They are the property of the office of Congressman Lauren Boebert, and your use of them is unauthorized and illegal.

Additionally, the entire website is a defamatory impersonation, and it goes against relevant terms of service and U.S. law. Please remove immediately or face further action.

Sincerely,

Jake Settle | Press Secretary
Rep. Lauren Boebert (CO-03)

If you're wondering what the parody site looks like, it does use the same main image as Lauren's official Congressional site (different from her campaign site). Here's what the mobile version of the parody site looks like:

And here's her official Congressional site (note the same image):

The parody site honestly doesn't have that much more on it. It shows a couple of Boebert tweets, then has links to some other parody sites of wacky Republican members of Congress and Senators, and says that it's a parody site (though saying so isn't a talisman that automatically makes it true). Update: There actually is a bit more on the website that I had missed on first pass: under the "blog" tab, there are some posts that include a number of images of Boebert. It is extremely unlikely that the copyright to any of those works is held by the US government. It is possible that some are held by Boebert herself (it's unclear if her Congressional office would hold the copyright), but we'll get there.

Before we even dig into the legal analysis of Settle's threat letter, let's just make one thing clear: whether or not there's a legal leg to stand on, Settle's threat is stupid. All this has served to do is to Streisand a parody site that likely wasn't receiving much, if any, traffic prior to this. Indeed, Morton has confirmed to me that the site hadn't received much traffic, but now tons of people are looking at it. At best, Boebert comes off looking like a thin-skinned, insecure whiner who can't take a mild parody. At worst, she comes off as a censorial bully who has no respect for "freedom" if it's associated with the 1st Amendment.

As for the legal issues... Settle's email is a jumble of confusing concepts, so it's not even remotely clear what any actual legal claim might look like (which is not to say there are none -- just that Settle's email most certainly does not lay out a clear theory of one). First up, the copyright claims are a mess.

This website (https://www.thelaurenboebert.com/) needs to be taken down since the photos on here are copyrighted property of the U.S. Federal Government. They are the property of the office of Congressman Lauren Boebert, and your use of them is unauthorized and illegal.

It's not entirely clear how they could be both the "copyrighted property" (which is not a thing) of "the U.S. Federal Government" and "the property of the office of Congressman Lauren Boebert" at the same time. There's only the one image on the front of the site as far as I can see, and it might be true that Boebert holds the copyright to it. A lot of people responded to Toby's tweet and falsely claimed that since it's on a government website it's public domain. That is not true. US copyright law does say that works created by federal government employees as part of their duties are in the public domain and not subject to copyright. But (and this is important) that does not mean every work the government uses or posts to its website is automatically in the public domain. Other copyright holders can transfer a work to the government, and the government can then retain the copyright.

In this case, it seems highly unlikely that the work was created by the federal government. It is quite likely that it was created by Lauren Boebert's campaign or someone closely associated with Boebert and the campaign. There are then all sorts of possibilities about the copyright. It could be held by the photographer. It could be held by the Boebert campaign, or by Boebert herself if the copyright was assigned to her. In theory, it could have been assigned to the federal government, but that seems highly unlikely.

The claim that the image is the "copyrighted property" of the US government seems like likely nonsense. The claim that it's held by Boebert's office is not entirely crazy. However, even if that were true, Morton would have a very strong fair use argument, seeing as he's set up a parody site. Parody is one of the quintessential examples of fair use. That said, as the Supreme Court has explained, the context of the use of the original work in a parody does matter, so it's not automatically fair use:

In parody, as in news reporting, see Harper & Row, supra, context is everything, and the question of fairness asks what else the parodist did besides go to the heart of the original.

So, perhaps there's some argument somewhere that would persuade a court that this is not fair use, but that seems unlikely. Given that this parodies a politician, and that criticizing or even mocking politicians is an important element of our 1st Amendment free speech protections, it seems highly likely that any court would come down on the side of fair use should a copyright claim be brought.

As for the images on the "blog" portion of the site, there is perhaps an argument that some of those copyrights are held by Boebert (certainly not the federal government). Could those lead to a lawsuit? Very possibly, but if that were the case, the copyright holder should have sent a takedown notice first. Whether or not those images are fair use is a tougher call. They are used for criticism and commentary, which is part of the fair use analysis, but there isn't that much commentary on them, so it really would depend on where the court landed. At the very least, it doesn't make much sense for her press secretary to be the one sending out that threat letter.

As for the other claim, of "defamatory impersonation," well...

Additionally, the entire website is a defamatory impersonation, and it goes against relevant terms of service and U.S. law. Please remove immediately or face further action.

"Defamatory impersonation" is not a thing. Defamation is. But it's difficult to see anything on the website that would qualify as a defamatory statement of fact. The only real statements on the website about Boebert are calling her a "racist" and a "Qanon sympathizer" and both of those are either protected opinion, or substantially true. Either way, there's simply no way any defamation claim here would meet the actual malice standard necessary for defamation of a public figure (and as a member of Congress, Boebert is undoubtedly a public figure).

So, even if there is a legal claim buried in here, it's difficult to see it getting very far. But, either way, just sending such a threat is inherently stupid.


Posted on Techdirt - 3 May 2021 @ 10:47am

What3Words Sends Ridiculous Legal Threat To Security Researcher Over Open Source Alternative

from the never-use-what3words dept

A couple years ago, we wrote about What3Words, and noted that it was a clever system that gives people an easy way to share exact locations in an easily communicated manner (every bit of the globe can be described with just 3 words -- so something like best.tech.blog is a tiny plot near Hanover, Ontario). While part of this just feels like fun, a key part of the company's marketing message is that the system is useful in emergency situations where someone needs to communicate a very exact location quickly and easily.
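To make the underlying idea concrete, here's a minimal sketch of how a grid-to-words scheme can work in general. To be clear, this is a toy illustration, not What3Words' actual (proprietary) algorithm: the tiny wordlist, the cell size, and the function names are all invented for the example.

# A toy grid-to-words encoder. NOT What3Words' real algorithm or wordlist;
# everything here is invented to illustrate the general approach.

CELLS_PER_DEGREE = 10000  # ~11 meter cells; W3W reportedly uses ~3 meter squares
WORDLIST = ["apple", "blog", "best", "cloud", "maple", "river", "stone", "tech"]

def latlon_to_index(lat, lon):
    """Map a coordinate to a single integer index over a global grid."""
    cols = 360 * CELLS_PER_DEGREE
    row = int((lat + 90) * CELLS_PER_DEGREE)
    col = int((lon + 180) * CELLS_PER_DEGREE)
    return row * cols + col

def index_to_words(index):
    """Write the cell index in base len(WORDLIST) as a three-word address."""
    n = len(WORDLIST)
    index %= n ** 3  # 8 words only cover 512 cells, so this toy wraps around;
                     # a real wordlist of tens of thousands of words covers the globe
    first, rest = divmod(index, n * n)
    second, third = divmod(rest, n)
    return ".".join((WORDLIST[first], WORDLIST[second], WORDLIST[third]))

print(index_to_words(latlon_to_index(44.15, -81.03)))  # prints some.three.words

Run the same mapping in reverse (words to index, index back to a cell's center point) and you have a decoder, which is presumably the sort of thing the WhatFreeWords project discussed below reimplemented. The point is that the math itself is trivial; the value is all in the wordlist and the cell assignments, which is exactly what the company claims to own.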

However, as we noted in our article, as neat and clever as the idea is, it's very, very proprietary, and that could lead to serious concerns for anyone using it. In our article, we wrote about a bunch of reasons why What3Words and its closed nature could lead to problems -- including the fact that the earth is not static and things move around all the time, such that these 3-word identifiers may not actually remain accurate. But there were other problems as well.

And, apparently, one of those problems is that they're censorial legal bullies. Zack Whittaker has the unfortunate story of how What3Words unleashed its legal threat monkeys on a security researcher named Aaron Toponce. Toponce had been working with some other security researchers who have been highlighting potentially dangerous flaws in the What3Words system beyond those we mentioned a few years back. The key problem is that some very similar 3-word combos are very close to one another, such that someone relying on them in an emergency could risk sending people to the wrong location.

The company insists that this is rare, but the research (mainly done by Andrew Tierney) indicates otherwise. He seemed to find a fairly large number of similar 3-word combos near each other. You can really see this when Tierney maps out some closely related word combos:

In a follow-up article, Tierney detailed a bunch of examples where this confusion could be dangerous. Some of them are really striking. Here's just one:

“I think I’m having a heart attack. I’m walking at North Mountain Park. Deep Pinks Start.” – 1053m.

(Try reading both out)

https://what3words.com/deep.pink.start

https://what3words.com/deep.pinks.start
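It's easy to imagine how a researcher would hunt for these confusable pairs programmatically. Here's a rough sketch of the kind of check Tierney's findings imply. Note the assumptions: resolve() stands in for whatever address-to-coordinates lookup a researcher has access to, and the pluralization heuristic and the distance threshold are illustrative choices, not anything from Tierney's actual methodology.

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def sound_alike_variants(address):
    """Yield simple confusable variants, e.g. deep.pink.start -> deep.pinks.start."""
    words = address.split(".")
    for i in range(len(words)):
        variant = list(words)
        variant[i] += "s"
        yield ".".join(variant)

def confusable_pairs(address, resolve, max_m=2000):
    """Return sound-alike variants that are valid addresses within max_m meters.
    resolve() is a hypothetical address -> (lat, lon) lookup, e.g. an API client."""
    lat, lon = resolve(address)
    nearby = []
    for variant in sound_alike_variants(address):
        try:
            vlat, vlon = resolve(variant)
        except KeyError:  # variant isn't a valid address in this hypothetical lookup
            continue
        if haversine_m(lat, lon, vlat, vlon) <= max_m:
            nearby.append(variant)
    return nearby

A sweep like this over addresses in populated areas is the sort of thing that would surface pairs like deep.pink.start and deep.pinks.start, which sit roughly a kilometer apart.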

Anyway, Toponce had been tweeting about Tierney's findings, and talked about WhatFreeWords, which had been "an open-source, compatible implementation of the What3Words geocoding algorithm." It was a reverse-engineered version of the proprietary What3Words system. That tool was created back in 2019, but a week after it went online, What3Words' lawyers sent incredibly overbroad takedown letters to everyone even remotely connected to WhatFreeWords, and had it pulled offline basically everywhere.

First up: this is ridiculous. While reverse engineering is unfortunately fraught with legal risk, there are many areas in which it is perfectly legal, and it seems like the WhatFreeWords implementation should be among them. But it appeared to have been a fun side project, and not worth the legal headache.

Even though WhatFreeWords was disappeared from the world in late 2019, it appears that Toponce still had some of the code. So in tweeting about Tierney's research, he offered up the tool to researchers to help investigate more problems with What3Words, similar to what Tierney had found.

And that's when What3Words' lawyers pounced. And, in pouncing, the mere chilling effect of the legal threat did its work:

Toponce also admits he couldn't even sleep after receiving the threat letter. This is an underappreciated aspect of the insanely litigious nature of many censorial bullies these days. Even if you're in the right, getting sued can be completely destructive. Toponce was trying to help security researchers better research an application that is promoted as safe, and security researchers should be allowed to make use of reverse engineering to do exactly that. But, What3Words and their bullying lawyers made sure that's impossible.

To be fair to their bullying lawyers, the threat letter is not as aggressive as some others, and they even make it explicit that they are not demanding that Toponce stop criticizing the company:

In this connection, and to be clear, our client does not require the deletion of your criticism of and feedback in respect of its service.

But... it still makes pretty stringent demands.

i) delete all copies of "What Free Words" and any other works derivative of W3W's software and wordlist presently in your possession or under your control;
ii) confirm, to the best of your knowledge, the identities of all parties / individuals to whom you have provided copies or derivations of the software and/or wordlist;
iii) agree that you will not in the future make further copies or derivations of and/or distribute copies or derivations of the software and/or wordlist;
iv) delete any Tweets or other online references made to the copies / derivations of our client's software and wordlist and that are connected with or emanate from the "What Free Words", and agree not to make similar representations in the future.

Of course, there are some questions about what intellectual property is actually being infringed upon here as well. When the company's lawyers got the original WhatFreeWords site taken down, they claimed copyright and trademark rights, though extraordinarily broadly. They claim their own software is covered by copyright, but WhatFreeWords isn't using their software. They also claim that all the 3-word combos are covered by copyright and... eh... that might fly in the UK, where W3W is based, but in the US it would be harder to claim that three random word combos are creative enough to get a copyright. Also, in the US there would be a strong fair use defense. Unfortunately, the UK has a ridiculous concept known as "database rights" that lets you claim a right over a mere collection of things, even if you have no claim to the underlying rights. But, even so, the UK has a fair dealing exception for research and private study, which seems like it should apply here.

As for the trademark claims, well, no one's going to be confused, since it's pretty clear that WhatFreeWords was designed explicitly not to be from What3Words, and in this particular case, it's not being offered widely, just to knowledgeable security researchers. Even more extreme: the original threat letter over WhatFreeWords claimed that there could be criminal penalties for violating consumer protection laws, which is just insane.

Still, as Mike Dunford notes in his thread about this situation, W3W's decision to focus on locking up and threatening everyone perhaps explains why so few people know about or use What3Words. Imagine if the company had built this as an open tool that others could build on and incorporate into other offerings. Then others could experiment and innovate and get more people to adopt it. Because it's proprietary, and locked down with threats and asshole lawyers, there's simply no reason to bother.

The only proper response to this is never, ever use What3Words for anything that matters. Beyond not giving in to censorial, abusive bullies, their legal reaction to a security researcher doing reverse engineering work to help find potentially dangerous problems with What3Words screams loudly to the world that What3Words has no confidence that its products are safe. They're scared to death of security researchers being able to really test their work.

Both of these reasons mean that What3Words should be remembered as little more than a failed.dumpster.fire rather than the cool.mapping.idea it could have been.


Posted on Tech & COVID - 3 May 2021 @ 9:30am

Hollywood Lobbyists So Afraid Of Any Public Benefit From 'Intellectual Property' That They're Trying To Block COVID Vaccine Sharing

from the you-did-what-now? dept

Throughout the COVID pandemic, it's been truly shameful to watch how patent maximalists have tried to insist that we just need more patents to deal with COVID -- even though the incredible breakthroughs that brought such quick development of vaccines were not due to patents, but rather the free and open flow of information from a bunch of researchers and scientists who didn't care about whether or not information was locked up for profit, but did care about saving millions of lives.

And now that we've got vaccines, we're dealing with significant problems in rolling them out around the world -- and patents are often in the way, holding that rollout back. And we actually have a way of dealing with that: what's known as a TRIPS waiver. TRIPS is the Agreement on Trade-Related Aspects of Intellectual Property Rights, which set up a variety of standards among member nations and the WTO regarding intellectual property. I have many problems with TRIPS (and the WTO), but TRIPS does include a process to grant waivers on intellectual property rights. This was in response to (very legitimate!) concerns from less-well-off nations that rich nations would use the patent system to block access to important life-saving medicines.

So, to ease such concerns, the TRIPS agreement includes a process by which the WTO can grant a compulsory licensing regime that will allow others to make patented drugs, and thus increase availability. A key point of this so-called waiver is that it allows for better allocations of certain drugs during medical emergencies. Given that, issuing such a waiver right now seems like a no-brainer. But... it has not been.

India and South Africa put forth a fairly straightforward waiver request for dealing with COVID-19. The key part of the request is that intellectual property requirements under TRIPS solely in relation to the "prevention, containment or treatment of COVID-19" should be waived during the course of the pandemic. It seems pretty straightforward. Even reliably patent-maximalist sites like IP Watchdog are now publishing articles saying that the TRIPS waiver "is a necessary first step towards facilitating increased, rapid production of vaccines" and noting that it won't undermine the value of innovation in any way.

We've already noted that Big Pharma is lobbying against it -- which is to be expected. However, what is perhaps less expected is the fact that Hollywood is vehemently lobbying against it as well. Why? Well, they claim that because the waiver is not limited to just patents, it will be used to wipe away copyright as well.

This is... misleading at best. It is true that the waiver would cover copyrights, but only in an extremely limited fashion. As the part I quoted above notes, it only applies to intellectual property protections that are blocking the prevention, containment, and treatment of COVID-19. And, that can include a very limited set of copyrights. For example, there still remain shortages of ventilators in many parts of the world, and early on in the pandemic, people were working on 3D printing replacement parts to help deal with this extreme shortage. However, with some companies issuing threats over these 3D printed parts, there are legitimate concerns that copyright could be used to shut down such operations. Another area where a copyright waiver is likely to help is in allowing researchers easier access to important scientific journals and research that may help them develop more and better solutions.

As if to preemptively calm Hollywood down, South Africa and India included an explicit statement in the waiver request to say that the waiver cannot be used for entertainment products: "The waiver in paragraph 1 shall not apply to the protection of Performers, Producers of Phonograms (Sound Recordings) and Broadcasting Organizations under Article 14 of the TRIPS Agreement." That's literally the 2nd paragraph in a four-paragraph waiver request. Already, it's kind of insulting that officials crafting this waiver request in an attempt to save lives had to waste time making sure that Hollywood wouldn't get angry at them.

And even then it didn't work.

The Motion Picture Association, which represents major movie and television studios, deployed five lobbyists to influence Congress and the White House over the waiver. The Association of American Publishers as well as Universal Music have similarly revealed that they are actively lobbying against it.

Neil Turkewitz, a former Recording Industry Association of America official, blasted the proposal on Twitter, claiming it will harm musicians, performers, and other cultural workers who are already struggling. 

“As COVID has undermined the livelihoods of creators around the [globe emoji], you want to further expand their precarity—in the name of justice?” Turkewitz wrote.

The Turkewitz quote is particularly disgusting. There is nothing in the waiver that will harm the livelihood of creators. Indeed, getting the world vaccinated is how we bring things back to normal and reopen the world so that those musicians, performers, and other cultural workers can survive. For him to even suggest that this waiver somehow harms them is not just disinformation, it's disinformation that will kill people. It's disgusting.

And the lobbying by Hollywood goes beyond just what was reported in the above-linked Intercept article. ITIF, the Information Technology and Innovation Foundation, may sound like a think tank focused on the tech industry, but it has long had close ties to Hollywood (and, indeed, an ITIF paper was the basis for the terrible SOPA/PIPA bills a decade ago). It recently came out with a laughably ridiculous attack on the waiver, claiming that there's no possible way copyrights should be included in it:

This latest affront to IP rights is, to say the least, ill-placed, if not misinformed. There is simply no compelling reason to focus on the suspension of copyright in this case.

Oh come on. People are fucking dying and this is the fight you want to have? It's not "suspension of copyright" that people are asking for. They're asking for a narrowly tailored, specific exemption to excessively restrictive copyright solely in cases where that exemption is needed to help fight COVID. The idea that this is "ill-placed" or "misinformed" is pure propaganda.

And then, just as I was putting the finishing touches on this article, Senator Thom Tillis, who has made it clear that his main goal in the Senate is to push for Hollywood's extremist interests, wrote up one hell of an op-ed against the waiver. It is chock full of nonsense.

Yet, waiving intellectual property rights abroad would not hasten the end of COVID-19. It would harm our domestic IP industries, hand India and China valuable government-supported research free of charge and weaken the global IP system for decades to come. Just last week, in remarks before the Intellectual Property Owners Association (IPO) Spring Summit Daren Tang, Director General of the World Intellectual Property Organization (WIPO), stated that a strong intellectual property ecosystem was primarily responsible for allowing COVID-19 vaccines to “be brought to people in the fastest time in history.” I wholeheartedly agree...

First off, it wouldn't "harm" any domestic industry. That's nonsense. And if the research is for saving lives and (as Tillis states) was "government-supported," then the public already paid for it, and it should be freely available to anyone.

Second, just because a longtime advocate of patent and copyright maximalism says something doesn't automatically make it true. There is no evidence whatsoever that "strong intellectual property... was primarily responsible for allowing COVID-19 vaccines" to come about. Indeed, the stories about how the vaccines were developed show the opposite: the free flow of information and ideas among researchers and scientists around the globe, and their agreement to work together rather than lock up ideas, is what helped make it possible.

I can understand pharma companies fighting against the waiver, even if that alone is disappointing given the situation. But Hollywood and its friends flat out lying about it and whipping up a moral panic, claiming this will somehow hurt the creative industries, is dangerous disinformation.


Posted on Free Speech - 30 April 2021 @ 10:45am

Disney Got Itself A 'If You Own A Themepark...' Carveout From Florida's Blatantly Unconstitutional Social Media Moderation Bill

from the welcome-to-GoogleLand-and-FacebookWorld dept

Earlier this year, we noted that a wide variety of states (mostly those controlled by angry, ignorant Republicans) were looking to pass blatantly unconstitutional bills that would force social media companies to host all speech and stop moderating. Florida seemed to be leading the way, and now both houses of the Florida legislature have passed their bill, which will serve only to waste a large amount of taxpayer dollars when it is inevitably thrown out in court.

The bill, like so many other such state bills, violates the 1st Amendment by compelling websites to host speech they have no desire to host. It's not even worth going through the bill bit by bit to catalogue its many unconstitutional parts, but like the rest of these bills, it says that social media websites (of a certain size) will be greatly restricted in any effort to moderate their sites to make them safer. There is no way this is even remotely constitutional.

But, it gets worse. Seeing as this is Florida, which (obviously) is a place where Disney has some clout -- and Disney has famously powerful lobbyists all over the damn place -- it appears that Disney made sure the Florida legislature gave it a carveout. Florida Senator Ray Rodrigues introduced an amendment to the bill, which got included in the final vote. The original bill applied to any website with 100 million monthly individual users globally. The Rodrigues amendment adds this exemption:

The term does not include any information service, system, Internet search engine, or access software provider operated by a company that owns and operates a theme park or entertainment complex as defined in 509.013, F.S.

In other words, Disney (which owns a ton of companies with large internet presences) will be entirely exempt. Ditto for Comcast (which owns Universal Studios) and a few others. For what it's worth, the backers of this amendment claimed it was needed so that Disney could moderate reviews on its Disney Plus streaming service... but that makes no sense at all.
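
To see just how arbitrary the carveout is, it helps to write the bill's coverage test out as a rule. Below is a minimal Python sketch based purely on the thresholds described above; the function and parameter names are hypothetical, and the user counts are illustrative, not real figures.

    # A sketch of the Florida bill's coverage test, as described above.
    # Hypothetical names and illustrative numbers only.

    def is_covered_platform(monthly_global_users: int, owns_theme_park: bool) -> bool:
        """Would a platform fall under the bill's moderation restrictions?"""
        # The Rodrigues amendment exempts any service operated by a company
        # that owns a theme park or entertainment complex (509.013, F.S.).
        if owns_theme_park:
            return False
        # The original bill's threshold: 100 million monthly individual users globally.
        return monthly_global_users >= 100_000_000

    # Two services with identical user bases get opposite treatment:
    print(is_covered_platform(150_000_000, owns_theme_park=True))   # False: exempt
    print(is_covered_platform(150_000_000, owns_theme_park=False))  # True: covered

Note that nothing in that test has anything to do with how a service actually moderates; coverage turns entirely on whether the parent company happens to own a theme park.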

First, Disney Plus has nothing to do with theme parks. If the goal is to allow moderation of reviews on streaming platforms, then shouldn't the carveout be... for review sections on streaming platforms? Second, the mere fact that the original bill would have created problems for the famously family-friendly Disney to moderate reviews shows the problem with the entire bill. The whole point of Section 230 and content moderation is to allow websites to moderate in the way they see fit for their own communities -- so sites like Disney can moderate to keep a "family friendly" experience, and others can moderate to match their own community standards.

Of course, that also means that if this bill is somehow found to be constitutional (and it will not be...), it won't be long until you see 25 acres (the minimum the statute requires) somewhere in Florida suddenly under construction for the opening of GoogleLand, FacebookWorld, or TwitterVillage. I, for one, can't wait to ride the AlgoSwings in GoogleLand and the Infinite Scroll Coaster at TwitterVillage.



