Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick



Posted on Techdirt - 13 September 2019 @ 10:45am

Content Moderation Is Impossible: Facebook Settles Legal Fight Over Famous Painting Of A Woman's Genitals

from the which-policy-does-that-fit-under? dept

Just a few months ago, as part of our ongoing "content moderation at scale is impossible" series, we wrote about how Facebook has spent over a decade now struggling with how to deal with naked female breasts. There are a lot more details in that post, but the short version is that Facebook started with a "no nudity" policy, which got difficult when people posted famous artwork or photos of breastfeeding mothers. Facebook keeps trying to adapt its policy, but no matter what it does it keeps running into more and more edge cases.

For the last eight years, Facebook has been fighting in French courts over something similar. A French school teacher had posted a copy of Gustave Courbet's 1866 oil painting, The Origin of the World. I'm not going to post a thumbnail here, because I'm sure it'll set off all sorts of other content moderation algorithms. You can click above to see it, though it's basically a painting of a naked woman, from a point of view between her legs looking upward (which may or may not be SFW depending on where you work, so be warned). Facebook cancelled the teacher's account and he sued.

Much of the dispute revolved around jurisdiction. Facebook wanted the case handled in California. The teacher, not surprisingly, wanted it tried in France. The teacher won. Back in early 2018, the French court ruled that Facebook was wrong to shut his account down -- but since the teacher had apparently been able to sign up for a second account, it said he wasn't entitled to any damages. The teacher was going to appeal, but, according to Artnet, the case has now settled, with both parties agreeing to make a donation to Le MUR, which is described as "the French street art association."

Given the situation, that seems like a perfectly reasonable end result (though an eight-year legal dispute to get there does not). I also find it somewhat amusing that a French court decided to get into the business of determining whether or not Facebook's moderation choices were "wrong," but again it highlights the point that we've raised over and over again. Everyone who thinks it's easy to make these moderation decisions is wrong. Even with this particular piece of art, I'd bet there's a big difference in opinions (especially between the US and France). Just a few months ago, we had various US Senators and some prudish panelists whining about the awful content that kids were exposed to online. I'm guessing they would not have approved of Courbet's work showing up on Facebook at all.

And, of course, that helps to demonstrate the problem. What is Facebook supposed to do here? You have a French court telling them it must be left up, while you have American politicians saying stuff like this must be taken down. There is no right answer, which is kind of the point.


Posted on Techdirt - 13 September 2019 @ 6:49am

Twitter Stands Up For Devin Nunes' Parody Accounts: Won't Reveal Who's Behind Them

from the good-for-twitter dept

A couple weeks ago, we noted that the judge in Virginia presiding over Devin Nunes' bullshit censorial lawsuit against Twitter, some parody Twitter accounts, and political strategist Liz Mair, had demanded that Twitter reveal to the judge who was behind the two parody accounts ("Devin Nunes' Cow" and "Devin Nunes' Mom"). As we pointed out at the time, this request was highly unusual. Yes, the judge was in the process of determining whether the case belonged in Virginia, so he wanted to know if the people behind the accounts were based there, but there are ways to do that while protecting the anonymity of the account holders (anonymity being a 1st Amendment right). Specifically, he could have just asked whether or not the account holders appeared to be based in Virginia.

We also wondered if Twitter would refuse the request -- as it has done in the past. And the answer is yes. Twitter has told the judge it won't comply, but did say that neither of the account holders lived in Virginia -- which should satisfy the only legal reason why the judge might want to know who they were.

Twitter on Wednesday told the judge it does not intend to disclose the names of the authors of accounts known as Devin Nunes’ Cow and Devin Nunes’ Mom, according to documents obtained by McClatchy.

“Defending and respecting the user’s voice is one of our core values at Twitter,” a Twitter spokesperson said in response to questions about the court filing. “This value is a two-part commitment to freedom of expression and privacy.”

Twitter in a message to other defendants in the case said it told the judge that the authors of the accounts do not live or work in Virginia....

[....]

“Undersigned counsel has been in contact with lawyers who have advised Twitter that they represent, respectively, the user or users of the @DevinCow account and the user or users of the @DevinNunesMom account,” the letter states. “Each of those counsel has authorized me to inform the Court, through this letter, that their respective client or clients do not reside or work in Virginia and never used the account while physically present in Virginia.”

This is good to see, though it remains to be seen how the judge feels about Twitter pushing back on his request. Again, the whole case is ridiculous and should be thrown out for a whole variety of reasons. But it's already worrisome that the judge thought it was fine to unmask the parody account holders, even if he promised that he would keep the identities secret.


Posted on Techdirt - 12 September 2019 @ 3:30pm

Encryption Working Group Releases Paper To 'Move The Conversation Forward'

from the what-conversation? dept

One of the frustrating aspects of the "debate" (if you can call it that) over encryption and whether or not law enforcement should be able to have any kind of "access" is that it's been no debate at all. You have people who understand encryption who keep pointing out that what is being asked of them is impossible to do without jeopardizing some fairly fundamental security principles, and then a bunch of folks who respond with "well, just nerd harder." There have been a few people who have suggested, at the very least, that "a conversation" was necessary between the different viewpoints, but mostly when that's brought up it has meant non-technical law enforcement folks lecturing tech folks on why "lawful access" to encryption is necessary.

However, it appears that the folks at the Carnegie Endowment put together an actual working group of experts with widely varying viewpoints to see if there was any sort of consensus or any way to move an actual conversation forward. I know or have met nearly everyone on the working group, and it's an impressive group of very smart and thoughtful people -- even those I frequently disagree with. It's a really good group and the paper they've now come out with is well worth reading. I don't know that it actually moves the conversation "forward" because, again, I'm not sure there is any conversation to move forward. But I do appreciate that it got past the usual talking points. The paper kicks off by saying that it's going to "reject two straw men," which are basically the two positions frequently stated regarding law enforcement access to encrypted communication:

First of all, we reject two straw men—absolutist positions not actually held by serious participants, but sometimes used as caricatures of opponents—(1) that we should stop seeking approaches to enable access to encrypted information; or (2) that law enforcement will be unable to protect the public unless it can obtain access to all encrypted data through lawful process. We believe it is time to abandon these and other such straw men.

And... that's fine, in that the first of those statements is not actually the position held by those who support strong encryption. I mean, there have been multiple reports detailing how we're actually in the "golden age of surveillance", with law enforcement having far greater access to basically every bit of communications than ever before, and plenty of tools and ways to get information that is otherwise encrypted. Yes, it's true that some information might remain encrypted, but no one has said that law enforcement shouldn't do their basic detective work in trying to access information. The argument is just that they shouldn't undermine the basic encryption that protects us all to do so.

Where the paper gets perhaps more interesting is that it suggests that any debate about access to encrypted data should focus on "data at rest" (i.e., data that is encrypted on a device) rather than "data in motion" which is the data that is being transferred across a network or between devices in some form. The paper does not say that we should poke holes in encryption that protects data at rest, and says, explicitly:

We have not concluded that any existing proposal in this area is viable, that any future such proposals will ultimately prove viable, or that policy changes are advisable at this time

However, it does note that if there is a fruitful conversation on this topic, it's likely to be around data at rest, rather than elsewhere. And, from there it notes that any discussion of proposals for accessing such data at rest must take into account both the costs and the benefits of such access to determine if it is viable. While some of us strongly believe that there is unlikely to ever be a proposal where the costs don't massively outweigh the benefits, this is the correct framework for analyzing these things. And it should be noted that, too often, these debates involve one group only talking about the benefits and another only talking about the costs. Having a fruitful discussion requires being willing to measure both.
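To make the paper's core distinction concrete, here's a minimal sketch of the difference between data at rest and data in motion. The library choices (Python's third-party cryptography package and the standard ssl module) and the example.com endpoint are my own illustrative assumptions, not anything from the working group's paper:

# Data at rest: encrypting a file on a device with a locally held key.
# (Requires the third-party "cryptography" package.)
from cryptography.fernet import Fernet
import socket
import ssl

key = Fernet.generate_key()                      # in practice derived from a passcode or secure enclave
ciphertext = Fernet(key).encrypt(b"notes stored on the phone")
with open("notes.enc", "wb") as f:
    f.write(ciphertext)                          # without the key, the bytes on disk are unreadable

# Data in motion: the same kind of bytes protected by TLS only while crossing the network.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))                # encrypted in transit, plaintext again at each endpoint

The debate the paper scopes out is largely about the first half of that sketch: "lawful access" proposals for data at rest are, in effect, arguments about whether anyone besides the device owner should be able to recover that locally held key.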

From there, the group sets up a framework for how to weigh those costs and benefits -- including setting up a bunch of use cases against which any proposal should be tested. Again, this seems like the right approach to systematically exploring and stress testing any idea brought forth that claims it will "solve" the "problem" that some in law enforcement insist encryption has created for them. I am extremely skeptical that any such proposal can pass such a stress test in a manner that suggests that the benefits outweigh the costs -- but if those pushing to undermine encryption require a "conversation" and want people to explore the few proposals that have been brought up, this is the proper, and rigorous, way to do so.

The question, though, remains as to whether or not this will actually "move the conversation forward." I have my doubts on that, in part because those who keep pressing for undermining encryption have never appeared to have much interest in actually having this type of conversation. They have mostly only seemed interested in the "nerd harder, nerds" approach, which assumes smart techies will give them their magic key without undermining everything else that keeps us secure. I fully expect that it won't be long before a William Barr or a Chris Wray or a Richard Burr or a Cy Vance starts talking nonsense again about "going dark" or "responsible encryption" and ignores the framework set out by this working group.

That's not to say this wasn't a useful exercise. It likely was, if only to be able to point to it the next time one of the folks listed above spouts off again as if there are no tradeoffs and as if it's somehow easy to solve the "encryption problem" as they see it.


Posted on Techdirt - 12 September 2019 @ 9:26am

Intellectual Property Is Neither Intellectual, Nor Property: Discuss

from the have-at-it dept

Well over a decade ago I tried to explain why things like copyright and patents (and especially trademarks) should not be considered "intellectual property," and that focusing on the use of "property" helped to distort nearly every policy debate about those tools. This was especially true among the crowd who consider themselves "free market supporters" or, worse, "against government regulations and handouts." It seemed odd to me that many people in that camp strongly supported both copyright and patents, mainly by pretending they were regular property, while ignoring that both copyrights and patents are literally centralized government regulations that involve handing a monopoly right to a private entity to prevent competition. But supporters seemed to be able to whitewash that, so long as they could insist that these things were "property", contorting themselves into believing that these government handouts were somehow a part of the free market.

For years I got strong pushback from people when I argued that copyright and patents were not property -- and a few years ago, I modified my position only slightly. I pointed out that the copyright or the patent itself can be considered property (that is, the "right" that is given out by the government), but not the underlying expression or invention that those rights protect. Indeed, these days I think so much of the confusion about the question of "property", when it comes to copyright and patents, is that so many people (myself included at times) conflate the rights given by the government with the underlying expression or invention that those rights protect. In other words, the government-granted monopoly over a sound recording does have many aspects that are property-like. But the underlying song does not have many property-like aspects.

Either way, it's great to see that the Niskanen Center, a DC think tank that continually does good work on a variety of subjects, has decided to try to re-climb that mountain to explain to "free market" and "property rights" supporters why "intellectual property is not property." If you've been reading Techdirt for any length of time, most of the arguments won't surprise you. However, it is a very thoughtful and detailed paper that is worth reading.

Imagine two farms sitting side by side in an otherwise virgin wilderness, each of them homesteaded by a husband-and-wife couple (let’s call them Fred and Wilma and Barney and Betty) — two parcels of newly created private property appropriated from the commons by productive labor. One day, as Fred and Wilma are both working outside, they both notice Betty walking through the orchard of apple trees that Barney and she had planted some years back and which are now just ready to bear fruit for the first time. As Betty picks some of the first ripening apples to use in baking a pie, she sings an enchantingly lovely ballad that she and Barney had made up together back when they were courting. For the rest of the day Wilma can’t stop thinking about that beautiful song, while Fred can’t stop thinking about those trees full of delicious apples. That night Wilma sings the song to her baby daughter as a lullaby. Fred, meanwhile, sneaks over onto Barney and Betty’s property, picks a sack full of apples, tiptoes back to his property and proceeds to eat the lot of them, feeding the cores to his pigs before heading back inside.

Do you think that Fred and Wilma both did something wrong? Are they both thieves? Did both of them violate Barney and Betty’s rights? After all, Fred stole their apples, and Wilma “stole” their song — that is, she sang it to someone else without asking for permission. If you’re having trouble seeing Fred and Wilma’s actions as morally equivalent, it’s because of a fundamental difference between the two types of “property” they took from Barney and Betty.

That fundamental difference is that Barney and Betty’s song, like all ideal objects, is a nonrivalrous good. In other words, one person’s use or consumption of it in no way diminishes the ability of others to use or consume it. As expressed with characteristic eloquence by Thomas Jefferson (who perhaps not coincidentally viewed patents and copyrights with skepticism), the “peculiar character [of an idea] is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”

By contrast, physical objects like apples are rivalrous: Once Fred and his pigs had finished devouring the ones Fred stole, they were gone and nobody else could consume them. Even when physical objects aren’t physically consumed by their owners — think paintings or plots of land — there is still unavoidable rivalry in using, enjoying, and disposing of them. The owner exercises that control over the owned object, and therefore nobody else does.

This is why it’s clear that Fred inflicted harm on his neighbors, since he took the fruit that they grew and now they don’t have it anymore. But Barney and Betty still have their song; the fact that Wilma sang it did nothing to prevent them from singing it anytime they want to. So, if Wilma did harm to Barney and Betty, what exactly is it?

The whole paper is really worth reading, and digs in on how and why people create, the nature of externalities in the creative process, and what actual data shows on the incentives of copyright and patents in driving innovation and creativity. The paper also digs deep on how excessive monopoly rights vastly hinder follow-on creativity and innovation (which is how most innovation and creativity come about in the first place).

In the case of copyright, excessive internalization is an impediment to the process of borrowing that is essential for the growth of creative works. While each artist may contribute new ideas to the cultural landscape, their contributions are based on the previous body of work. We all begin as consumers of ideas — and then some of us go on to create new ones. Take the case of Star Wars. The Jedi, Darth Vader, and the Death Star were all new in 1977, but George Lucas relied heavily on older ideas to make them possible. It is common knowledge that Lucas borrowed from Joseph Campbell’s Hero With a Thousand Faces when crafting the hero’s journey of Luke Skywalker. But the borrowing didn’t stop there. The famous opening crawl is virtually identical to those at the beginning of episodes of Flash Gordon Conquers the Universe. Telling the story from the perspective of two lowly characters, the droids R2-D2 and C-3P0, was inspired by Kurosawa’s The Hidden Fortress — something Lucas freely admits.

But while Lucas’s borrowing was permissible under copyright law, other borrowing is not, as current law gives rights holders control over broadly defined “derivative works.” A number of Star Wars fan films have been shuttered or severely limited in their scope (mostly by prohibiting commercialization) due to threats of litigation by Disney. The genre of fan fiction is a legal gray area, with many tests to determine whether it constitutes fair use, including commercialization and how “transformative” the work is. While the vast majority of these works will never amount to much, their existence is more tolerated than established as a clear-cut case of fair use. A more aggressively enforced copyright regime would almost certainly be the end of most fan fiction.

Thankfully, the paper also takes on the "fruits of our labor" view of both copyright and patents and why that doesn't make much sense either.

The idea that people should be able to enjoy the fruits of their labor has clear intuitive appeal, but its invocation as a justification for stopping other people from making use of your ideas without your permission suffers a fatal difficulty: The argument proves far too much. Indeed, the problem goes beyond the widely understood “negative space” of intellectual creations that stand outside of patent and copyright protection: scientific discoveries, fashion, comedy, etc. Given that every new business venture starts with an idea, why shouldn’t every first entrant in a new industry be able to claim a monopoly? Or, for that matter, why not every first entry in a geographic market? If someone has the bright idea that their hometown needs a Thai restaurant and succeeds in making a go of it, why shouldn’t she be able to prevent competitors from coming in to poach her good idea — at least for a couple of decades? On the other hand, given that every new idea is in some way adapted from earlier ideas, why shouldn’t those first entrants in new industries and new markets be seen as “thieves” and “pirates” who are infringing on earlier ideas? Once you really start working through the implications, the whole argument collapses in a hopeless muddle.

The problem is this: The claim that enjoying the fruits of one’s intellectual labor entitles you to stop competitors has no inherent limiting principle, and thus the claim can be extended headlong into absurdity — as indeed it frequently has been. Of course, one can impose limits on the claim, but those limits have to be based on other principles — in particular, some sense of relative costs and benefits. But now we’re doing policy analysis and the case-specific comparison of costs and benefits, at which point the grandiose-sounding claim that patent and copyright law combat injustice shrivels and fades.

The paper then suggests some reforms for both copyright and patent law that seem quite reasonable. On copyright, they suggest reducing terms, requiring registration, limiting infringement to commercial exploitation, expanding fair use, narrowing derivative works, and ending anti-circumvention (a la DMCA 1201). These are all good suggestions, though the "commercial exploitation" one sounds good but is often hard to implement, because what is and what is not "commercial exploitation" can be somewhat gray and fuzzy at times. But the intent here is sound.

On patents, the paper's suggestions are to eliminate both software and business method patents, greatly tighten eligibility requirements, and recognize independent invention as a defense to infringement. To me, as I've argued, the independent invention point is the most important. Indeed, I've argued that we should go further than just saying that independent invention is a defense against infringement. Instead, we should note that independent invention is a sign of obviousness, meaning not only that the second invention isn't infringing, but that the initial patent itself should likely be invalid, as patents are only supposed to be granted if the idea is not obvious to those skilled in the art.

All in all, this is a great and thorough paper, especially for those who really want to insist that copyrights and patents should be treated like traditional property, and position themselves as supporters of "free markets." I fully expect -- as I've experienced in the past -- that those people will not engage seriously with these arguments and will rage and scream about them, but it's still important to make these points.


Posted on Free Speech - 11 September 2019 @ 3:40pm

The NY Times Got It Backwards: Section 230 Helps Limit The Spread Of Hate Speech Online

from the get-it-straight dept

A few weeks back, we wrote about the NY Times' absolutely terrible front-page Business Section headline that, incorrectly, blamed Section 230 for "hate speech" online, only to later have to edit the piece with a correction saying oh, actually, it's the 1st Amendment that allows "hate speech" to exist online. Leaving aside the problematic nature of determining what is, and what is not, hate speech -- and the fact that governments and autocrats around the globe regularly use "hate speech" laws to punish people they don't like (which is often the marginalized and oppressed) -- the claim that Section 230 "enables" hate speech to remain online literally gets the entire law backwards.

In a new piece, Carl Szabo reminds people about the second part of Section 230, which is the part that says websites aren't held liable for their moderation choices in trying to get rid of "offensive" content. Everyone focuses on part (c)(1) of the law, the famous "26 words" that note:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

But section (c)(2) is also important, and part of what makes it possible for companies to clean up the internet:

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

That part was necessary to respond to (and directly overrule) the ruling in Stratton Oakmont v. Prodigy, in which a colorful NY judge ruled that because Prodigy moderated its forums to keep them "family friendly," it was then legally liable for all the content it didn't moderate. The entire point of 230 was to create this balancing carrot and stick, in which companies would have an incentive both to allow third parties to post content and to make their own decisions and experiment with how to moderate.

As Szabo notes, it's this part of (c)(2) that has kept the internet from getting overwhelmed by spam, garbage and hate speech.

Section 230(c)(2) enables Gmail to block spam without being sued by the spammers. It lets Facebook remove hate speech without being sued by the haters. And it allows Twitter to terminate extremist accounts without fear of being hauled into court. Section 230(c)(2) is what separates our mainstream social media platforms from the cesspools at the edge of the web.

[....]

While some vile user content is posted on mainstream websites, what is often unreported is how much of this content is removed. In just six months, Facebook, Twitter, and YouTube took action on 11 million accounts for terrorist or hate speech. They moderated against 55 million accounts for pornographic content. And took action against 15 million accounts to protect children.

All of these actions to moderate harmful content were empowered by Section 230(c)(2).

What isn't mentioned is that, somewhat oddly, the courts have mostly ignored (c)(2). Even in cases where you'd think the actions of various internet platforms are protected under (c)(2), nearly every court notes that (c)(1)'s liability protections also cover the moderation aspect. To me, that's always been a bit weird, and a little unfortunate. It gets people way too focused on (c)(1), without realizing that part of the genius in the law is the way it balances incentives with the combination of (c)(1) and (c)(2).

Either way, for those who keep arguing that Section 230 is why we have too much garbage online, the only proper response is that they're wrong. Section 230 also encourages platforms to clean up the internet. And many take that role quite seriously (sometimes too seriously). But it has resulted in widespread experimentation on content moderation that is powerful and useful. Taking away Section 230's protections, or limiting them, will make it that much more difficult.


Posted on Techdirt - 11 September 2019 @ 11:55am

Yes, News Sites Need To Get Out Of The Ad Surveillance Business -- But Blame The Advertisers As Well

from the takes-two-to-tango dept

Doc Searls has a great recent blog post in which he rightly points out why Bernie Sanders' "plan to save journalism" is completely misguided and will fail. It's worth reading -- with the key point being that Sanders' plan to save journalism assumes a world that does not exist, and one where heavy regulations will somehow magically save journalism, rather than stifle it. As Searls notes, that's not the world we live in. We live in a world of informational abundance, which changes everything:

Journalism as we knew it—scarce and authoritative media resources on print and air—has boundless competition now from, well, everybody.

But there's an interesting point Searls makes later in his piece, suggesting that part of the problem with news today is that the old school news publications have bought into "surveillance" based business models -- and nothing will change until they dump that and move back towards brand advertising:

Meanwhile, the surviving authoritative sources in that mainstream have themselves become fat with opinion while carving away reporters, editors, bureaus and beats. Brand advertising, for a century the most reliable and generous source of funding for good journalism (admittedly, along with some bad), is now mostly self-quarantined to major broadcast media, while the eyeball-spearing “behavioral” kind of advertising rules online, despite attempts by regulators (especially in Europe) to stamp it out. (Because it is in fact totally rude.)

He later says:

I think we’ll start seeing the tide turn when what’s left of responsible ad-funded online publishing cringes in shame at having participated in adtech’s inexcusable surveillance business—and reports on it thoroughly.

And, to some extent, I agree. I've pointed out a few times now that, especially for news publishers, the evidence suggests that there's no real benefit to behavioral advertising that requires sucking up all the data.

But this is not just about the publishers. You may note that we at Techdirt use some tracking in our advertising. Because, if we didn't, we'd have no advertising, and no advertising revenue at all.

Every single time I write about this, I point out that we have eagerly approached tons of advertisers, even those who promote themselves as supporting privacy, and offered what we think is a great freaking deal to do no-tracking, brand advertising on Techdirt -- which we think our users would appreciate. And every single time one of two things happens: we never ever hear back, or we eventually get passed on to some cog in the advertising machine with a spreadsheet who simply can't understand what we're trying to offer, and the whole thing falls apart. We've had multiple long conversations with large companies -- some of whom are "famous" for supporting privacy -- where we point out all the benefits of doing a brand advertising program that doesn't track, and we just get politely brushed off or ignored.

So, yeah, I'd love it if the media -- including us! -- went back to brand advertising that doesn't require surveilling visitors (though lots of you already use adblockers, which is totally cool by us). But, since not a single advertiser seems willing to buy such ads, we're kinda left in the lurch. So, as I do every time, I'll again say: if you have an advertising budget and want to support Techdirt in a way that lets us highlight your support without sucking up data on our community, please contact us. Given our experiences so far, I'm not holding my breath.


Posted on Techdirt - 11 September 2019 @ 9:28am

Hotel Lobbyists Push Forward Their Plan To Kill The Internet Because They Hate Competing Against Airbnb

from the come-on dept

In the midst of this "techlash" atmosphere, it seems that basically every industry whose business models have been upended by competition brought about by the internet is now cynically using the anger directed at successful internet companies as an opportunity to kneecap the wider internet. We've recently pointed out that many of the efforts to undermine Section 230 of the CDA (the law that makes much of the good parts of the internet possible) are actually being pushed by Hollywood out of frustration that they're no longer able to maintain their monopoly rents in a gatekeeper business. Similarly, the big telcos have been using this opportunity to pull a "but look over there!" to point at the big internet companies, while trying to distract from the much greater privacy violations they regularly engage in.

Not to be left out, it appears the hotels are now making a major push to attack the internet, because they're sick of competing against Airbnb. This is no surprise. Two years ago, we wrote about how the hotel industry had mapped out a secret plan (which was leaked to the NY Times) to kneecap Airbnb through bogus litigation and by getting friendly politicians to help them attack the company. Sometimes politicians were more obvious than others about helping the hotel industry out with this plan, like the time that (now disgraced) former NY Attorney General Eric Schneiderman flat out admitted that he was attacking Airbnb to protect local hotels from competition.

As we noted a few weeks ago, a former top hotel exec, Ed Case, got elected to Congress last year. Case was actually on the board of the hotel industry's main lobbying group, the American Hotel & Lodging Association (AHLA). We wrote that he was planning to introduce a bill to undermine Section 230 at the behest of his former employers. On Monday of this week, he did exactly that with a press release that quotes the AHLA (leaving out that until just recently, Case was on the board of that organization (corruption? what corruption?)). The bill, H.R.4232 or the Protecting Local Authority and Neighborhoods Act (PLAN Act), would amend Section 230 to make it clear that it does not apply to Airbnb. Literally, that's the entire point of the law.

Case's explanation of the bill is hilariously misleading:

His “Protecting Local Authority and Neighborhoods” (PLAN) Act would end abusive litigation by Internet-based short-term rental platforms like AirBnb, HomeAway, VRBO, Flipkey, and others attempting to avoid accountability for profiting from illegal rentals.

Note what he's saying? It's about "ending abusive litigation." What?!? The "abusive litigation" is merely Airbnb asking courts to say that Section 230 means that it -- as an internet platform -- shouldn't be liable for the postings of individual users if those users violate local city and state laws (exactly what Section 230 was designed to do). Cities and states are free to pass whatever laws they want regarding short-term rentals. But they shouldn't be able to pin liability on 3rd party platforms just because local officials don't want to bother to enforce their own laws. But Case's description of the law flips all of this on its head, and suggests that Airbnb's attempt to have liability properly applied to those violating the law... is "abusive litigation."

And, of course, he leaves out that courts right now appear to already believe that Section 230 doesn't protect Airbnb in such situations (which I think is an obvious misreading of the law, but...).

The AHLA quote on this is even more ridiculous.

The national organization of the AHLA also released a statement in support. “For too long, these Big Tech short-term rental platforms have been hiding behind this antiquated law in order to bully and threaten legal action against local elected officials who are simply trying to protect their residents from illegal rentals that are destroying neighborhoods and access to affordable housing,” said Chip Rogers, President and CEO of the national American Hotel & Lodging Association. “These Big Tech rental platforms are invoking a loophole in a federal law to snub their noses at local government leaders across the country, while continuing to profit from illegal business transactions.”

Get that? It's an "antiquated law" and they're "invoking a loophole." Neither is accurate. Section 230 is about the proper allocation of liability. You don't blame the tool for what the user does with it -- but that's what many of these laws are designed to do.

Either way, the AHLA is going full-court press on this. And they're not just focused on this Airbnb-specific carve-out. It appears they're going all in on stripping Section 230 protections from any internet service hosting 3rd party content. As part of this, they recently released what can only be described as a push poll to mislead people about Airbnb, the laws around these issues, and Section 230. Each question in the poll is at best actively misleading and at worst completely bullshit.

The poll was designed, not surprisingly, to get people to "vote" for the position the hotel industry wants. So the questions are phrased in a manner such that I'm almost surprised they didn't get 100% of people supporting their campaign.

If Airbnb is making a profit from short-term rentals on its site, Airbnb should ensure the owner renting the property is following the local laws and safety requirements.

If you're not familiar with the nuances here, you're probably going to say you agree with that statement. But it leaves out what that means. How is Airbnb supposed to know whether or not the homeowners are following every local law that impacts them? If the homeowners are violating local laws, isn't that up to local officials to enforce, and not Airbnb? Should DoorDash be required to police whether or not the restaurants that use it for delivery are complying with local health laws? Or, more directly, should AHLA hotels be required to police whether or not any of their customers do anything illegal inside their hotel rooms? Because that's basically what AHLA is asking for here. If a drug deal or prostitution happens inside an AHLA hotel, should that hotel be held legally liable?

Airbnb should be required to remove rental listings from its website that are classified as illegal or banned by local government laws.

Again, that sounds good, but leaves out this: how the fuck does Airbnb know whether or not certain listings are "illegal"? Isn't that, again, up to local officials to enforce? The AHLA ignores all of that and pretends that it's somehow obvious which listings are legal and which are not. So, once again, should AHLA member hotels be required to stop any activities on their premises that are "classified as illegal or banned by local government laws"? I'm sure the AHLA would say that's impossible because they don't know what's happening in those rooms. Well, that's the fucking point. Airbnb doesn't know whether or not these listings do everything to comply with local laws. That's on the people doing the listings and the local enforcement officials.

From there, there are a bunch of questions not targeting Airbnb specifically, but the wider premise of CDA 230:

The Communications Decency Act should be amended to remove potential loopholes, that allow Internet companies to profit off illegal activity on their web sites.

Uh, yikes. Again, this assumes that it's somehow obvious what is "illegal activity." This "amendment" would basically kill the open internet. Because the risk of liability here would be huge. How could any company that allows 3rd party speech know whether any content violates any laws? That's insane, and completely goes against the entire point of Section 230.

The Communications Decency Act should be amended to make it clear that web sites are accountable for removing illegal products or services.

Again, this assumes, incorrectly, that there's some sort of obvious "illegal products or services." These things don't show up with a flag -- just as someone checking into an AHLA member hotel to do a drug deal doesn't wear a giant sign over their head announcing the same.

Honestly, the funniest thing about the push poll is that the question that got the least support was the one that has nothing to do with the internet or Section 230, but rather is just a "hey, don't you just hate Airbnb" type question:

Short-term rentals are depleting housing options for local residents and increasing the cost to rent or own a home. Short-term rental sites should comply with local laws protecting housing for permanent residents.

People tend to like Airbnb. It provides a better service, often at a lower price. Hotels tend to offer pretty bad services at inflated prices. And again, this whole line of questioning completely misrepresents the point. If local officials want to undermine tourism and business travel in their regions -- that's their decision, no matter how short-sighted. But they shouldn't be able to force a company into policing their laws just because local officials are too lazy to do so.


Posted on Techdirt - 10 September 2019 @ 10:56am

Big News: Appeals Court Says CFAA Can't Be Used To Stop Web Scraping

from the this-is-good dept

Two years after a lower court correctly decided that LinkedIn couldn't use the CFAA to stop third parties from scraping their site, the 9th Circuit appeals court has upheld that ruling in a decision that is very important for the future of an open web. For a long time we've talked about how various internet companies -- especially the large ones -- have abused the CFAA to stop competition and interoperability. If you're unaware, the CFAA is basically the US's "anti-hacking" law, which was designed to make it a crime (and a civil infraction) to "break into" someone else's computer. But for years it's been interpreted way too broadly (to the point that it's referred to as "the law that sticks" when trying to get someone for "doing something bad on a computer").

While we have tremendous concerns about criminal CFAA prosecutions, the use of the CFAA in civil contexts by companies trying to block competition is perhaps just as troubling. We've called out Craigslist and, especially, Facebook for abusing the CFAA to stop companies from building on what they've built and providing a better service. To this day, we remain troubled by the 9th Circuit siding with Facebook in declaring the CFAA an okay tool to block a third party from building a better service for Facebook users, and we believe (somewhat strongly) that this particular decision and abuse is part of why Facebook is in the position it's in today, facing no significant competitors. In that decision, the 9th Circuit ruled that because Facebook had sent a cease-and-desist letter to Power, any access after that was now "without authorization" and thus violated the CFAA.

And that's part of what makes this new HiQ v. Linkedin decision, done by the very same court, so fascinating. It seems to go the other way. While Facebook was allowed to use the CFAA to stop Power users from scraping content from Facebook (with permission from the account holder), here, the 9th Circuit has ruled that LinkedIn can't (at this stage) use the CFAA to stop HiQ from scraping its site.

The fact that the results in HiQ and Power came out differently deserves some exploration -- and we can highlight ways in which both decisions are weird and troubling. But from a pure policy standpoint, saying that scraping a site does not violate the law is an undeniably good thing and we should be happy with the overall outcome. Though it has now set up a weird situation where the 9th Circuit seems to disagree with itself, and there's a wider circuit split -- meaning it's possible that the Supreme Court could take up this issue at some point.

In discussing the CFAA, this 9th Circuit panel seems to fully understand the intention of the CFAA: to stop hacking. Not to stop companies from blocking people/companies they dislike:

The 1984 House Report on the CFAA explicitly analogized the conduct prohibited by section 1030 to forced entry: “It is noteworthy that section 1030 deals with an ‘unauthorized access’ concept of computer fraud rather than the mere use of a computer. Thus, the conduct prohibited is analogous to that of ‘breaking and entering’ . . . .’” H.R. Rep. No. 98-894, at 20 (1984); see also id. at 10 (describing the problem of “‘hackers’ who have been able to access (trespass into) both private and public computer systems”). Senator Jeremiah Denton similarly characterized the CFAA as a statute designed to prevent unlawful intrusion into otherwise inaccessible computers, observing that “[t]he bill makes it clear that unauthorized access to a Government computer is a trespass offense, as surely as if the offender had entered a restricted Government compound without proper authorization.”11 132 Cong. Rec. 27639 (1986) (emphasis added). And when considering amendments to the CFAA two years later, the House again linked computer intrusion to breaking and entering. See H.R. Rep. No. 99-612, at 5–6 (1986) (describing “the expanding group of electronic trespassers,” who trespass “just as much as if they broke a window and crawled into a home while the occupants were away”).

In recognizing that the CFAA is best understood as an anti-intrusion statute and not as a “misappropriation statute,” Nosal I, 676 F.3d at 857–58, we rejected the contract-based interpretation of the CFAA’s “without authorization” provision adopted by some of our sister circuits.

That's all good -- and because of that, the court finds that LinkedIn can't claim that scraping their site is a CFAA violation, even after a cease-and-desist. But, it tries to differentiate from the Facebook v. Power decision by saying that one involves a password, and the other does not. So it's the fact that the information being scraped on LinkedIn is public information that changes the calculus here.

We therefore conclude that hiQ has raised a serious question as to whether the reference to access “without authorization” limits the scope of the statutory coverage to computer information for which authorization or access permission, such as password authentication, is generally required. Put differently, the CFAA contemplates the existence of three kinds of computer information: (1) information for which access is open to the general public and permission is not required, (2) information for which authorization is required and has been given, and (3) information for which authorization is required but has not been given (or, in the case of the prohibition on exceeding authorized access, has not been given for the part of the system accessed). Public LinkedIn profiles, available to anyone with an Internet connection, fall into the first category. With regard to such information, the “breaking and entering” analogue invoked so frequently during congressional consideration has no application, and the concept of “without authorization” is inapt.

Neither of the cases LinkedIn principally relies upon is to the contrary. LinkedIn first cites Nosal II, 844 F.3d 1024 (9th Cir. 2016). As we have already stated, Nosal II held that a former employee who used current employees’ login credentials to access company computers and collect confidential information had acted “‘without authorization’ in violation of the CFAA.” Nosal II, 844 F.3d at 1038. The computer information the defendant accessed in Nosal II was thus plainly one which no one could access without authorization.

So too with regard to the system at issue in Power Ventures, 844 F.3d 1058 (9th Cir. 2016), the other precedent upon which LinkedIn relies. In that case, Facebook sued Power Ventures, a social networking website that aggregated social networking information from multiple platforms, for accessing Facebook users’ data and using that data to send mass messages as part of a promotional campaign. Id. at 1062–63. After Facebook sent a cease-and-desist letter, Power Ventures continued to circumvent IP barriers and gain access to password-protected Facebook member profiles. Id. at 1063. We held that after receiving an individualized cease-and-desist letter, Power Ventures had accessed Facebook computers “without authorization” and was therefore liable under the CFAA. Id. at 1067–68. But we specifically recognized that “Facebook has tried to limit and control access to its website” as to the purposes for which Power Ventures sought to use it. Id. at 1063. Indeed, Facebook requires its users to register with a unique username and password, and Power Ventures required that Facebook users provide their Facebook username and password to access their Facebook data on Power Ventures’ platform. Facebook, Inc. v. Power Ventures, Inc., 844 F. Supp. 2d 1025, 1028 (N.D. Cal. 2012). While Power Ventures was gathering user data that was protected by Facebook’s username and password authentication system, the data hiQ was scraping was available to anyone with a web browser.

That last bit... confuses me. Yes, the information that was at stake in the Power case was locked up with password protection, but (and this is the important part) it was the user who owned that password who gave Power permission to access their Facebook data on their behalf. So I have trouble seeing how it's really that different from this HiQ case. This ruling seems to suggest that there's some magical property to a password that doesn't seem supported by the law. In the Power case, the access is still very much "authorized" because the holder of the password is giving it out. But the court tries to dance around this by pretending that the authorization question is different. I don't see how that makes any sense -- even if I'm happy that at least scraping of public info is considered fair game. Still, the panel leans in hard on the password question to distinguish these two cases:

For all these reasons, it appears that the CFAA’s prohibition on accessing a computer “without authorization” is violated when a person circumvents a computer’s generally applicable rules regarding access permissions, such as username and password requirements, to gain access to a computer. It is likely that when a computer network generally permits public access to its data, a user’s accessing that publicly available data will not constitute access without authorization under the CFAA. The data hiQ seeks to access is not owned by LinkedIn and has not been demarcated by LinkedIn as private using such an authorization system. HiQ has therefore raised serious questions about whether LinkedIn may invoke the CFAA to preempt hiQ’s possibly meritorious tortious interference claim.
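For readers who haven't thought about what "scraping" actually involves, here's a rough, hypothetical sketch of the kind of access at issue: fetching a publicly available page exactly as a browser would, with no credential of any kind. The example.com URL and the title-grabbing logic are placeholders of my own, not HiQ's actual pipeline:

import urllib.request
from html.parser import HTMLParser

class TitleGrabber(HTMLParser):
    """Collects the text inside <title> tags from a fetched page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.titles.append(data.strip())

# Fetch a publicly accessible page -- no login, no password, no credentials.
with urllib.request.urlopen("https://example.com/") as resp:
    html = resp.read().decode("utf-8", errors="replace")

parser = TitleGrabber()
parser.feed(html)
print(parser.titles)    # e.g. ['Example Domain']

The court's line, as described above, falls between that kind of request and one that requires presenting a username and password (or someone else's credentials) to get past an authentication gate.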

Orin Kerr -- who probably knows more about the CFAA than anyone else -- has done a deep dive on this ruling as well, which is worth reading. As he notes, part of the weirdness in this case is procedural. HiQ is focused on getting a preliminary injunction stopping LinkedIn from using the CFAA to stop it from scraping the LinkedIn site. That sets the standards a bit lower than they might otherwise be, and means that the ruling is not necessarily the final word on the CFAA in this situation. He also notes that this should be seen as a big win for the open internet, and (in many ways) isolates the Power decision as "an outlier."

I also think this decision renders Power Ventures an outlier. I may be biased, as I thought Power Ventures was wrong. As regular readers may remember, I represented Power Ventures on the petition for rehearing to try to get the panel decision overturned. But Power Ventures seemed to give cease-and-desist letters magical powers given their clarity and notice. It was possible to read Power Ventures broadly as saying that as long as the computer owner sends the cease-and-desist letter, the computer owner's written directive controls the CFAA question—the recipient is sent into Brekka-land where their access rights were withdrawn.

HiQ Labs now places a critical limit on Power Ventures. Under HiQ Labs, the cease-and-desist letter only controls access rights to non-public data. That seems to reduce Power Ventures to a limited application of Nosal II. Under both Nosal II and Power Ventures-as-construed-in-HiQ, once a computer owner tells you to go away, you can't then rely on a current legitimate user's permission to let you back in.

Putting the cases together, the Ninth Circuit law right now seems to go like this. You can scrape a public website, and you can violate terms of service, without violating the CFAA. However, you can only access non-public areas of a computer if you haven't had your access rights canceled before, either through a cease-and-desist letter or through the relationship ending that had granted you access rights.

As Kerr and the 9th Circuit itself note, however, there remains a circuit split between the 9th's hodgepodge of CFAA interpretations and those of other appeals courts. That certainly suggests that this could end up before the Supreme Court at some point.

One other note: I've seen a few lawyers, including those I respect, worry that this decision could actually lead to restrictions on the tools that sites themselves use to block more malicious parties. As Eric Goldman noted in his analysis:

Meanwhile, if server operators can’t restrict who can access their servers, then it will embolden data scavengers–including trolls, malefactors, and governments–who intend to weaponize the data against users.

This is one of the rare cases where I disagree with Goldman's analysis. I don't see how the ruling would lead to such a result. The ruling does suggest that it's tortious interference (this is separate from the CFAA analysis) for LinkedIn to block HiQ, since doing so undermines HiQ's entire business. But I don't see how that same analysis would apply to "trolls, malefactors, and governments." I do find the tortious interference discussion a bit confusing in its own right. While I don't think LinkedIn should be able to use the law to stop HiQ from scraping its site, it seems silly (and of questionable legality) to argue that it can't even use technical measures to block HiQ. But that's what the ruling appears to say:

LinkedIn’s threats to invoke the CFAA and implementation of technical measures selectively to ban hiQ bots could well constitute “intentional acts designed to induce a breach or disruption” of hiQ’s contractual relationships with third parties.

I agree on the use of the CFAA, but disagree on the point about "technical measures." One involves using the power of the government to block perfectly reasonable activity -- but the other is a purely technical question. And I don't see how or why the law should block any site from implementing technical measures to prevent access, even if overall public policy should encourage such access.

All in all, this seems like a mostly good decision, despite this oddity and the tap dance to distinguish it from other rulings. Add in the big circuit splits and you can rest assured that this is nowhere near the last word on this matter.


Posted on Techdirt - 9 September 2019 @ 10:44am

Power Outage For Federal Court Computer System Screws Up Three Months Worth Of Job Applications?!?

from the how-is-that-possible? dept

For years, we've talked about what a total joke the federal courts' PACER system is. That's the computer system the federal courts use for accessing court documents. It acts like it was designed in about 1998 and hasn't been touched since (and even when it was designed, it wasn't designed well). But that's not the only fucked up computer system that the federal courts use. A few years back when I was an expert witness in a federal case, I had to make use of a different US court website just to get paid by the government -- and while it's been a few years, I still remember that it required you to use Internet Explorer. Internet Explorer! It had lots of other issues as well.

By now you may have realized that every computer system in the federal court system seems to be antiquated and poorly designed. And now we've got even more evidence of that. On Friday, the federal court system announced that a "power outage" probably fucked up clerkship and staff attorney applications going back three months.

Law school students and graduates who filed applications for federal court clerkships and staff attorney positions from June 7 to Aug. 31, 2019 using the OSCAR system may have to refile some documents in their applications. Notifications and instructions for refiling will be sent early next week.

Documents filed during that period may have been affected by a major power failure at one of the Judiciary’s service providers. The electrical outage affected the Online System for Clerkship Application and Review (OSCAR), which is used to process clerkship and staff attorney applications. The OSCAR system is back up and running.

Only applications filed during the period June 7 to Aug. 31 were affected. Judges and staff attorney offices that accepted applications through the OSCAR system during this period also are being notified.

Okay, sure, power outages happen. But... how is it possible that three months' worth of applications and documents may be messed up from a single power outage? What sort of backup system are they running over there? I get that federal computer systems are antiquated, but this makes literally no sense at all. At some point in the last two decades, someone should have designed a computer system that doesn't lose documents in the event of a power outage. Or, at the very least, it should have only lost documents that were filed in like the split second prior to the power outage.

Honestly, I'm beginning to wonder if PACER really is "the best" our federal court system can do.

19 Comments | Leave a Comment..

Posted on Techdirt - 6 September 2019 @ 10:47am

FTC's Latest Fine Of YouTube Over COPPA Violations Shows That COPPA And Section 230 Are On A Collision Course

from the this-could-be-an-issue dept

As you probably heard, earlier this week, the FTC fined Google/YouTube for alleged COPPA violations regarding how it collected data on kids. You can read the details of the complaint and proposed settlement (which still needs to be approved by a judge, but that's mostly a formality). For the most part, people responded to this in the same way that they responded to the FTC's big Facebook fine. Basically everyone hates it -- though for potentially different reasons. Most people hate it because they think it's a slap on the wrist, won't stop such practices and just isn't painful enough for YouTube to care. On the flip side, some people hate it because it will force YouTube to change its offerings for no good reason at all and in a manner that might actually lead to more privacy risks and less content for children.

They might all be right. As I wrote about the Facebook fine and other issues related to privacy, almost every attempt to regulate privacy tends to make things worse, in part because people keep misunderstanding how privacy works. Also, most of the "complaints" about how this "isn't enough" are really not complaints directed at the FTC, but at Congress, because the FTC can only do so much under its current mandate.

Separately, since this fine focused on COPPA violations, I'll separately note that COPPA has always been a ridiculous law that makes no real sense -- beyond letting politicians and bureaucrats pretend they're "protecting the children" -- while really creating massive unintended consequences that do nothing to protect children or privacy, and do quite a bit to make the internet a worse place.

But... I'm not even going to rehash all of that today. Feel free to dig into the past links yourselves. What's interesting to me is something specific to this settlement, as noted by former FCC and Senate staffer (and current Princeton professor), Jonathan Mayer: the FTC, in this decision, appears to have significantly changed its interpretation of COPPA, and done so in a manner that is going to set up something of a clash with Section 230. What happened is a little bit subtle, so it requires some background.

The key feature of COPPA -- and the one you're probably aware of whether or not you know it -- is that it has specific rules if a site is targeting children under the age of 13. This is why tons of sites say that you need to be 13 or older to use them (including us) -- in an attempt to avoid dealing with many of the more insane parts of COPPA compliance. Of course, in practice, this just means that many people lie. Indeed, as danah boyd famously wrote nearly a decade ago, COPPA seems to be training parents to help their kids lie online -- which is kinda dumb.

Of course, the key point under COPPA is not actually the "under 13" users, but rather whether or not a website or online service is "directed to children under 13 years of age." Indeed, in talking about it with various lawyers, we've been told that most sites (including our own) shouldn't even worry about COPPA because it's obvious that such sites aren't "directed to children" as a whole and therefore even if a few kids sneak in, they still wouldn't be violating COPPA. In other words, the way the world has mostly interpreted COPPA is that it's not about whether any particular piece or pieces of content are aimed at children -- but whether the larger site itself is aimed at children.

This new FTC settlement agreement changes that.

Basically, the FTC has decided that, under COPPA, it no longer needs to view the service as a whole, but can divide it up into discrete chunks, and determine if any of those chunks are targeted at kids. To be fair, this is well within the law. The text of COPPA clearly says in definitional section (10)(A)(ii) that "a website or online service directed to children" includes "that portion of a commercial website or online service that is targeted to children." It's just that, historically, most of the focus has been on the overall website -- or something that is more distinctly a "portion" rather than an individual user's channel.

Except that, under the law, it seems that it should be the channel operator who is held liable for violations of COPPA under that channel, rather than the larger platform. In fact, back in 2013, the last time the FTC announced rules around COPPA, it appears to have explicitly stated that it would apply COPPA to the specific content provider whose content was directed at children and not to the general platform they used. This text is directly from that FTC rule, which went through years of public review and comment before being agreed upon:

... the Commission never intended the language describing ‘‘on whose behalf’’ to encompass platforms, such as Google Play or the App Store, when such stores merely offer the public access to someone else’s child-directed content. In these instances, the Commission meant the language to cover only those entities that designed and controlled the content...

But that's not what the FTC is doing here. And so it appears that the FTC is changing the definition of things, but without the required comment and rulemaking process. Here, the FTC admits that channels are "operators" but then does a bit of a two-step to say that it's YouTube who is liable.

YouTube hosts numerous channels that are “directed to children” under the COPPA Rule. Pursuant to Section 312.2 of the COPPA Rule, the determination of whether a website or online service is directed to children depends on factors such as the subject matter, visual content, language, and use of animated characters or child-oriented activities and incentives. An assessment of these factors demonstrates that numerous channels on YouTube have content directed to children under the age of 13, including those described below in Paragraphs 29-40. Many of these channels self-identify as being for children as they specifically state, for example in the “About” section of their YouTube channel webpage or in communications with Defendants, that they are intended for children. In addition, many of the channels include other indicia of child-directed content, such as the use of animated characters and/or depictions of children playing with toys and engaging in other child-oriented activities. Moreover, Defendants’ automated system selected content from each of the channels described in Paragraphs 29-40 to appear in YouTube Kids, and in many cases, Defendants manually curated content from these channels to feature on the YouTube Kids home canvas.

Indeed, part of the evidence that the FTC relies on is the fact that YouTube "rates" certain channels for kids.

In addition to marketing YouTube as a top destination for kids, Defendants have a content rating system that categorizes content into age groups and includes categories for children under 13 years old. In order to align with content policies for advertising, Defendants rate all videos uploaded to YouTube, as well as the channels as a whole. Defendants assign each channel and video a rating of Y (generally intended for ages 0-7); G (intended for any age); PG (generally intended for ages 10+); Teen (generally intended for ages 13+); MA (generally intended for ages 16+); and X (generally intended for ages 18+). Defendants assign these ratings through both automated and manual review. Previously, Defendants also used a classification for certain videos shown on YouTube as “Made for Kids.”

That's a key point that the FTC uses to argue that YouTube knows that its site is "directed at" children. But here's the problem with that. Section 230 of the Communications Decency Act -- specifically the often forgotten (or ignored) (c)(2) -- is explicit that no provider shall be held liable for any moderation actions, including "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." One way to do that is... through content labeling policies, such as the ones YouTube used, as described by the FTC.

So here, YouTube is being found partially liable because of its content ratings, which is being shown as evidence that it's covered by COPPA. But, CDA 230 makes it clear that there can't be any such liability from such a rating system.

This won't get challenged in court (here) since Google/YouTube have agreed to settle, but it certainly does present a big potential future battle. And, frankly, given the way that some courts have been willing to twist and bend CDA 230 lately, combined with the general "for the children!" rhetoric, I have very little confidence that CDA 230 would win.

Read More | 16 Comments | Leave a Comment..

Posted on Techdirt - 5 September 2019 @ 3:39pm

Judge Orders White House To Restore Reporter's Press Pass It Illegally Removed

from the well-duh dept

Just a few weeks ago, we wrote about how the White House was clearly setting itself up for another embarrassing failure in court when it removed the press pass of Brian Karem. This wasn't new. The same thing had happened a year ago. And yet, our comments filled up with a lot of nonsense about how we were wrong, that "there is no right to a White House press pass," and a bunch of other silliness.

I'll be curious to hear the response from those same individuals now that a federal judge has ordered the press pass restored.

As the Court will explain below, Karem has, at this early stage of the proceedings, shown that he is likely to succeed on this due process claim, because the present record indicates that Grisham failed to provide fair notice of the fact that a hard pass could be suspended under these circumstances. Meanwhile, Karem has shown that even the temporary suspension of his pass inflicts irreparable harm on his First Amendment rights. The Court therefore grants Karem’s motion for a preliminary injunction and orders that his hard pass be restored while this lawsuit is ongoing.

The court focuses mainly on the 5th Amendment due process claims, noting that those alone suffice to show that Karem is correct here. The judge goes into great detail about how the White House never did anything to suggest special decorum rules for these events, and thus the decision to ban Karem was arbitrary. The White House brought up all sorts of bizarre explanations insisting that it had provided adequate notice to Karem, but the judge points out that's just not true.

First, the letter’s language, taken in its entirety, is ambiguous as to whether the White House even intended to regulate events other than formal press conferences. Indeed, by expressly limiting the scope of the promulgated rules—including the warning about the “suspension or revocation of . . . hard pass[es]”—to formal press conferences, the White House arguably suggested that it was not going to police reporter behavior at other events, unless “unprofessional behavior occur[red]” and it was “forced to reconsider [its] decision” by publishing explicit rules.

Also, whatever "rules" there might have been were way too vague:

The letter refers only to “professional journalistic norms, ” Acosta Letter at 2, which is just as amorphous as the “reasons for security” language that the D.C. Circuit found insufficient in Sherrill, 569 F.2d at 130. Though “professionalism” has a well-known common meaning, it is inherently subjective and context-dependent. Such abstract concepts may at times indicate what is allowed and disallowed at the furthest margins, but they do not clearly define what is forbidden or permitted in common practice within those margins. The vagueness doctrine guards against this danger by ensuring that regulated parties are able to discern, as a practical matter, “what is required of them so they may act accordingly.” Fox, 567 U.S. at 253.

The judge also notes that Karem's lawyers presented plenty of evidence of obviously much worse behavior that did not lead to press pass revocation:

Defendants appear to argue that, even if the meaning of “professionalism” may be debatable in certain instances, Karem’s behavior was clearly unprofessional in this instance. This contention appears to be grounded in the notion that “a plaintiff who ‘engages in some conduct that is clearly proscribed cannot complain of the vagueness of the law as applied to the conduct of others.’”... Again, though, “professionalism” is context-dependent, and Karem has provided some evidence that White House press events are often freewheeling and that aggressive conduct has long been tolerated without punishment. That evidence includes a characterization of the White House press corps as “an unruly mob of reporters.” Ex. C at 5. It includes stories of how journalists have “rudely interrupted” presidents and “berated” press secretaries, Ex. D at 1; have “breach[ed] etiquette” by “heckling” during presidential remarks, Ex. I at 1; and have shouted questions at the conclusion of Rose Garden events, drawing the ire of honored guests in attendance, see Ex. E at 2; Ex. C at 4. The evidence even includes an account of how two reporters once “engaged in a shoving match over positions in the briefing room.” Ex. C at 5. This kind of behavior may have occasionally led the White House to speak with reporters’ employers... but it apparently never resulted in the revocation or suspension of a hard pass.... And, as noted above, the Acosta Letter does not unambiguously signal a departure from that regime. In fact, the letter could reasonably be read to mean that the pre-existing regime would be maintained for the time being.

Defendants, meanwhile, have submitted no evidence in support of their contention that Karem’s conduct was clearly proscribed under the existing “professionalism” policy. They instead rest entirely on Grisham’s August 16 letter and its conclusions that “Karem’s actions, as viewed by a reasonable observer, (1) insulted invited guests of the White House, (2) threatened to escalate a verbal altercation into a physical one to the point that the Secret Service deemed it prudent to intervene, and (3) re-engaged with . . . Gorka in what quickly became a confrontational manner while repeatedly disobeying a White House staffer’s instruction to leave.” Ex. 10 at 8. But in light of the evidence that Karem has presented the first and third conclusions do not seem clearly sanctionable in the context of the White House press corps. And the second conclusion is not supported by the various video recordings of the July 11 incident. No doubt, Karem’s remark that he and Gorka could “go outside and have a long conversation,” id. at 3, was an allusion to a physical altercation, but the videos make clear that it was meant as an irreverent, caustic joke and not as a true threat. And the videos belie the notion that a Secret Service agent had to intervene to prevent a fight: the agent walks right past Karem as the exchange with Gorka is concluding (before returning upon hearing someone call Karem a “punk ass”). See Ex. 63 at 0:30–0:36; Ex. 61 at 0:23–0:27. Rather, Karem and Gorka each had ample opportunity to initiate a physical altercation, and they each made the decision not to.4 Plus, Karem’s interaction with Gorka in the Rose Garden was brief—about twenty seconds, see Ex. 63 at 0:09–0:31—and it came after the President’s remarks had concluded. This event was also one where jocular insults had been flying from all directions.... There is no indication in the record that other offenders were reprimanded, or even told to stop.

The court notes that it need not really get into the 1st Amendment arguments, given the 5th Amendment points raised above, other than to order the immediate return of the press pass, because taking it away creates irreparable harm to Karem's 1st Amendment rights.

It is not merely an abstract, theoretical injury, either. As Sherrill recognized, “where the White House has voluntarily decided to establish press facilities” that are “open to all bona fide Washington-based journalists,” the First Amendment requires “that individual newsmen not be arbitrarily excluded from sources of information.” ... Such exclusion is precisely what Karem is suffering here. His First Amendment interest depends on his ability to freely pursue “journalistically productive conversations” with White House officials.... Yet without his hard pass, he lacks the access to pursue those conversations—even as an eavesdropper. And given that the news is time-sensitive and occurs spontaneously, that lack of access cannot be remedied retrospectively.

The case is not over, but for the time being the White House needs to restore Karem's pass. And I'll be eagerly waiting to see what those who insisted this case would go the other way have to say in our comments.

Read More | 56 Comments | Leave a Comment..

Posted on Techdirt - 5 September 2019 @ 10:47am

Chinese Giant Tencent Is Suing Bloggers Who Criticize The Company For 'Reputational Damage'

from the free-speech dept

It appears that the idea of SLAPP suits has moved to China. The Chinese internet giant Tencent is apparently fed up with its own users criticizing the company on its own WeChat blogging platform, and has sued a bunch of them (possibly paywalled -- here's another link for the story). The details are pretty ridiculous, even recognizing that China doesn't (by a long shot) have a history of protecting free expression. What's incredible here, of course, is that Tencent could have just shut down the accounts of the WeChatters. But, instead it's trying to completely destroy them with these lawsuits.

“It’s very weird,” said Jianfei Yan, who was faced with a Rmb1m ($140,000) defamation lawsuit from Tencent in March after writing an article about the dominance of the “super powerful” WeChat platform and its potential for data breaches. “If Tencent questioned my comments, they could [have stopped] me publishing them on WeChat . . . but they just directly appealed to the court and sued me.”

Tencent declined to comment on the cases. But in a document submitted in May after a court hearing against Jihua Ma, another of the bloggers, it said it opted against deleting the offending articles on WeChat because doing so “would further cause damage to Tencent’s reputation”.

But suing someone and trying to destroy their lives is not going to cause further damage to Tencent's reputation? How does that work? And, honestly, the lawsuits seem to be targeting fairly mild criticism or people reporting potential bugs. But it also seems most targeted at those who are unable to afford to fight back.

Xuyang Sun, the third blogger, was sued by Tencent for Rmb5m earlier this year after he pointed out that the company’s efforts to reduce children’s time spent gaming could be circumvented. “I think they just pick the soft persimmon,” he said, arguing that his critique was milder than similar attacks levelled by the state-owned People’s Daily newspaper.

And, yes, these lawsuits can ruin people's lives. As Martin Chorzempa, from the Peterson Institute, notes in a tweet, because of China's relatively new social credit system (and a lack of personal bankruptcy), losing such a case when you can't afford to pay the sums Tencent is demanding can literally destroy your life.

Of course, now some of us are finding out that Tencent is apparently so thin-skinned and unable to take even mild criticism that it's going to get people much more interested in what it is Tencent is trying to hide. How do you say "Streisand Effect" in Mandarin?

16 Comments | Leave a Comment..

Posted on Free Speech - 5 September 2019 @ 6:47am

Devin Nunes Drops One Ridiculous Lawsuit, Only To File Another One

from the featuring-rico! dept

A month ago we wrote about Devin Nunes' third lawsuit against his critics over their speech, and noted that he was promising in the press that more lawsuits were coming. We noted that the latest lawsuit was slightly odd in that he actually filed it in California, rather than Virginia (as with his first two lawsuits), and in California he could face real anti-SLAPP penalties (i.e., paying the other side's legal fees). Perhaps that's why that lawsuit was not actually filed by Nunes himself, but rather his campaign. If it got tossed out via anti-SLAPP, then suckers who donated to his campaign would foot the bill, rather than Nunes directly. Either way, we'll likely never find out because as suddenly as that case was filed, it's now been dismissed by Nunes. Amusingly, Nunes' lawyer is claiming victory:

“We gathered further evidence which supports the plaintiff’s overriding concerns that dark money is being used to influence our elections,” Kapetan said. “Given the new evidence recently discovered, the Nunes campaign committee voluntarily dismissed the lawsuit and the allegations underlining the lawsuit will be incorporated in a (racketeering) lawsuit filed in Virginia today.”

And that brings us to the new lawsuit that Nunes filed on Wednesday. It's a $10 million racketeering (RICO) case against Fusion GPS and Glenn Simpson along with the Campaign for Accountability. You may recall them from such past news as the infamous "Steele Dossier." Nunes has spent months insisting (incorrectly) that the Steele Dossier is evidence of some massive overreach by US law enforcement or the intelligence community (leaving aside that under the previous administration Nunes was one of the most vocal cheerleaders for increased surveillance powers of the intelligence community) to spy on Americans. Yet when those powers may have been used (well within the law) to do surveillance on his friends, suddenly he's clutching pearls?

Anyway, the new lawsuit is a real piece of work. It's alleging racketeering by Fusion GPS and the Campaign for Accountability. As Ken "Popehat" White famously notes, IT'S NOT RICO, DAMMIT, and this case is no exception.

The Defendants are persons associated in fact (a RICO enterprise) who engage in interstate commerce by the use of one or more instrumentalities, including, but not limited to, the Internet, computers, telephones, mails and facsimile. In 2018, the Defendants received income derived, directly or indirectly, from a pattern of racketeering activity and have used or invested such income, directly or indirectly, in the establishment or operation of an enterprise engaged in, or the activities of which affect, interstate commerce. Through a pattern of racketeering activity, involving acts of wire fraud in violation of Title 18 U.S.C. § 1343 and acts of obstruction of justice in violation of Title 18 U.S.C. §§ 1503(a), 1512(b), 1512(d) and 1513(e), the Defendants have maintained, directly or indirectly, an interest in or control of an enterprise which is engaged in, or the activities of which affect, interstate commerce. While associated with such enterprise, the Defendants conducted or participated, directly or indirectly, in the conduct of such enterprise’s affairs through a pattern of racketeering activity. Between 2018 and the present, the Defendants have engaged in activity that is prohibited by Title 18 U.S.C. §§ 1962(a), 1962(b), and 1962(c).

The Defendants’ ongoing and continuous racketeering activities are part of a joint and systematic effort to intimidate, harass, threaten, influence, interfere with, impede, and ultimately to derail Plaintiff’s congressional investigation into Russian intermeddling in the 2016 U.S. Presidential Election. In furtherance of their conspiracy, the Defendants, acting in concert and with others, filed fraudulent and retaliatory “ethics” complaints against Plaintiff that were solely designed to harass and intimidate Plaintiff, to undermine his Russia investigation, and to protect Simpson, Fusion GPS and others from criminal referrals.

Plaintiff was injured in his business, property and reputation by Defendants’ racketeering activity and tortious misconduct. Plaintiff brings this action (a) to recover money damages for injuries caused by the Defendants’ racketeering activity, (b) for disgorgement of the ill-gotten gains and fruits of Defendants’ unlawful enterprise, (c) to impose reasonable restrictions on Defendants’ future activities, including Defendants’ use of fraudulent opposition research and fraudulent “ethics” complaints to intimidate members of Congress and other law enforcement officers in the performance of their official duties, (d) to enjoin the Defendants from committing wire fraud and from obstructing justice, and (e) to order the dissolution or reorganization of Fusion GPS and the CfA so as to prevent or restrain the Defendants from committing fraud, lying to the Federal Bureau of Investigation (“FBI”), Department of Justice (“DOJ”), Congress and Senate, obstructing justice, and violating Title 18 U.S.C. § 1962 in the future.

The details are basically that Nunes thinks that political research against him and corresponding ethics complaints are racketeering. That is not how any of this works. Indeed, if opposition research and filing ethics complaints is racketeering, basically every politician might be guilty of racketeering.

As noted by Nunes' lawyer, it is true that this new lawsuit includes some overlap with the California lawsuit that was just dropped -- mainly about his anger at some of his critics filing perfectly legal FOIA requests for Nunes' wife's emails (Nunes' real complaint here seems to be with how FOIA works for state employees, but to him it's all RICO):

Upon information and belief, Fusion GPS and/or CfA directed Seeley to make a request under the California Public Records Act (“PRA”) for emails received by Plaintiff’s wife, Elizabeth, an elementary school teacher. Seeley’s request targeting Plaintiff’s wife ended up costing the Tulare County Office of Education thousands of dollars in unnecessary cost and expense. Seeley published Elizabeth Nunes’ emails online and included the names and email addresses of numerous school administrators and teachers, resulting in extensive harassment of these innocent, hard-working citizens of Tulare County, including hateful accusations that they teach bigotry and racism. [https://www.scribd.com/user/399236302/Michael-Seeley]. In fact, the school was so concerned about security problems resulting from this situation that it adopted enhanced security measures.

As we stated when the original lawsuit was filed, you can certainly understand why this is annoying and frustrating, but it's a perfectly legal use of FOIA until the laws are changed.

Like the previous lawsuits, this one has little likelihood of going anywhere. Doing opposition research and filing ethics complaints with Congress is not racketeering in any sense of the law. But, of course, once again we have a lawsuit in Virginia over protected speech activity. It's too bad Virginia has a relatively weak anti-SLAPP law.

Read More | 37 Comments | Leave a Comment..

Posted on Techdirt - 4 September 2019 @ 3:49pm

Three Years Later And The Copyright Office Still Can't Build A Functioning Website For DMCA Agents, But Demands Everyone Re-Register

from the and-pay-up dept

In early 2016, we wrote about an absolutely ridiculous plan by the Copyright Office to -- without any basis in the law -- strip every site of its registered DMCA agent. In case you're not aware, one of the conditions to get the DMCA's Section 512 safe harbors as a platform for user content is that you need to have a "Designated Agent." As per 512(c)(2), it says:

Designated agent.—The limitations on liability established in this subsection apply to a service provider only if the service provider has designated an agent to receive notifications of claimed infringement described in paragraph (3), by making available through its service, including on its website in a location accessible to the public, and by providing to the Copyright Office, substantially the following information:

(A) the name, address, phone number, and electronic mail address of the agent.

(B) other contact information which the Register of Copyrights may deem appropriate.

The Register of Copyrights shall maintain a current directory of agents available to the public for inspection, including through the Internet, and may require payment of a fee by service providers to cover the costs of maintaining the directory.

Note that this says that the Register of Copyrights shall maintain such a list. However, the Copyright Office decided back around 2016 that there were too many "old" registrations in the database, and decided to literally dump every single registration, despite the law not allowing it to do so. It then instituted a new plan that said -- again, without any legal basis -- that every site not only needed to register, but it would need to re-register every three years or it would lose the safe harbor protections, which could expose sites to massive liability.

In late 2016, this plan went into effect, and I detailed the incredibly bad computer system that the Office had put in place to handle such registrations, starting with the fact that the password requirements literally violate the federal government's own rules for passwords. Back in 2016, NIST told government agencies, among other things, to stop requiring random characters, upper and lower case, etc. and to stop expiring passwords for no reason.

Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets. Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.

So we were, well, not surprised back in 2016 that the Copyright Office's system ignored that rule against composition rules, and we highlighted how they stupidly said:

Passwords must have at least 12 characters, with at least one lower case letter, upper case letter, number, and special character "!@#$%^&*()", and must not have any repeated letters, numbers, or special characters.

Not only did this violate NIST's guidelines, it actually made passwords significantly less secure by reducing their randomness.
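To put a rough number on that, here's a quick back-of-the-envelope sketch, assuming the rules as stated: twelve characters drawn only from letters, digits, and the ten approved special characters, at least one of each class, and no character used twice. It compares that constrained keyspace to what a password manager could otherwise draw from; the figures are illustrative, not a formal analysis of the Copyright Office's system.

from itertools import combinations
from math import log2, perm

FULL_CHARSET = 94                  # printable ASCII characters a password manager can use
CLASS_SIZES = [26, 26, 10, 10]     # lowercase, uppercase, digits, approved specials
ALLOWED = sum(CLASS_SIZES)         # = 72 characters the rules accept
LENGTH = 12

def constrained_passwords(n=LENGTH):
    """Count 12-character passwords under the stated rules: all characters
    distinct, at least one from each required class (inclusion-exclusion
    over the classes that could be missing)."""
    total = 0
    for k in range(len(CLASS_SIZES) + 1):
        for missing in combinations(CLASS_SIZES, k):
            pool = ALLOWED - sum(missing)
            total += (-1) ** k * perm(pool, n)
    return total

print(f"unconstrained 12-char keyspace: {log2(FULL_CHARSET ** LENGTH):.1f} bits")
print(f"under the composition rules:    {log2(constrained_passwords()):.1f} bits")

Under those assumptions the rules shave several bits off the keyspace -- a factor in the dozens -- before you even get to the predictable patterns people fall into when forced to satisfy composition rules, which is the behavior NIST's guidance is actually aimed at.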

Anyway, three years have almost passed, and as per the new rules, the Copyright Office is about to kick everyone off again. For no good reason at all. Even better, they sent an email over the Labor Day weekend to alert people that they're at risk of losing their registrations if they don't re-register -- because it's not like people miss random, poorly formatted emails that literally come from "donotreply@loc.gov" when going through emails coming back from a long weekend. Thankfully, I also saw Eric Goldman's blog post about this, though I'm guessing not everyone who owns a website that needs 512 safe harbors protection reads his blog (unfortunately).

Incredibly, it looks like the Copyright Office has done literally nothing to fix the problems of the system. Indeed, it turns out that things are even worse than before. Not only does the system still require "composition rules" that violate NIST's guidelines, it also expired everyone's passwords (which also violates the guidelines).

It actually proved significantly more difficult than expected to create a new password. Like everyone in the world should, I use a password manager to generate and store my passwords. But because of the Copyright Office's dumb rules, none of the passwords my password manager generated would work. I kept getting error message after error message, just telling me the same dumb, pointless, rules over and over again:

Even though it's literally bad practice to make your own passwords, I even tried to "edit" some of the auto-generated passwords to meet the rules, but it still didn't work, though I'm not sure why. One thing I discovered: while it says you have to use a "special character," the list shown in that image is the entire set of allowed special characters. So passwords using other special characters don't work, and the Copyright Office's system doesn't bother to explain why it rejected your password. Special characters like "\>{]" and such don't work, even though there's no reason why they shouldn't, and most password generators will (smartly!) include them. Oh yeah, also: this one stymied me for a really long time. The " mark is not allowed in a password, even though it sorta looks like it's included in that list. But it's not. It's just a pointless set of "quote marks" around the allowed symbols. This is not an intuitive system. It is not user friendly. It is dumb, insecure, and violates NIST's rules -- as it did three years ago when I complained about it.
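For anyone curious what the site seems to be doing, here's my best guess at the validation logic, reconstructed from the error message quoted above and a lot of trial and error. It's a sketch of the behavior I observed, emphatically not the Copyright Office's actual code.

import re

# Per the on-screen rules, these appear to be the ONLY accepted special characters;
# the quote marks shown around them in the error message are not part of the set.
ALLOWED_SPECIALS = set("!@#$%^&*()")

def oscar_password_ok(pw: str) -> bool:
    """Best-guess reconstruction of the observed password rules."""
    if len(pw) < 12:
        return False
    if not (re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw) and re.search(r"[0-9]", pw)):
        return False
    if not any(c in ALLOWED_SPECIALS for c in pw):
        return False
    # Anything outside letters, digits, and the approved specials (\ > { ] ", etc.)
    # appears to get the password rejected, with no explanation of why.
    if any(not c.isalnum() and c not in ALLOWED_SPECIALS for c in pw):
        return False
    # "Must not have any repeated letters, numbers, or special characters" --
    # read here as no character appearing twice anywhere in the password.
    if len(set(pw)) != len(pw):
        return False
    return True

print(oscar_password_ok(r'v]X"3q\Lp{8>wZ'))   # False: typical password-manager output
print(oscar_password_ok("Gx7!mQ2@rT9#bW"))    # True: hand-crafted to fit the rules

And yes, that last "no repeats" check is the one that quietly shrinks the keyspace, as sketched above.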

Then you log in... and the information given to you is sorely lacking. First, at the very top, you get a message saying that the entire website may be offline for three whole days... a month ago. What? What the hell are they doing that they need to take a site offline for three whole days? And if they had to do system upgrades for that long, how the hell have they not made anything actually work right? And, most importantly, if that shutdown happened a month ago, why are they still showing the damn warning message?

From there, you are shown a weird chart with a lot of useless information -- but it is not at all clear how you re-register. There is no indication that you need to re-register. There is just your "service provider name," "registration number," "status," "last updated" and the ever useless "Action" box.

It turns out, to re-register, you have to click that little pencil, which the tooltip tells me is to "Edit." But I'm not "editing" anything. I just want to renew so I still am protected by the DMCA's safe harbors. It then makes me review everything multiple times, before telling me I need to pay $6, and sending me to a sketchy looking payment site (which I get is not run by the Copyright Office itself, but still).

I was almost afraid to give it my credit card.

Either way, eventually it "worked," but in the most fucked up of ways. The website itself is then not exactly clear on whether this renewal adds on to my existing registration -- meaning do I get three more years from the date of my original three-year registration in 2016 (which would be December 1), or does it simply start the clock anew, as of the date I paid? It sure looks like they just started a new three-year clock yesterday -- meaning they cheated me out of 3 months of coverage because I dared to renew promptly. So by being good and renewing in their stupid system nearly 3 months before I need to, they just chop off 3 months of the "service" they're providing me? How the fuck is that allowed? If you look at my original listing -- even though I'd paid up for 3 full years, they now show it as "inactive" and list the new one as "active."

And that's kinda fucked up. The current listing says "Active" for "September 3, 2019 to Present" which almost certainly means this one will expire September 3, 2022, even though it should go until December 1, 2022.

All of this is a complete mess. It's entirely unnecessary, and as Eric Goldman notes in his piece, when the Copyright Office rolled this out it "promised a smooth renewal process." This was anything but smooth -- and it's likely that plenty of sites may miss the fact that they have to do this, or get caught up in trying to get the damn system to work. While, thankfully, this hasn't impacted any sites directly that I'm aware of, it's only a matter of time until a site that thought it had a properly registered DMCA agent finds out it no longer does because the Copyright Office decided to change the entire process, and apparently can't build a freaking website that works or is even up to basic federal website standards.

And, sure, $6 is cheap, but it's still pretty messed up that the Copyright Office simply lopped off three months of service they owed me because their own system is too poorly implemented to know to add on another three years at the end of my existing "subscription." It seems like something that shouldn't happen -- and one hopes that someone at the Copyright Office or the Library of Congress figures their shit out before September of 2022. But I have my doubts.

34 Comments | Leave a Comment..

Posted on Techdirt - 4 September 2019 @ 11:59am

Pinterest's Way Of Dealing With Anti-Vax Nonsense And Scams Is Only Possible Because Of Section 230

from the experimentation-wins dept

A key argument by many who are advocating for getting rid of Section 230 is that various internet platforms need to "take more responsibility" or have some sort of "duty of care" to rid their platforms of malicious content (however that's defined). I even heard one staunch anti-Section 230 advocate complain vocally that internet services "aren't experimenting enough" with policing their platforms. The argument that there's not enough experimentation struck me as quite odd -- because if you look around, there's actually a ton of experimentation going on in platform moderation methods and techniques. And, even weirder, most of this experimentation is only possible because of Section 230.

Take the case of Pinterest. While Facebook, Twitter, YouTube, and Amazon have all struggled with ways to deal with the influx of utter nonsense -- much of which is actively dangerous -- Pinterest earlier this year announced that it was taking a hardline stance against anti-vax nonsense, banning it from the site, as best it could.

Pinterest has responded by building a “blacklist” of “polluted” search terms.

“We are doing our best to remove bad content, but we know that there is bad content that we haven’t gotten to yet,” explained Ifeoma Ozoma, a public policy and social impact manager at Pinterest. “We don’t want to surface that with search terms like ‘cancer cure’ or ‘suicide’. We’re hoping that we can move from breaking the site to surfacing only good content. Until then, this is preferable.”

Pinterest also includes health misinformation images in its “hash bank”, preventing users from re-pinning anti-vaxx memes that have already been reported and taken down. (Hashing is a technology that applies a unique digital identifier to images and videos; it has been more widely used to prevent the spread of child abuse images and terrorist content.)

And the company has banned all pins from certain websites.

“If there’s a website that is dedicated in whole to spreading health misinformation, we don’t want that on our platform, so we can block on the URL level,” Ozoma said.
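Pinterest hasn't published its actual pipeline, but the "hash bank" mechanism described above is simple to sketch. The version below (Python, with illustrative names) uses a plain SHA-256 digest, so it only catches exact copies; real deployments use perceptual hashes that survive resizing and re-encoding, but the flow -- fingerprint the reported image, refuse anything whose fingerprint is already in the bank -- is the same.

import hashlib

# The "hash bank": fingerprints of images that have been reported and taken down.
reported_image_hashes = set()

def fingerprint(image_bytes: bytes) -> str:
    """Apply a unique digital identifier to an image (here, a SHA-256 digest)."""
    return hashlib.sha256(image_bytes).hexdigest()

def take_down_and_bank(image_bytes: bytes) -> None:
    """When moderators remove an image, remember its fingerprint."""
    reported_image_hashes.add(fingerprint(image_bytes))

def repin_allowed(image_bytes: bytes) -> bool:
    """Block re-pins of anything already in the hash bank."""
    return fingerprint(image_bytes) not in reported_image_hashes

# Usage: once a meme has been reported, identical copies can't be re-pinned.
reported_meme = b"raw bytes of a previously reported anti-vax meme"
take_down_and_bank(reported_meme)
print(repin_allowed(reported_meme))          # False
print(repin_allowed(b"some other image"))    # True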

That was at the beginning of the year. And now Pinterest is going a step further: rather than just blocking all of the nonsense, it has decided to fill that void with credible information:

On Wednesday, Pinterest announced a new step in its efforts to combat health misinformation on its platform: users will be able to search for 200 terms related to vaccines, but the results displayed will come from major public health organizations, including the World Health Organization (WHO), Centers for Disease Control, American Academy of Pediatrics (AAP) and Vaccine Safety Net.

The platform will also bar advertisements, recommendations and comments on those pages.

“It was really important for us to make sure that this experience doesn’t allow any misinformation to seep in,” said Ifeoma Ozoma, public policy and social impact manager for Pinterest. “You’re not going to end up in a situation where you click on a trustworthy pin and the recommendations or comments are full of misinformation.”

This is certainly not a panacea. Indeed, it's not even that hard to still find vaccine misinformation on Pinterest if you look hard enough. And, there's certainly a risk of over-blocking in this area as well. For example if someone were countering disinformation about vaccines, it's possible that they could accidentally get swept up in the mess as well. Also, not all information is so obviously bullshit. I've seen anti-vaxxers misrepresent legitimate studies as supporting their position -- so how do you handle legitimate reports that are being misrepresented? It quickly becomes difficult (as we've discussed in many previous posts).

However, the point is that this is how Pinterest has decided to deal with this problem. It's the way this one platform has chosen to approach what it sees as the spread of often dangerous myths about vaccinations (and, just a heads up: don't even think of spreading more anti-vax nonsense in the comments, because you're not just wrong and ignorant, but you're actively harming people).

But the key thing here is that Pinterest is able to experiment this way because Section 230 protects its choices. Section 230 allows platforms like Pinterest to experiment and try different approaches. And that's important for a variety of reasons. More experimentation means more ideas and more tests of what actually works. It also allows for a recognition that every platform is different. The content that is on Pinterest is different than the content on YouTube or Reddit or Amazon or Twitter, and Section 230 lets them craft a unique policy and implement it how they see fit to best deal with their own platform and their own community.

Nearly all of the proposals to chip away at Section 230 would limit or block entirely this kind of experimentation -- meaning we'd all end up significantly worse off in the long run. It's one thing for people to simply demand that platforms take more responsibility -- but when the people making those demands are simultaneously trying to take away the tools that allow the companies to actually experiment with how best to take more responsibility, that's when problems come in.

113 Comments | Leave a Comment..

Posted on Free Speech - 4 September 2019 @ 9:48am

Pro Tip: Don't Send A Completely Bogus Defamation Threat To A Website That Employs A Former ACLU Badass

from the just-a-suggestion dept

If you happen to recognize the name Jamie Lynn Crofts, it may be from the truly amazing amicus brief she filed two years ago in the nutty SLAPP lawsuit that coal boss Bob Murray filed against comedian John Oliver after Oliver did a (very funny) segment about coal and coal jobs that talked a fair bit about Bob Murray. Crofts, at the time working for the ACLU in West Virginia, filed an amicus brief that was truly wonderful to behold, including sections entitled "The Ridiculous Case at Hand" and "Anyone Can Legally Say "Eat Shit, Bob!" and "You Can't Sue People for Being Mean to You, Bob" and "You Can't Get a Court Order Telling the Press How to Cover Stories, Bob."

Anyway, it appears that Jamie has since moved on from the ACLU and is now regularly writing about legal issues for Wonkette, doing a pretty damn good job of it as well, judging from her recent stories. I wish I'd known that before, as I would have followed her coverage much more closely. However, Jamie truly shines when dealing with bullshit censorial threats, and apparently the performance artists known as "Diamond and Silk" decided to send a laughably sketchy "cease and desist" letter to Wonkette over some of its coverage of Diamond and Silk and whatever it is that they do. Jamie's response is entitled In The Matter Of Diamond And Silk's Very Real Lawyer v. Wonkette: Bring It, Sh*thead, which maybe gives you a sense of the spirit of her reply.

Normally, in this space, we'd go through and highlight the absurdity of the threat letter, but, honestly, we can't do half as good a job as Jamie does (we probably couldn't do 20% as good a job). So you should go read the whole thing, but here's a snippet.

They gave us 24 hours to STOP THE BESMIRCHES, lest we FACE THE WRATH of the consummate professional who wrote this letter.

Libelizing and Slanderification!

Let's talk about how the law actually works, here. Here in the US of A, we have this little thing called the First Amendment. And because of it, you don't get to sue people for being mean to you. In fact, making fun of assholes is a proud American tradition, much like obesity and electing white supremacists.

Even private citizens can only sue for false statements of fact that harm their reputation. And for public figures, which Diamond and Silk unfortunately and undeniably are, it's a lot harder. Public figures have to show that any actual false statements were made "with actual malice."

It's a pretty basic thing in American law that you don't get to sue media organizations -- or mommybloggers -- just because you don't like what they have to say. The US Supreme Court has been pretty clear throughout the years that political speech, in particular, receives the most protection. That's because "speech concerning public affairs is more than self-expression; it is the essence of self-government." Garrison v. Louisiana, 379 U.S. 64 74-75 (1964).

What's really fun about the truth requirement is that it means you get to request documents from the other side and argue in court about whether or not the particular statements are, in fact, true. So if Calcite and Burlap actually sued us for this, one of the actual issues would be whether Wonderbitch really does hate them for being so dumb. And they'd have to show that their "reputation," such as it is, was harmed by what she wrote.

Discovery would be LIT.

Not only would we get to explore exactly how Quartz and Cotton-Poly Blend prop up white nationalism, we'd get to ask them why they think our articles are false and what kind of sketchy sources they get their money from.

I swear there's a lot more there and it just gets better and better and better. So go read it. And, yeah, maybe don't send a bogus legal threat letter to a site that employs a former ACLU 1st Amendment lawyer who is famous for filing a brief in court about how it's legal to say "Eat Shit, Bob!"

73 Comments | Leave a Comment..

Posted on Techdirt - 3 September 2019 @ 1:35pm

Just As Attorney General Barr Insists iPhone Users Have Too Much Security, We Learn They Don't Have Nearly Enough

from the well,-look-at-that dept

You may recall a few years back, John Oliver did one of his always excellent Last Week Tonight shows all about encryption. It concluded with an "honest Apple commercial" that highlighted the difficulty of keeping phones secure, and noting that it's a constant war against malicious attackers who are always trying to figure out new ways to break into people's phones:

That commercial is a lot more realistic than people might think. And late last week, Google revealed a pretty astounding iOS exploit that broadly targeted anyone who visited a series of compromised websites, using a combination of zero-day attacks that allowed the attackers to more or less own the iPhone of anyone who had visited the sites. As Wired noted in its piece about this attack, it changes most of what we know about iPhone attacks these days. At the very least, it demolished the idea that most iPhone hacking really only targeted key individuals.

It also represents a deep shift in how the security community thinks about rare zero-day attacks and the economics of "targeted" hacking. The campaign should dispel the notion, writes Google Project Zero researcher Ian Beer, that every iPhone hacking victim is a "million dollar dissident," a nickname given to now-imprisoned UAE human rights activist Ahmed Mansour in 2016 after his iPhone was hacked. Since an iPhone hacking technique was estimated at the time to cost $1 million or more—as much as $2 million today, according to some published prices—attacks against dissidents like Mansour were thought to be expensive, stealthy, and highly focused as a rule.

The iPhone-hacking campaign Google uncovered upends those assumptions. If a hacking operation is brazen enough to indiscriminately hack thousands of phones, iPhone hacking isn't all that expensive, says Cooper Quintin, a security researcher with the Electronic Frontier Foundation's Threat Lab.

"The prevailing wisdom and math has been incorrect," says Quintin, who focuses on state-sponsored hacking that targets activists and journalists. "We've sort of been operating on this framework, that it costs a million dollars to hack the dissident’s iPhone. It actually costs far less than that per dissident if you’re attacking a group. If your target is an entire class of people and you're willing to do a watering hole attack, the per-dissident price can be very cheap."

Now, it's true that device encryption has nothing to do with this attack -- and, in fact, the attack could be seen as a way to get around device encryption, since it was putting malware on your phone that could slurp up your data once you decrypted it locally -- but it does strike me as yet another condemnation of Attorney General William Barr's utter nonsense lately about how the average consumer doesn't need that much phone security these days. If you'll recall, Barr shrugged off concerns about banning real encryption by saying that since all phones have some security vulnerabilities, what's a few more:

All systems fall short of optimality and have some residual risk of vulnerability — a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

The Department of Justice and Barr are wrong. Encryption remains not just a key piece of fighting these vulnerabilities, but one of the most important pieces. Creating "lawful access" points is worse than taking away a protection: it's literally enabling a multitude of new vulnerabilities -- and playing right into the hands of people looking to exploit such vulnerabilities.

Indeed, as the Wired article notes, as surprising and unexpected as the latest vulnerabilities were, it's notable that they appeared to be out there for quite some time, with many, many victims, and no one spotted them even though the attackers were super sloppy:

The hackers still made some strangely amateurish mistakes, Williams points out, making it all the more extraordinary that they operated so long without being detected. The spyware the hackers installed with their zero-day tools didn't use HTTPS encryption, allowing anyone on the same network as a victim to read or intercept the data it stole in transit. And that data was siphoned off to a server whose IP addresses were hardcoded into the malware, making it far easier to locate the group's servers, and harder for them to adapt their infrastructure over time. (Google carefully left those IP addresses out of its report.)

Given the mismatch between crude spyware and highly sophisticated zero-day chains used to plant it, Williams hypothesizes that the hackers may be a government agency that bought the zero day exploits from a contractor, but whose own inexperienced programmers coded the malware left behind on targeted iPhones. "This is someone with a ton of money and horrible tradecraft, because they’re relatively young at this game," Williams says.

And that certainly suggests that there are likely already much more sophisticated attacks out there -- and if not, many more are coming soon. And they will target any and all possible vulnerabilities -- including any "backdoor" the DOJ/FBI demands that device makers install. Contrary to what you may have heard, the debate over backdoors is not a fight between "security and privacy." It's a debate between security for most people, and rare instances where law enforcement doesn't want to do basic detective work and wants everything handed to them.

This latest revelation should now make many people more aware of the security challenges of protecting connected devices. But it should also re-emphasize how utterly ludicrous it would be to purposefully insert new vulnerabilities into phones because the DOJ can't be bothered to do its job properly.

13 Comments | Leave a Comment..

Posted on Free Speech - 3 September 2019 @ 12:05pm

Knight Institute Warns Rep. Ocasio-Cortez That She, Like Trump, Can't Block People On Twitter

from the as-noted dept

Earlier this summer, we wrote about the 2nd Circuit appeals court affirming a district court ruling against Donald Trump, saying that it's a 1st Amendment violation for him to block followers on Twitter. The reasoning in the decisions was a bit nuanced, but the short version is that (1) if you're a public official, and (2) using social media (3) for official purposes (4) to create a space of open dialogue, then you cannot block people from following you based on the views they express. The four conditions do need to be met -- and the lower court at least noted that such public officials can still "mute" people. That is, the officials don't need to listen -- but they cannot limit access to the narrow public space that is created in response to their official social media posts.

Right after that ruling came down, we pointed out that someone had already sued Rep. Alexandria Ocasio-Cortez for blocking people on Twitter as well, and our analysis was that she certainly seemed to be violating the 1st Amendment in the same way as Trump was. Now, the Knight 1st Amendment Institute, which filed the initial lawsuit against Trump, has sent a letter to Ocasio-Cortez making the same point. This is interesting, because when the original lawsuit against AOC was filed and the media requested comment from the Knight Institute, the Institute showed at least some hesitation, saying that it needed to look at all of the details. Now that the details have been explored, it appears that the Knight Institute has come to the same conclusion.

As the letter makes clear, the @AOC account meets all the criteria that the court required to say that blocking is not allowed. Apparently Ocasio-Cortez is trying to argue that the @AOC account is a personal account, and that she has another, more official account. But, as the Knight letter explains, that's not at all accurate:

Based on the facts as we understand them, the @AOC account is a “public forum” within the meaning of the First Amendment. You use the account as an extension of your office—to share information about congressional hearings, to explain policy proposals, to advocate legislation, and to solicit public comment about issues relating to government. Recently, for example, you used the account to discuss new “policy approaches we should consider wrt immigration,” and to ask the public, “[w]hat commissions would you want to see Congress establish?” The account is a digital forum in which you share your thoughts and decisions as a member of Congress, and in which members of the public directly engage with you and with one another about matters of public policy. Since you first took office, the number of users following the @AOC account has reached more than 5.2 million. Many of your tweets staking out positions on issues such as immigration, the environment, and impeachment have made headline news. The @AOC account is important to you as a legislator, to your constituents, and to others who seek to understand and influence your legislative decisions and priorities.

Multiple courts have held that public officials’ social media accounts constitute public forums when they are used in the way that you use the @AOC account, and they have made clear that public officials violate the First Amendment when they block users from these forums on the basis of viewpoint. Most relevant here, the U.S. Court of Appeals for the Second Circuit recently concluded that President Trump violated the First Amendment by blocking users from his Twitter account, @realDonaldTrump, because “he disagree[d] with their speech.” In another recent case, the Fourth Circuit held that the chairperson of a local county board violated the First Amendment by blocking an individual from her Facebook page.

In pending litigation, your attorneys have argued that the @AOC account is not subject to the First Amendment because it is a personal account. As we have explained above, that characterization is incorrect. Further, while we understand that you have another account that is nominally your “official” one, the fact remains that you use the @AOC account as an extension of your office. Notably, the Second Circuit rejected President Trump’s argument that his account is a personal one even though he has other accounts—@POTUS and @WhiteHouse—that are nominally official. The Court wrote, “the First Amendment does not permit a public official who utilizes a social media account for all manner of official purposes to exclude persons from an otherwise-open online dialogue because they expressed views with which the official disagrees.”

So far, AOC has responded to this by saying that she's only blocking 20 accounts and that she's blocking them for "harassment," rather than for viewpoint discrimination.

But, again, as the lower court said in the Trump case, the remedy there should be "muting," not blocking. I get that being a public figure on Twitter is not always fun -- and especially for polarizing political figures (which both Trump and Ocasio-Cortez undoubtedly are). And I get that it must suck to have assholes clog up your feed. I'm barely known and, while I use Twitter's block button sparingly, I have found it useful at times. But I'm not a publicly elected official using my account for official business as an elected politician. The fact that one of the accounts AOC is blocking is a media outlet, even one as ridiculous as The Daily Caller, only highlights the 1st Amendment concerns here.

I've seen some people supporting the case against Trump, but not supporting it against AOC (and I've also seen the reverse). In most cases, though, those opinions seem to be driven mainly by whether or not one feels politically aligned with one or the other politician. And, while I have seen some good faith arguments that "harassment" should be considered something different, there is a very slippery slope there. Put that into Trump's hands and he'll just as quickly claim that everyone who is criticizing him is "harassing" him as well. We have a 1st Amendment for a reason, and politicians on both sides of the traditional aisle should respect that.

Read More | 34 Comments | Leave a Comment..

Posted on Techdirt - 3 September 2019 @ 9:34am

Bedbug Privilege: Bret Stephens Uses His NY Times Column To Suggest Jokingly Comparing Him To A Bedbug Is Prelude To Ethnic Genocide

from the are-bedbugs-snowflakes? dept

It's one thing to trigger a massive Streisand Effect. It's another to keep on making it worse. Bret Stephens is entering new territory here. Last week, we wrote about his bedbug freakout, in which he misread a tweet that basically no one had seen or read, and tried to use his high and mighty position as a "NY Times Columnist" to get a professor fired, by angrily emailing that professor and cc'ing the university's provost. As you'll recall, the professor, David Karpf of George Washington University, had simply cracked a mild joke in response to someone at the NY Times tweeting that there were bedbugs in the NY Times offices: "The bedbugs are a metaphor. The bedbugs are Bret Stephens."

Now, let's pause for a second to note that Stephens appears to have misread this tweet. It is not calling him a bedbug. It's saying that the bedbugs are a metaphor for Bret Stephens. In other words, Karpf is joking that other NY Times staffers want to get rid of Stephens, but are having trouble doing so.

Stephens dug himself a deeper hole the next morning by going on MSNBC and trying to defend his nonsense -- saying he wasn't trying to get Karpf fired, but just wanted his bosses to be aware of how professors at the school acted. That's nonsense and everyone knows it's nonsense. You don't angrily email someone's boss and complain about them hoping for no response whatsoever. Stephens is insulting everyone's intelligence with such a claim. Stephens also claimed that he took such offense to being called a bedbug (remember, he wasn't being called a bedbug) because it was associated with how "totalitarian regimes" act in dehumanizing people. Again, no one believes this. No one read Karpf's joke of a tweet and thought, "man, it's time to send Stephens to the ovens."

Either way, Stephens had a whole week to calm down, and to recognize he totally and completely overreacted. He could even have seen it as a growth moment. Perhaps recognize that many of his columns about how easily people take offense, and how people need thicker skin, were kinda hypocritical, given his own reaction to a very mild criticism. But, nope. Stephens apparently thinks himself too important, and is way too cocky and overly sure of himself, to let such a grave insult pass him by. He seems to think he was really, really onto something with that comparison to totalitarian regimes. And, he's an important NY Times columnist -- so it must be time to write a full column about how the Nazis called Jews bedbugs. He just... needed to find the right quote, and to be too technologically illiterate to realize that when you link to Google Books after doing a search, the link retains your search terms.
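For the curious, this is just how search-generated Google Books links work: the query gets carried along as URL parameters, visible to anyone who inspects or clicks the link. Here's a minimal Python sketch of the idea; the book ID and the q/dq parameter names are placeholder assumptions for illustration, not the actual link from Stephens' column.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Google Books link of the kind produced after a search.
# The book ID and the q/dq parameter names are illustrative assumptions,
# not the real URL from the column.
link = "https://books.google.com/books?id=EXAMPLEID&pg=PA68&dq=jews+as+bedbugs"

# Pull the query string apart into its parameters.
params = parse_qs(urlparse(link).query)

# The search terms ride along in the URL for anyone who clicks through to see.
print(params.get("dq") or params.get("q"))  # ['jews as bedbugs']
```

Strip those parameters out before publishing (or just link to the book's page without searching first) and the query never follows you.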

So, Stephens writes one of his high and mighty NY Times opinion pieces about Nazis "and the Ingredients of Slaughter." He doesn't mention Karpf or his own laughable little freakout. He just (subtly, he must have thought) drops in a reference to Germans referring to bedbugs. And he didn't realize that he'd left the search terms viewable to all.

And here's an even bigger image showing how the search was left in the URL, so that it showed up whenever anyone clicked through.

From that, it's clear that Bret literally went to Google Books, did a search on "Jews as bedbugs," and found a random dissertation that had the following line in it:

“The bedbugs are on fire. The Germans are doing a great job.”

This gets even more troubling if one actually reads the paper that Stephens links to. In his column, Stephens refers to this quote as coming from "a Polish anti-semite." Yet the source itself just says "one man." And, even worse, the source appears to note that a scholar believes the reference to bedbugs was to be taken literally, as the people involved were dealing with an actual infestation of bedbugs -- not making a reference to Jews.

Incredibly -- and literally unbelievably -- the NY Times jumped in to defend Stephens, claiming that it was the editors who added that link.

First of all, what? This makes no sense at all. Or, as Cody Johnston rightly points out, if the NY Times editors were trying to find the right Google Books link to use, why did they do a search for "Jews as bedbugs," rather than for the literal quote that Stephens included in his piece? Second, if it really was the editors who added that link, that makes the whole thing worse, because it suggests that editors reviewed the column and decided, "you know what this needs? Even more evidence that Stephens and the NY Times are all in on using our position of power to stomp out a pesky professor on Twitter who made a mild joke at our expense."

All of this looks really, really bad. And, of course, it looks worse and worse, the more you look at it. As others have noted, Stephens seems to specialize in "telling snowflakes to harden up" and to stop being so easily offended. Indeed, just months ago, he mocked people "who specialize in being offended."

But, again, it gets worse. Karpf initially responded to the latest NY Times-level subtweet with a bit of shock.

But then he correctly noted just how fucked up this whole thing really is.

Indeed, in the couple of days after all of this happened, Karpf ran circles around Stephens, talking to the media and explaining why Stephens' actions are really, really messed up. In the op-ed piece he did for the LA Times, he properly notes that, despite Stephens' laughable claims that he just wanted "civility," it's obvious that Stephens' actions were never about civility:

This was never about civility; it was about power. Bret Stephens cc’d my provost because he wanted to impose a social penalty on me for making jokes about him online. That isn’t a call for polite, civil, rational discourse. It’s an exercise of power. He wanted me and my employer to realize that I had offended an important voice at the paper of record. When powerful people demand civility from those with less power, what they are really saying is that they expect obedience from their lessers.

This NY Times piece (which was written after Karpf wrote that line) is a pretty big piece of evidence there. Stephens thinks he's important. He has a Pulitzer Prize. He's a columnist at the NY Times. He is trying to abuse that position of power to pretend a mild insult directed at him is the equivalent of Hitler's dehumanizing propaganda. This is a mixture of both the Streisand Effect and Godwin's Law... with a bit of Charles Carreon's inability to stop digging thrown in for spice.

Over in Esquire, Karpf further noted just what an example this all is of Stephens abusing his position of power:

Bret Stephens is above me in the status hierarchy. He knows this. I know this. He has won a Pulitzer Prize and has a regular op-ed column in the New York Times. I am just some professor. I’ve written two books, but unless you are professionally involved with digital politics, you probably have never heard of me.

[....]

But what was most striking to me was that he had gone to the effort to CC the provost. Including the Provost clarifies the intent of the message. It means he was not reaching out in an earnest attempt to promote online civil discourse. It means he was trying to send a message that he stands above me in the status hierarchy, and that people like me are not supposed to write mean jokes about people like him online. It was an exercise in wielding power—using the imprimatur of The New York Times to ward off speech that he finds distasteful.

Again, Karpf wrote this before Stephens used the literal pages of the NY Times to imply that Karpf was the equivalent of a Nazi cheering on the death of Jews.

Karpf points out that, while he's relatively immune to Stephens' abuse of power, others are not so fortunate, and not so privileged:

But here’s what still bothers me as this strange episode recedes from the news cycle: Bret Stephens seems to think that his social status should render him immune from criticism from people like me. I think that the rewards of his social status come with an understanding that lesser-known people will say mean things about him online.

Stephens reached out to me in the mistaken belief that I would feel ashamed. He reached out believing my university would chastise me for provoking the ire of a writer at The New York Times. That’s an abuse of his social station. It cost me nothing, but it is an abuse of his power that would carry a real penalty for a younger or less privileged academic. The Times should expect more of its writers. Stephens should expect more of himself.

Indeed, back in the LA Times piece, Karpf lays it out even more clearly:

Part of why this story has gone viral is that it is about so little. The daily news is terrible. The Amazon rainforest is burning, the president retweets white nationalists, the economy looks like it is heading for a recession… By contrast, Bret Stephens, the author of “Free Speech and the Necessity of Discomfort,” couldn’t handle the slightest discomfort when he saw speech about himself online. The stakes are low here, while they are terrifyingly high elsewhere. But it’s worth keeping in mind that these viral media stories are usually much worse for everyone involved. I am a tenured white male professor. I have taken remarkably little online abuse as a result of this episode. If Stephens had directed his message to one of my female colleagues, they would have faced much more online vitriol. I’ve had zero death threats. Many women with a public platform receive a death threat with their daily morning coffee. This particular episode was pretty low-stakes, but we still have a lot of work to do here.

Now that Stephens has taken things to another level by taking what was a mild joke at his expense and turning it into a comparison of the joker to the freaking Nazis, Karpf has again handled things much, much better. His latest piece in Esquire, published after Stephens' column, is also really good at digging into the heart of what happened here:

Twitter jokes from obscure academics are not where the armed violence targeting synagogues is coming from. He ought to read Sarah Jeong’s recent piece, “When the Internet Chases You From Your Home.” It takes an extraordinarily incurious mind to believe, in 2019, that the most vulnerable populations online are moderate Republicans like himself, given what women and people of color who dare to participate in public discourse routinely face.

The greatest irony is how easily this whole episode could have been avoided, or at least prematurely brought to a close. This should have been a goofy one-day story about barely anything at all. On Tuesday morning, Stephens could have simply said “I had a bad night. I shouldn’t have sent that email. I didn’t think the guy would post it to social media. That was embarrassing for me. I apologize, let’s move on.” That would have been the end of things. Barring that, he could have laid low for a week. He could have written a column about anything other than the “Bretbug” dustup. As a professor of strategic political communication, I could have told him that the only way for him to stop losing here is to stop playing.

Instead, Stephens used the largest weapon at his disposal—his New York Times column—to imply that the Jewish professor who mildly teased him online was the equivalent of a Nazi propagandist. (Godwin’s Law, by the way, is meant to describe internet discussion forums, not published columns in the paper of record.)

Oh. And, of course you know it gets worse. Considering that the entire crux of Stephens' column was to suggest that comparing people to insects is setting the stage for genocide, you had to know that people were going to point out that Stephens himself has (you already saw this one coming a million miles away, right?) compared people to insects. Specifically, in a 2013 WSJ column, Stephens compared Palestinians to mosquitoes.

And then even worse. As others quickly discovered, back in 2004, Stephens compared the Palestinians to weeds.

Now, you could argue that in that column, he says he means it metaphorically, but then I'd just need to remind you that the bedbug tweet was also explicitly metaphorical.

So, if you're following along at home, Stephens -- who insists that people are way too easily offended these days, and complains that the kids these days need to suck it up and not get so damn offended -- got ridiculously offended after he misread a very mild joke where his name was the punchline. A joke, I should remind you, that almost no one saw. He then took it upon himself to email the joker, and cc his boss -- whining about the lack of civility in a passive-aggressive manner that seemed obviously designed to use his status to punish the professor. When that whole thing completely blew up in his face, rather than recognizing how all of this went wrong, Stephens doubled down, concocted a ludicrous backstory about how Nazis called people bedbugs (which he had to search for, finding just one example that doesn't even show that they did) and put it into a nonsense NY Times opinion piece whose only real job is to suggest that calling him a bedbug (which Karpf didn't actually do) was a prelude to ethnic genocide... all while forgetting that he, himself, had called Palestinians mosquitoes and weeds.

One would hope this ends here. But I fear that it will not.

50 Comments | Leave a Comment..

Posted on Techdirt - 30 August 2019 @ 10:43am

Josh Hawley Continues To Pretend That Silicon Valley Isn't Innovative

from the nanny-state dept

Josh Hawley pretends to be against big government. He pretends to be against the "nanny state." But since the second he got into power, nearly everything he's proposed has been about increasing government control over industry. But just one industry. The internet/tech industry that he has personally decided doesn't work the way he thinks it should. Beyond trying to get rid of Section 230, Hawley has proposed a bill that literally makes design choices for internet companies. Earlier this year, he introduced another bill that tries to design features for online video sites. He's made it clear that he doesn't like internet sites because his constituents like them too much, which seems odd.

And, just a week after the Wall Street Journal rightly mocked this approach, and explained that his constant refrain that there is no innovation coming out of Silicon Valley anymore is laughable... the very same Wall Street Journal has allowed Hawley to simply repeat his nonsensical claim that there is no innovation coming out of Silicon Valley (likely behind a paywall):

Men landed on the moon 50 years ago, a tremendous feat of American creativity, courage and, not least, technology. The tech discoveries made in the space race powered innovation for decades. But I wonder, 50 years on, what the tech industry is giving America today.

Nice poetic start... by essentially announcing to the world that you're totally ignorant of what's happening in tech and innovation today. That's one strategy.

Innovation in physics—the world of real things—has slowed, and America is losing its manufacturing process edge in key industries. Meanwhile, the landscapes of our cities and towns look about the same as they did half a century ago.

[Citation needed]. It's unclear how you decide that "innovation in physics... has slowed," but one simple point on that is that the easier challenges have been solved, and people are working on much harder stuff. Similarly, it's unclear how you determine that we've "lost our manufacturing process edge." According to whom? And what? The US is still a massive manufacturing country, neck and neck with China. It is true that fewer jobs are in manufacturing, but much of that is because of process innovations. And I'm not sure what the landscapes of our cities and towns have to do with anything at all.

There’s no question that Silicon Valley and the three or four corporate behemoths that dominate it have made it easier to share information. But the modern smartphone, the search engine and the digital social network were invented more than a decade ago. What passes for innovation by Big Tech today isn’t fundamentally new products or new services, but ever more sophisticated exploitation of people.

It's totally fair to note that innovation in search engines, smartphones and social networks has slowed down, but to argue that this is the end of innovation and that the only thing Silicon Valley is working on today is "sophisticated exploitation of people" is laughably ignorant. First of all, the smartphone is really only about a dozen years old, which is still a pretty damn recent innovation, and successful tablets are even younger.

And there's lots of other amazing innovation still happening -- in fact, much of it driven by the revenue successes of those older innovations that Hawley is now mocking: self-driving cars, space exploration, distributed ledger technology, artificial intelligence, health technologies, robotics. Hell, there are even a whole bunch of flying car companies out here these days. You literally have to willfully not look to argue that all anyone is doing out here is working on "exploitation of people."

To monetize older innovations, the dominant platforms employ behavioral scientists to develop interface designs that keep users online as much as possible. Big Tech calls it “engagement.” Another word would be addiction.

There is some fair criticism hidden within the sneering. It is reasonable to wonder if this is the best use of the time of some people who work on things like engagement (which, it should be noted, is fundamentally different from addiction). But if that work is actually about providing a better product that people find more useful, that's a good thing, no? I admit that there may be a fine line between building a better product and using tricks to keep people engaged when they shouldn't be. But it's still more of a line than Hawley is willing to admit here.

By getting their users to spend more time on their platforms, the social-media giants turn the customer into a data source to be sucked dry. Here’s how it works: The more attention users give the platform, the more personal information the platform extracts from them, recording every click, view and preference. Big Tech then converts this information into advertisements, all targeted with increasing precision—which produces even more advertising dollars for Big Tech.

Yes, that's the narrative that some people like to push. But here's the thing: there's no "sucking" anyone "dry." Data is not a finite resource. You don't go dry of data. And, yes, if these systems actually work in giving people more of what they want, then isn't that the market at work? Isn't that what people like Hawley always pretended they supported? And if, as I suspect, all this targeting doesn't work all that well in most use cases, then these efforts will flop and people will learn and move elsewhere.

What “innovation” remains in this space is innovation to keep the treadmill running, longer and faster, drawing more data from users to bombard us with more ads for more stuff.

Again, this ignores nearly every other bit of innovation happening in the Valley today. I hate to spoil this for Hawley -- who really seems to be itching for the job of "Product Manager, all of Silicon Valley" -- but the "engagement" jobs are not the ones that engineers and techies are excited about these days. It's all the stuff in the list above. The exciting jobs are in new areas, built on "moonshots" and exciting new technologies.

But here’s the problem. As we spend more time on that digital treadmill, our real-world relationships atrophy, sometimes to disastrous effect. Teen suicide is up. Twenty-two percent of millennials report that they have no friends. More than a few researchers have noticed a connection.

Everyone's got studies. Just recently we pointed to a pretty comprehensive and rigorous study out of Oxford that found almost no impact of social media on "adolescent life satisfaction." While there may be a clear correlation between social media use and depression/suicide, to say there's a causal relationship seems based on little more than speculation. The Oxford study actually tried to dig in and go beyond the correlation, and found that "social media use is not, in and of itself, a strong predictor of life satisfaction across the adolescent population." That's not to say it doesn't have any impact -- it clearly does. But, in many cases, social media use actually increased "life satisfaction," by giving people others to talk to, often about topics they might not be able to discuss with anyone in their own general location. Focusing narrowly on assumed causation, as Hawley does, would likely mean taking away all of the benefits of social media -- including the ability to connect with others -- in an attempt to weed out whatever negative effects there are.

At the same time, the dominant tech companies’ market concentration is stifling competition that might bring truly new and rewarding innovation. Want to raise money for a venture to challenge Facebook or Google? Good luck. The best pitch for a startup is a pitch for getting purchased by one of the tech giants a few years in. If they won’t buy you, they’ll just copy you.

If this were true, we'd likely see a decline in venture investing. But we don't. It keeps going up and up. And there are lots of entrepreneurs who want to challenge Facebook and Google. Most entrepreneurs talk about wanting to be "the next" Facebook or Google, just like those before them wanted to be the next Microsoft or Yahoo. It's kind of a thing that Silicon Valley specializes in. Indeed, I was just talking to a venture capitalist who is actively looking for startups to challenge Facebook and Google, because he thinks those companies have gotten so big and so cumbersome that they're ripe for disruption.

Americans shouldn’t settle for this stagnation.

What stagnation?

It’s time we demanded more of Big Tech than it demands of us. That's why I’ve proposed banning the “dark patterns” that feed tech addiction. I’ve introduced legislation to provide consumers a legally enforceable right to browse the internet privately, without data tracking. I’ve advocated stepping up privacy safeguards for children and requiring tech companies to moderate content without political bias as a condition of civil immunity. And I’ve advocated more competition to spur real innovation for real people.

These are all fascinating, if misleading, ways to spin his legislative solutions, which amount to little more than kneecapping how tons of internet services work.

It should be no surprise that the tech companies have fiercely resisted these proposals at every turn, often with hysterical claims about breaking the internet or putting the American economy at a disadvantage to China—as if “autoplay” or “infinite scroll” were powering American productivity. If those are the weapons we’ll marshal in an economic battle with Chinese high-tech manufacturing, the war is already lost.

This is an odd one to point out. No one thinks that prohibiting infinite scroll will lead to China taking advantage. But people do worry about taking away Section 230, or imposing other odd restrictions on every internet company -- including startup challengers to the big guys -- opening up the space for Chinese startups (which are already taking market share: see TikTok). But, really, the infinite scroll thing is so odd to highlight because it's one of the most egregious examples of Hawley's nanny-state tendencies: literally telling companies how to design their products.

To the masters of Big Tech, I say: Raise your sights. If you want to be leaders for this country in this century, earn it. Build tools that enrich lives, strengthen society, create good-paying jobs, and improve productive capacity.

Again, if Hawley ever actually bothered to look around, he'd see that all of that is absolutely happening. But Hawley won't do that.

There was a time when innovation meant something grand and technology meant something hopeful, when we dreamed of going to the stars and beyond, of curing diseases and creating new ways to travel and make things. Those are the dreams that fuel the American future. Those are the dreams we need to dream again.

This is bizarre. Literally the hottest companies in Silicon Valley right now are the ones working on "going to the stars and beyond, of curing diseases and creating new ways to travel and make things." Hawley is either deliberately lying about Silicon Valley or so ignorant as to be in no position to comment on it. The WSJ should have stuck with last week's op-ed and rejected this nonsensical one.

61 Comments | Leave a Comment..

More posts from Mike Masnick >>