Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick



Posted on Techdirt - 14 June 2019 @ 7:39pm

The Flipside To Figuring Out What Content To Block: Cloudflare's Project Galileo Focuses On Who It Should Protect

from the case-studies dept

There has been so much discussion lately about the impossibility of doing content moderation well, but it's notable that the vast majority of that discussion focuses on what content to ban or block entirely. I do wish there were more talk about alternatives, some of which already exist (from things like demonetization to refusing to algorithmically promote -- though, for the most part, these solutions just seem to annoy people even more). But there is something of a flipside to this debate, which applies in somewhat rarer circumstances: what content or speakers to specifically protect.

I'm thinking of this, in particular, as Cloudflare has announced the 5th anniversary of its (until now, mostly secretive) Project Galileo offering, in which the company provides free security services to around 600 organizations that are likely targets of well-resourced attackers:

Through the Project, Cloudflare protects—at no cost—nearly 600 organizations around the world engaged in some of the most politically and artistically important work online. Because of their work, these organizations are attacked frequently, often with some of the fiercest cyber attacks we’ve seen.

Since it launched in 2014, we haven't talked about Galileo much externally because we worry that drawing more attention to these organizations may put them at increased risk. Internally, however, it's a source of pride for our whole team and is something we dedicate significant resources to. And, for me personally, many of the moments that mark my most meaningful accomplishments were born from our work protecting Project Galileo recipients.

The promise of Project Galileo is simple: Cloudflare will provide our full set of security services to any politically or artistically important organizations at no cost so long as they are either non-profits or small commercial entities. I'm still on the distribution list that receives an email whenever someone applies to be a Project Galileo participant, and those emails remain the first I open every morning.

At first glance, this might not seem like much of a story at all: an internet company doing something good to protect those at risk doesn't necessarily seem that interesting, especially at a moment when everyone is so focused on attacking every internet company for bringing about all the evils of the world. However, I do think there are some very important lessons to be learned here, and some of them very much apply to the debates about content moderation. In some sense, Project Galileo is like the usual content moderation debates, but in reverse.

I was particularly interested in how Cloudflare chose which organizations to protect, and spoke with the company's CEO, Matthew Prince, last week to get a more in-depth explanation. As he explained, Cloudflare partnered with a wide variety of trustworthy organizations (including EFF, Open Technology Institute, the ACLU, Access Now, CDT, Mozilla, the Committee to Protect Journalists and the Freedom of the Press Foundation, among others) and let those organizations nominate groups that might be at risk; if an organization approached Cloudflare directly about being included in Project Galileo, Cloudflare could run its application by those trusted partners. What started with 15 partner organizations has now nearly doubled to 28.

Of course, such a system likely wouldn't work well in the other direction (figuring out what accounts to ban or otherwise punish) as people would undoubtedly flip out and attack them -- as many did a few years ago when Twitter announced its Trust and Safety Council of partner organizations that it relied on for advice on how it handled its trust and safety questions. Many critics of Twitter and its policies have continued to falsely insist that the organizations in this list are some sort of Star Chamber making decisions on who is allowed to use Twitter and who is not -- so any move to actually have such a system in place would likely be met with resistance.

However, there is something interesting about having a more thorough process involving outside experts, rather than just trusting a company to make these decisions entirely internally. It's obviously somewhat different with Cloudflare, in part because it's providing underlying security services that are less visible to end users than the various social media sites, and also because it's about picking who to "protect" rather than who to block. But it is worth thinking about all of the different challenges of content moderation that go beyond what most people normally talk about.

For what it's worth, this is also quite important as more and more politicians around the globe are gearing up to "regulate" content moderation in one way or another. It's one thing to say that social media sites should be required by law to block certain accounts (or to not block certain accounts), but think about how any of those laws might also apply to services like Project Galileo, and you can see why there should be caution in rushing in with regulatory solutions. The approach taken with something like Project Galileo ought to be entirely different than the process of determining whether or not a platform has decided to remove Nazi propagandists. But it's doubtful that those proposing new regulations are thinking that far ahead, and I worry that some new proposals may sweep up Project Galileo in a manner where it may become more difficult for Cloudflare to continue to run such a program.

Still, in this era when everyone is so focused on the bad stuff online and how to stop it, it's at least worth acknowledging a cool project from Cloudflare to note the good stuff online and how to protect it.


Posted on Techdirt - 14 June 2019 @ 12:21pm

Appeals Court To Rehear Case On 'Stairway To Heaven' Copyright Infringement Questions

from the a-mess,-a-mess,-a-mess dept

Almost exactly three years ago, we were pleasantly surprised to find that a jury unanimously ruled that Led Zeppelin's "Stairway to Heaven" did not infringe on "Taurus," a song by the band Spirit. We noted that, similar to the Blurred Lines case, if you just listen to bits and pieces of each song, you can hear a similarity, but that does not, and should not, mean it was infringing. As we've pointed out, while Stairway and Taurus can sound similar:

... the same is true of Stairway, Taurus... and J.S. Bach's Bouree In E Minor, which you'd better believe is in the public domain:

Given all that, we were disappointed last fall when the 9th Circuit suddenly vacated the jury's decision and ordered a new trial, claiming that the jury instructions in the original were incorrect. However, as copyright lawyer Rick Sanders explained, there were some potential positives to come out of that decision, including that it might give the court a chance to fix the 9th Circuit's insanely ridiculous legal framework for determining whether there is infringement. And there were some very real problems with the original jury instructions.

However, before the case went back for a second trial, that decision was appealed, and now the 9th Circuit has agreed to hear the issue en banc (with an 11-judge panel). It looks like there are a number of potentially important issues that the court will get a chance to dig into when it hears the case this fall. The guy who runs the estate of the guy who wrote "Taurus" wants the court to determine whether or not the specific sheet music deposited with the Copyright Office lays out the full scope of what is covered (under the 1909 Copyright Act, which applied when the song was written), and also suggests that the court needs to consider the "dire consequences" of its decision, "including the seismic disenfranchisement of almost all" musicians of pre-1978 music (which, uh, is quite a bit of hyperbole). Meanwhile, Zeppelin admits that there were some problems with the original jury instructions (though not as many as the other side claims), but says that they wouldn't have made a difference and that the plaintiff "invited and waived" the mistake in the first place.

However, as Rick Sanders noted in his pieces, Zeppelin's lawyers also ask the 9th Circuit to toss out the weird "inverse ratio rule" legal framework that the 9th Circuit uses in determining infringement (to understand that weird rule, go back and read this piece).

Of course, this is the 9th Circuit we're talking about, and it has a way of getting copyright law completely screwed up all too frequently. So while it has a chance to do something good, it could also muck things up, and this particular court is especially good at mucking up copyright law.


Posted on Techdirt - 14 June 2019 @ 9:27am

There Are Lots Of Ways To Punish Big Tech Companies, But Only A Few Will Actually Help Improve The Internet

from the this-is-important dept

There are many reasonable complaints making the rounds these days about the big internet companies, and many questions about what should be done. Unfortunately, too much of the thinking around this can be summarized as "these companies are bad, we should punish them, any punishment therefore is good." This is dangerous thinking. I tend to agree with Benedict Evans, who has noted a similarity between calls to break up the big tech companies and Brexit in the UK.

I've pointed out a few times now that most calls to break up or regulate these companies fail to explain how these plans actually solve the problems people highlight. Companies are callous about our privacy? Will regulating them or breaking them up actually stop that? The GDPR has already proven otherwise. The companies' algorithms recommend bad things? How will breaking them up stop that?

Too much of the thinking seems to be focused on "company bad, must punish" and doesn't get much beyond that.

And that's a pretty big problem, because many of the ideas being passed around could ultimately end up harming the wider internet much more than they damage the few big companies. We've already seen that with the GDPR, which has served to further entrench the giants. The same will almost certainly be true when the EU Copyright Directive goes into effect, since the entire plan is designed to entrench the giants so that the big entertainment companies can negotiate new licensing deals with them.

In a new piece for the Economist, Cory Doctorow warns that many of these attempts to "harm" the big internet companies through regulation will actually do much more harm to the wider internet, while making the biggest companies stronger.

The past 12 months have seen a blizzard of new internet regulations that, ironically, have done more to enshrine Big Tech’s dominance than the decades of lax antitrust enforcement that preceded them. This will have grave consequences for privacy, free expression and safety.

He talks about the GDPR, FOSTA, the EU Copyright Directive and various "terrorist" and "online harms" regulatory proposals -- all of which end up making the big internet companies more powerful in the name of "regulating/punishing" them. It's worth reading Cory's take on all those laws, but he summarizes the key point here:

Creating state-like duties for the big tech platforms imposes short-term pain on their shareholders in exchange for long-term gain. Shaving a few hundred million dollars off a company's quarterly earnings to pay for compliance is a bargain in exchange for a world in which they need not fear a rival growing large enough to compete with them. Google can stop looking over its shoulder for the next company that will do to it what it did to Yahoo, and Facebook can stop watching for someone ready to cast it in the role of MySpace, in the next social media upheaval.

These duties can only be performed by the biggest companies, which all-but forecloses on the possibility of breaking up Big Tech. Once it has been knighted to serve as an arm of the state, Big Tech cannot be cut down to size if it is to perform those duties.

Cory is much more open to the idea of breaking up the big tech companies than I am (we recently debated the topic on the Techdirt podcast), but his point here is valid. So much of the "punish big tech" movement is at odds with other parts of the "punish big tech" movement. That's because there is no coherent strategy here -- it's just "punish big tech."

Cory's article suggests that any move towards antitrust should include mandating interoperability in an effort to build up competition:

One exciting possibility is to create an absolute legal defence for companies that make "interoperable" products that plug into the dominant companies' offerings, from third-party printer ink to unauthorised Facebook readers that slurp up all the messages waiting for you there and filter them to your specifications, not Mark Zuckerberg's. This interoperability defence would have to shield digital toolsmiths from all manner of claims: tortious interference, bypassing copyright locks, patent infringement and, of course, violating terms of service.

Interoperability is a competitive lever that is crying to be used, hard. After all, the problem with YouTube isn't that it makes a lot of interesting videos available—it is that it uses search and suggestion filters that lead viewers into hateful, extreme bubbles. The problem with Facebook isn't that they have made a place where all your friends can be found—it is that it tries to "maximise engagement" by poisoning your interactions with inflammatory or hoax material.

This actually reminds me of another, similar piece, written last month by Josh Constine at TechCrunch, arguing that the FTC should force Facebook to offer "friend portability":

I don’t expect the FTC to launch its own “Fedbook” social network. But what it can do is pave an escape route from Facebook so worthy alternatives become viable options. That’s why the FTC must require Facebook to offer truly interoperable data portability for the social graph.

In other words, the government should pass regulations forcing Facebook to let you export your friend list to other social networks in a privacy-safe way. This would allow you to connect with or follow those people elsewhere so you could leave Facebook without losing touch with your friends. The increased threat of people ditching Facebook for competitors would create a much stronger incentive to protect users and society.

Both Cory and Josh are making an important point: part of the reason why these platforms have become so big and so powerful is that there is real friction in leaving -- not because it's hard to start using another platform, but because if all the other users are still on the old platform, it's meaningless if you alone switch. Of course, history shows us that, over time, many people will migrate to new platforms. You know this if you've been on the internet for the past two decades: remember the 2000s, when you used AIM to communicate with your friends while chatting with folks on MySpace? Over time, folks moved.

But making platforms more open -- forcing "interoperability" -- is certainly one way forward. I'd actually argue it does not go far enough; as I've argued before, an even better solution is not just forced interoperability, but moving to a world of protocols instead of platforms. In such a world, interoperability would be standard, but it would also be just one piece of the puzzle for making the world more dynamic and competitive. If we relied on open protocols, then third parties could build all sorts of new services -- from better front ends to better features and tools -- and users could choose which implementation(s) they wanted to use, making switching away from any particular service provider much easier, especially if that provider did anything to hurt user trust. Interoperability would be one step in that direction, but only one step.
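To make the portability idea a bit more concrete, here is a minimal, purely illustrative sketch of what a service-neutral "export your social graph" step could look like. The format, field names, and the `Contact`/`export_social_graph` helpers are hypothetical -- nothing here reflects an actual Facebook API or any format a regulator has specified.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class Contact:
    # A portable, service-neutral handle (email-style here, purely for
    # illustration) rather than a platform-internal user ID, so that any
    # competing service could resolve it.
    handle: str
    display_name: str


def export_social_graph(owner_handle: str, contacts: list[Contact]) -> str:
    """Serialize a friend list into a simple JSON document that another
    service could import. The schema is invented for this sketch."""
    doc = {
        "version": 1,
        "owner": owner_handle,
        "contacts": [asdict(c) for c in contacts],
    }
    return json.dumps(doc, indent=2)


if __name__ == "__main__":
    friends = [
        Contact(handle="alice@example.social", display_name="Alice"),
        Contact(handle="bob@another.example", display_name="Bob"),
    ]
    print(export_social_graph("carol@example.social", friends))
```

The specific schema doesn't matter much; the point is that once a friend list can leave in some agreed-upon form, it stops being a lock-in mechanism and starts looking more like the protocol-level building block described above.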

It's quite reasonable that people are concerned about big tech these days, but if we're going to have a reasonable solution that doesn't create wider negative consequences for the internet, we should be thinking much more carefully about the various proposals on the table. A simple "punish big tech because big tech is bad" may get people riled up, but the chances for negative consequences are too great to ignore.


Posted on Techdirt - 13 June 2019 @ 7:04pm

Content Moderation Is Impossible: You Can't Expect Moderators To Understand Satire Or Irony

from the just-doesn't-work-that-way dept

The latest in our never-ending series of posts on why content moderation at scale is impossible to do well involves Twitter claiming that a tweet from the account @TheTweetOfGod somehow violates its policies.

If you're unfamiliar with that particular Twitter account, it's a popular account that pretends to tweet pithy statements from "God," attempting (often not very well, in my opinion) to be funny in a sort of ironic, satirical way. I've found it to miss a lot more than it hits, but that's only my personal opinion. Apparently, Twitter's content moderation elves had a problem with the tweet in question. And it's not hard to see why. Somewhere Twitter has a set of rules saying that it's a violation to mock certain classes of people -- including making fun of people for their sexual orientation, which falls under Twitter's rules on "hateful conduct." And it's not difficult to see how a random content moderation employee would skim a tweet like that, fail to recognize the context or the fact that it's an attempt at satire, and flag it as a problem.

Thankfully, in this case, Twitter did correct the mistake upon appeal, but it's just another reminder of how many things tend to trip up content moderators -- especially when they have to moderate a huge amount of content -- and satire and irony are categories that such systems frequently get wrong.


Posted on Techdirt - 13 June 2019 @ 10:39am

Radiohead Responds To Extortionate Hacker By Releasing Hacked Recordings For Charity

from the nicely-done dept

Radiohead has always taken a more thoughtful, less kneejerk approach to the kinds of situations that many others in the recording industry tend to respond to by freaking out. Back in 2007, in the midst of the worldwide freakout over piracy, Radiohead released a surprise album, telling fans they could pay what they wanted to download it (while also selling a more expensive "box set," giving its biggest fans a good reason to pay extra). The band has also been supportive of file sharing and even leaked some of its own tracks via BitTorrent.

So perhaps the following story shouldn't be seen as too much of a surprise (though I imagine it was a surprise to whoever hacked Radiohead frontman Thom Yorke). As noted in that parenthetical, someone apparently hacked Yorke and somehow got access to a set of 18 minidiscs of somewhat random/eclectic material that Yorke had recorded in the 1996/97 timeframe, when the band was working on its seminal OK Computer album. The hackers apparently then asked Yorke/Radiohead for $150,000 not to release the material. The band chose not to give in, and the hackers then leaked the material. However, soon after the leak, the band announced (via Radiohead guitarist Jonny Greenwood's Instagram) that it was now officially "releasing" that material on Bandcamp for £18 (or more) and donating any funds raised to Extinction Rebellion (a climate change advocacy group).

[Instagram post from Radiohead (@radiohead), linking to radiohead.bandcamp.com and rebellion.earth]

Greenwood's writeup -- titled "Walter Sobchak vs Bunny's toe," an amusing nod towards the extortion attempt in The Big Lebowski -- is worth reading:

Subject: Walter Sobchak vs Bunny's toe

We got hacked last week - someone stole Thom's minidisk archive from around the time of OK Computer, and reportedly demanded $150,000 on threat of releasing it.

So instead of complaining - much - or ignoring it, we're releasing all 18 hours on Bandcamp in aid of Extinction Rebellion. Just for the next 18 days. So for £18 you can find out if we should have paid that ransom.

Never intended for public consumption (though some clips did reach the cassette in the OK Computer reissue) it's only tangentially interesting. And very, very long. Not a phone download. Rainy out, isn't it though?

And yes, the file is quite large -- approaching 2GB -- and the band itself says it's not that interesting. But, of course, fans are interested, because that's what fandom is about. In fact, a bunch of fans have already put together a crowdsourced Google Doc with a track listing and annotations of what's in the files (though they also have to explain that they weren't the ones who hacked Yorke's stuff in the first place). The notes themselves are worth a look.

As with all such things, this could easily have turned into the band just complaining about a situation that basically everyone agrees is unfair and unpleasant. However, Radiohead, of all bands, appears to have been quick to turn what is undoubtedly a crappy situation into a positive one that both supports a charity they like and builds tremendous goodwill with fans (while making the hackers look awful).

There's a key point in all of this that is worth noting, and it's one that we've tried to make for years about piracy: we're not arguing that piracy is somehow a good thing. However, we have argued that piracy happens. The question is how you respond to it, and whether or not you can turn it into a positive situation. Too many in the music industry have taken piracy and turned a bad situation into something worse -- pissing off fans, annoying people, and doing damage to their own brand. Radiohead has long realized it's better to do the exact opposite. That's not cheering on the hacking (or piracy), but noting that when a bad thing happens, you might as well figure out how to make the best of it.

Radiohead has done that. I wish many others in that industry would do the same.


Posted on Techdirt - 12 June 2019 @ 8:20pm

Historical Documentation Of Key Section 230 Cases

from the nice-to-see dept

We've been talking a lot lately about the fact that people seem incredibly confused (i.e., mostly wrong) about the history, purpose, and even language of Section 230 of the Communications Decency Act. No matter how many times we try to correct the record, it seems that more people keep getting it wrong. We've talked a few times about Jeff Kosseff's excellent new book called The Twenty-Six Words That Created the Internet, and, as Kosseff explains, part of his reason for putting together that book is that some of the early history around CDA 230 was at risk of disappearing.

And now Kosseff has teamed up with professor Eric Goldman to create an archive of documents related to key Section 230 cases.

As Kosseff notes:

As I noted in the book, many of the filings in the early Section 230 cases (particularly from the pre-PACER days), were particularly hard to track down. In an effort to ensure that these documents are not forever lost, I worked with Professor Eric Goldman of Santa Clara University to create an online archive of many of the filings. Below are some of the key court opinions mentioned in the book, along with some of the important court filings, if available. The files from the earliest cases are largely in paper format. We plan to add these filings once they are scanned; for now, we link to the court opinions.

This is great to see and should prove to be a useful resource, especially about some of the older cases.

Of course, it still won't stop some from misrepresenting the law, but at least having this information available will hopefully lead at least a few more people to understand the actual origins and purpose of the law.


Posted on Techdirt - 12 June 2019 @ 9:28am

NY Times Publishes Laughable Propaganda To Argue Google Owes Newspapers Like Itself Free Money

from the all-the-propaganda-that-fits-to-pad-our-bottom-line dept

Earlier this week, I posted about a silly new organization that claims it's going to "save journalism" mainly by whining about how evil Google and Facebook are. As I noted in that piece, even if you believe Google and Facebook are evil, it's not clear how whining about them creates any new journalism jobs. But the news industry as a whole has been on this weird "blame someone else" kick for way too long. The "News Media Alliance" (formerly the Newspaper Association of America) has been on a weird anti-tech protectionist kick for years now, and on Monday it published a "study" claiming that Google made $4.7 billion from news -- a number that was then trumpeted loudly by the NY Times, which just happens to be one of the larger members of the News Media Alliance.

There's just one tiny problem. The "study" is no study at all and basically everyone in the media business is laughing at the NY Times for publishing such a ridiculously bogus study without highlighting how bogus it was. The $4.7 billion is not based on any careful research. It's based on one off-hand comment from over a decade ago by an exec who hasn't been at Google in years, and then extrapolated forward. Really.

That study relies on a public comment then–Google executive Marissa Mayer made at a media event in 2008, when she estimated that Google News brought in $100 million in revenue. The NMA report calculates what the same proportion of the company’s revenue would be today, then further inflates this figure based on the fact that news consumption via Google’s main search is 6 times larger than via Google News (according to the NMA’s estimate of referral traffic to newspaper websites).

This is not what any sensible person would call a "sound" methodology. Oh, and I almost forgot the kicker:

...the News Media Alliance cautioned that its estimate for Google’s income was conservative
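To see how much weight that single 2008 remark is carrying, here is a rough back-of-the-envelope reconstruction of the extrapolation described above. The revenue figures are approximate, and the NMA's exact inputs may differ, so treat this as illustrative only -- which is rather the point.

```python
# Rough, illustrative reconstruction of the NMA-style extrapolation.
# All inputs are approximate; the NMA's exact figures may differ.

google_news_revenue_2008 = 100e6     # Marissa Mayer's offhand 2008 estimate
google_total_revenue_2008 = 21.8e9   # Google's reported 2008 revenue (approx.)
google_total_revenue_2018 = 136.8e9  # Alphabet's reported 2018 revenue (approx.)

# Step 1: assume Google News makes up the same share of revenue today
# as that one remark implied it did in 2008.
news_share_2008 = google_news_revenue_2008 / google_total_revenue_2008
google_news_revenue_2018 = news_share_2008 * google_total_revenue_2018

# Step 2: inflate by the claim that news consumption via Google's main
# search is roughly six times larger than via Google News.
news_revenue_via_search = 6 * google_news_revenue_2018

total_estimate = google_news_revenue_2018 + news_revenue_via_search
print(f"Implied 'news revenue': ${total_estimate / 1e9:.1f} billion")
# Every step rests on a single decade-old comment plus an assumed
# constant revenue share -- which is why the result isn't worth much.
```

Run the numbers and you land in the same multi-billion-dollar ballpark as the headline figure, which shows just how little is actually underneath it.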

The NY Times also kinda skimmed over the purposeful timing of the study's release. The News Media Alliance has been pushing for some time for a special antitrust exemption to allow big news orgs to collude to try to force more money out of Google and Facebook, and the study was released just a day before a Congressional hearing on the topic. Most normal reporters would recognize that maybe (just maybe) there was an ulterior motive in releasing this "report" with such a flimsy statistic. But the Times reported it as if it were fact.

There may be plenty of reasons to distrust Google and Facebook and their role regarding journalism, but this report isn't one of them. Google doesn't even put ads on most of Google News; instead, it pushes visitors off to the websites of news orgs. If those news orgs are failing to monetize that traffic, it seems pretty ridiculous to blame Google for that and demand more money via collusive efforts.

As Jeff Jarvis notes in his own response to the NYT's piece, if the publishers want to point the blame finger, they might want to start by turning it back on themselves:

The problem has long been that publishers aren’t competent at exploiting the full value of these clicks by creating meaningful and valuable ongoing relationships with the people sent their way. So what does Google do? It tries to help publishers by, for example, starting a subscription service that drives more readers to easily subscribe — and join and contribute — to news sites directly from Google pages. The NMA study cites that subscription service as an example of Google emphasizing news and by implication exploiting publishers. It is the opposite. Google started the subscription service because publishers begged for it — I was in the room when they did — and Google listened. The same goes for most every product change the study lists in which Google emphasizes news more. That helps publishers. The study then uses ridiculously limited data (including, crucially, an offhand and often disputed remark 10 years ago by a then-exec at Google about the conceptual value of news) to make leaps over logic to argue that news is important on its services and thus Google owes news publishers a cut of its revenue (which Google gains by offering publishers’ former customers, advertisers, a better deal; it’s called competition). By this logic, Instagram should be buying cat food for every kitty in the land and Reddit owes a fortune to conspiracy theorists.

The real problem here is news publishers’ dogged refusal to understand how the internet has changed their world, throwing the paradigm they understood into the grinder.

Yes, the world has changed. But the NMA seems to think that the government should now just force the internet companies to hand over money after their own members spent years twiddling their thumbs and squandering any attempt to build up loyal followings and sustainable business models. It's not easy to keep a media business sustainable these days, but so much of it has to do with those companies refusing to recognize how the internet was changing the business, and how to take advantage of those changes.


Posted on Techdirt - 12 June 2019 @ 3:24am

Facebook Tested With Deepfake Of Mark Zuckerberg: Company Leaves It Up

from the as-it-should dept

Over the last few weeks there's been a silly debate over whether or not Facebook made the right call in agreeing to leave up some manipulated videos of House Speaker Nancy Pelosi that were slowed down and/or edited, to make it appear like she was either confused or something less than sober. Some Pelosi-haters tried to push the video as an attack on Pelosi. Facebook (relatively quickly) recognized that the video was manipulated, and stopped it from being more widely promoted via its algorithm -- and also added some "warning" text for anyone who tried to share it. However, many were disappointed that Facebook didn't remove the video entirely, arguing that Facebook was enabling propaganda. Pelosi herself attacked Facebook's decision, and (ridiculously) called the company a "willing enabler" of foreign election meddling. However, there were strong arguments that Facebook did the right thing. Also, it seems worth noting that Fox News played one of the same video clips (without any disclaimer) and somehow Pelosi and others didn't seem to think it deserved the same level of criticism as Facebook.

Either way, Facebook defended its decision and even noted that it would do the same with a manipulated video of Mark Zuckerberg. It didn't take long to put that to the test, as some artists and an advertising agency created a deepfake of Zuckerberg saying a bunch of stuff about controlling everyone's data and secrets and whatnot, and posted it to Facebook-owned Instagram.

And... spoiler alert: Facebook left it up.

“We will treat this content the same way we treat all misinformation on Instagram," a spokesperson for Instagram told Motherboard. "If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

This actually is not a surprise (nor should it be). People keep wanting to infer nefarious intent in various content moderation choices, and we keep explaining why that's almost never the real reason. Mistakes are made constantly, and some of those mistakes look bad. But these companies do have policies in place that they try to follow. Sometimes they're more difficult to follow than other times, and they often involve a lot of judgment calls. But in cases like the Pelosi and Zuckerberg manipulated videos, the policies seem fairly clear: pull them from the automated algorithmic boost, and maybe flag them as misinformation, but allow the content to remain on the site.

So, once again, we end up with a "gotcha" story that isn't.

Of course, now that Pelosi and Zuck have faced the same treatment, perhaps Pelosi could get around to returning Zuckerberg's phone call. Or would that destroy the false narrative that Pelosi and her supporters have cooked up around this story?

Oh, and later on Tuesday, CBS decided to throw a bit of a wrench into this story. You see, the fake Zuckerberg footage is made to look as though it's Zuck appearing on CBS News, and the company demanded the video be taken down as a violation of its trademark:

Perhaps complicating the situation for Facebook and Instagram is a call late Tuesday from CBS for the company to remove the video. The clip of Zuckerberg used to make the deepfake was taken from an online CBS News broadcast. "CBS has requested that Facebook take down this fake, unauthorized use of the CBSN trademark," a CBS spokesperson told CNN Business.

Of course, if Facebook gives in to CBS over this request, it will inevitably (stupidly) be used by some to argue that Facebook used a different standard for disinformation about its own exec, when the reality would just be a very different kind of claim (trademark infringement, rather than just propaganda). Hopefully, Facebook doesn't cave to CBS and points out to the company the rather obvious fair use arguments for why this is not infringing.


Posted on Techdirt - 11 June 2019 @ 12:22pm

Appeals Court Issues Strong CDA 230 Ruling, But It Will Be Misleadingly Quoted By Those Misrepresenting CDA 230

from the mostly-good,-but-a-bit-of-bad dept

Last Friday, the DC Circuit appeals court issued a mostly good and mostly straightforward ruling applying Section 230 of the Communications Decency Act (CDA 230) in a perfectly expected way. However, the case is notable on a few grounds: partly because it clarifies a few key aspects of CDA 230 (which is good), and partly because of some sloppy language that is almost certainly going to be misquoted and misrepresented by those who (incorrectly) keep insisting that CDA 230 requires "neutrality" by the platform in order to retain the protections of the law.

Let's just start by highlighting that there is no "neutrality" rule in CDA 230 -- and (importantly) the opposite is actually true. Not only does the law not require neutrality, it explicitly states that its goal is for there to be more content moderation. The law explicitly notes that it is designed:

to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material

In short, the law was designed to encourage moderation, which, by definition, cannot be "neutral."

Now, onto the case. It involved a bunch of locksmiths who claim that "scam locksmiths" are pretending to be local in areas where they are not, and that the various search engines (Google, Microsoft and Yahoo are the defendants here) are putting those fake locksmiths in their search results, meaning that the real locksmiths have to spend extra on advertising to get noticed above the scam locksmiths.

You might think that if these legit locksmiths have been wronged by anyone, it's the scam locksmiths, but because everyone wants to blame platforms, they've sued the platforms -- claiming antitrust violations, false advertising, and a conspiracy in restraint of trade. The lower court, and now the appeals court, easily finds that Section 230 protects the search engines from these claims, as the content from the scam locksmiths is from the scam locksmiths, and not from the platforms.

The attempt to get around CDA 230 mainly involves focusing on one aspect of how the local search services work: most of the services try to take the address of a local business and place a "pin" on a map showing where the business is physically located. The locksmiths argue that creating this map and pin involves content that the search engines create, and therefore it is not immune under CDA 230. The appeals court says... that's not how it works, since the information is clearly "derived" directly from the third parties:

The first question we must address is whether the defendants’ translation of information that comes from the scam locksmiths’ webpages -- in particular, exact street addresses -- into map pinpoints takes the defendants beyond the scope of § 230 immunity. In considering this question, it is helpful to begin with the simple scenario in which a search engine receives GPS data from a user’s device and converts that information into a map pinpoint showing the user’s geographic location. The decision to present this third-party data in a particular format -- a map -- does not constitute the “creation” or “development” of information for purposes of § 230(f)(3). The underlying information is entirely provided by the third party, and the choice of presentation does not itself convert the search engine into an information content provider. Indeed, were the display of this kind of information not immunized, nothing would be: every representation by a search engine of another party’s information requires the translation of a digital transmission into textual or pictorial form. Although the plaintiffs resisted this conclusion in their briefs, see Locksmiths’ Reply Br. 3 (declaring that the “location of the inquiring consumer . . . is determined entirely by the search engines”), they acknowledged at oral argument that a search engine has immunity if all it does is translate a user’s geolocation into map form, see Recording of Oral Arg. at 12:07-12:10.

With this concession, it is difficult to draw any principled distinction between that translation and the translation of exact street addresses from scam-locksmith websites into map pinpoints. At oral argument, the plaintiffs could offer no distinction, and we see none. In both instances, data is collected from a third party and re-presented in a different format. At best, the plaintiffs suggested that a line could be drawn between the placement of “good” and “bad” locksmith information onto the defendants’ maps. See id. at 12:43-12:58 (accepting that, “to the extent that the search engine simply depicts the exact information they obtained from the good locksmith and the consumer on a map, that appears to be covered by the [Act]”). But that line is untenable because, as discussed above, Congress has immunized the re-publication of even false information.

That's a nice, clean ruling on what should be an obvious point, and having such clean language could be useful for citations in future cases. It is notable (and useful) that the court clearly states: "Congress has immunized the re-publication of even false information." Other courts have made this clear, but having it in such a compact, quotable form is certainly handy.

There are a few other attempts to get around CDA 230, and they all fail -- including the argument that because the "false advertising" claim is brought under the Lanham Act (which is often associated with trademark law), and CDA 230 explicitly excludes "intellectual property" law, the claim should escape CDA 230. But being brought under the Lanham Act doesn't magically make the false advertising claims "intellectual property," nor does it exclude them from CDA 230's protections.

But, as noted up top, there is something in the ruling that could be problematic going forward concerning the still very incorrect argument that CDA 230 requires the platforms be "neutral." The locksmiths' lawyers argued that even if the above case (of putting a pin on a map) didn't make the search engines "content creators," perhaps they were content creators when they effectively made up the location. In short: when these (and some other) local search engines don't know the actual exact location of a business, they might put in what is effectively a guesstimate, usually placing it in a central location of an expected range. As the court explains:

The plaintiffs describe a situation in which the defendants create a map pinpoint based on a scam locksmith’s website that says the locksmith “provides service in the Washington, D.C. metropolitan area” and “lists a phone number with a ‘202’ area code.” Locksmiths’ Br. 8; see also Locksmiths’ Reply Br. 4-5. According to the plaintiffs, the defendants’ search engines use this information to “arbitrarily” assign a map location within the geographic scope indicated by the third party.

Legally, that does represent a slightly different question -- and (if you squint) you can kinda see how someone could maybe, possibly, argue that if the local search engines take that generalized info and create a pin that appears specific to end users, they have somehow "created" that content. But the court (correctly, in my opinion) says "nope": since that pin is still derived from the information provided by a third party, Section 230 protects it. This is good and right.

The problem is that the court went a bit overboard with the word "neutral" in describing this, using it in a very different way than most people mean when they say "neutral" (and in a different way than previous court rulings -- including those cited in this case -- have used it):

We conclude that these translations are also protected. First, as the plaintiffs do not dispute, the location of the map pinpoint is derived from scam-locksmith information: its location is constrained by the underlying third-party information. In this sense, the defendants are publishing “information provided by another information content provider.” Cf. Kimzey v. Yelp!, Inc., 836 F.3d 1263, 1270 (9th Cir. 2016) (holding that Yelp’s star rating system, which is based on receiving customer service ratings from third parties and “reduc[ing] this information into a single, aggregate metric” of one to five stars could not be “anything other than user-generated data”). It is true that the location algorithm is not completely constrained, but that is merely a consequence of a website design that portrays all search results pictorially, with the maximum precision possible from third-party content of varying precision. Cf. Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1125 (9th Cir. 2003) (“Without standardized, easily encoded answers, [Matchmaker.com] might not be able to offer these services and certainly not to the same degree.”).

Second, and also key, the defendants’ translation of third-party information into map pinpoints does not convert them into “information content providers” because defendants use a neutral algorithm to make that translation. We have previously held that “a website does not create or develop content when it merely provides a neutral means by which third parties can post information of their own independent choosing online.” Klayman, 753 F.3d at 1358; accord Bennett, 882 F.3d at 1167; see Kimzey, 836 F.3d at 1270 (holding that Yelp’s “star-rating system is best characterized as the kind of neutral tool[] operating on voluntary inputs that . . . [does] not amount to content development or creation” (internal quotation marks omitted) (citing Klayman, 753 F.3d at 1358)). And the Sixth Circuit has held that the “automated editorial act[s]” of search engines are generally immunized under the Act. O’Kroley v. Fastcase, Inc., 831 F.3d 352, 355 (6th Cir. 2016).

Here, the defendants use automated algorithms to convert third-party indicia of location into pictorial form. See supra note 4. Those algorithms are “neutral means” that do not distinguish between legitimate and scam locksmiths in the translation process. The plaintiffs’ amended complaint effectively acknowledges that the defendants’ algorithms operate in this fashion: it alleges that the words and numbers the scam locksmiths use to give the appearance of locality have “tricked Google” into placing the pinpoints in the geographic regions that the scam locksmiths desire. Am. Compl. ¶ 61B. To recognize that Google has been “tricked” is to acknowledge that its algorithm neutrally translates both legitimate and scam information in the same manner. Because the defendants employ a “neutral means” and an “automated editorial act” to convert third-party location and area-code information into map pinpoints, those pinpoints come within the protection of § 230.

See all those "neutral means" lines? What the court means by "neutral" is really automated -- not designed to check the truth or falsity of the information. It does not mean "unbiased," because any algorithm that is making decisions is inherently and absolutely "biased" towards trying to choose what it feels is the "best" solution -- in this case, it is "biased" towards approximating where to put the pin.
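As a minimal sketch of what that sort of "neutral means" amounts to in practice -- one automated rule, applied identically to every listing, with no attempt to judge whether the underlying claim is true -- consider a hypothetical pin-placement fallback like the one below. The function, data format, and region centroids are all invented for illustration; this is not how any actual search engine's geocoder is documented to work.

```python
# Hypothetical sketch of a "neutral" pin-placement rule: the same automated
# fallback is applied to every listing, legitimate or scam, with no attempt
# to verify whether the claimed location information is truthful.

# Invented, illustrative centroids for a couple of claimed regions.
REGION_CENTROIDS = {
    "washington-dc-metro": (38.9072, -77.0369),
    "area-code-202": (38.9072, -77.0369),
}


def place_pin(listing: dict) -> tuple[float, float] | None:
    """Return (lat, lon) for a business listing.

    If the listing supplies exact coordinates, use them; otherwise fall back
    to the centroid of whatever region the listing claims to serve. The rule
    never asks whether the claim is accurate -- that uniformity is all the
    court's "neutral means" language is describing.
    """
    if "lat" in listing and "lon" in listing:
        return (listing["lat"], listing["lon"])
    return REGION_CENTROIDS.get(listing.get("claimed_region"))


# The same code path handles a genuinely local locksmith and a scam listing
# that merely claims to serve "the Washington, D.C. metropolitan area."
print(place_pin({"lat": 38.8951, "lon": -77.0364}))
print(place_pin({"claimed_region": "washington-dc-metro"}))
```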

The court is not, in any way, saying that a platform need be "neutral" in how it applies moderation choices, but I would bet a fair bit of money that many of the trolls (and potentially grandstanding politicians) will use this part of the court ruling to pretend 230 does require "neutrality." The only thing I'm not sure about is how quickly this line will be cited in a bogus lawsuit, but I wouldn't expect it to take very long.

For what it's worth, after I finished writing this, I saw that professor Eric Goldman had also written up his analysis of the case, which is pretty similar, and includes a few other key points as well, but also expects the "neutral means" line to be abused:

I can see Sen. Cruz seizing on an opinion like this to say that neutrality indeed is a prerequisite of Section 230. That would be less of a lie than his current claim that Section 230 only protects “neutral public forums,” but only slightly. The “neutrality” required in this case relates only to the balance between legal and illegal content. Still, even when defined narrowly and precisely like in the Daniel v. Armslist case, the term “neutrality” is far more likely to mislead than help judges. In contrast, the opinion cites the O’Kroley case for the proposition that Section 230 protects “automated editorial act[s],” and that phrasing (though still odd) is much better than the term “neutrality.”

The overall ruling is good and clear -- with just this one regrettable bit of language.


Posted on Techdirt - 11 June 2019 @ 9:32am

If You Think The Reason Internet Companies Snarf Up Your Data Is Because Their Services Are Free, Allow Me To Introduce You To The Telcos

from the free-is-not-the-problem dept

It's been a few years since this kind of argument has come up, but it's one that we've had to swat down a few times in the past: the argument that if a company offers a service for free, it will inevitably snarf up all your data, and that requiring services to be paid for directly by users would somehow fix that. This is easy to debunk in multiple directions, and yet it still pops up here and there.

The latest comes from Christopher Mims, the technology columnist for the Wall Street Journal (whose work I usually enjoy). His (possibly paywalled) piece is called "Why Free Is Too High a Price for Facebook and Google," with the subhead reading: "Most of the ills traced to these companies are a direct consequence of their no-cost business models." Here's the crux of the argument:

In fact, most of the ills traced to these companies are a direct consequence of their “free” business models, which compel them to suck up our personal data and prioritize user growth over the health and privacy of individuals and society, all so they can sell more advertisements. They make money from the attention and in some cases the hard work—all those status updates, videos and likes are also a kind of uncompensated labor, if you think about it—of their most devoted users.

Of course, it seems rather easy to point out why that's wrong with two examples. First, we pay for other services, such as our broadband and mobile data providers, and they are so much worse on the privacy front that it's not even remotely comparable. It's not as if paying for the service has magically stopped AT&T or Verizon from being horrific on privacy. The snarfing up of data doesn't go away when you pay for services.

Second, there are businesses that have been built on giving away free tools without having to snarf up your data. Indeed, that's actually how Google succeeded for much of its early history. It didn't need to know everything about you. It just needed to know what you were searching for. And that was massively successful. It's true that, over time, Google has moved away from that, but others (like DuckDuckGo) have stepped into that space as well. DuckDuckGo is free, and I don't see Christopher Mims concerned that they are magically "compelled to suck up our personal data."

As the saying goes, correlation is not causation. Just because Google and Facebook offer both free services and collect a lot of data, does not in any way mean that one automatically leads to the other (or that removing the "free" part lessens the data sucking).

Much of the rest of the piece is quite a thoughtful look at questions around big tech and antitrust -- which I appreciate -- but then it jumps back into these inevitable claims about "free," including the positively bizarre suggestion that free services undermine democracy. Seriously:

When an online service must be paid for solely through advertising, the company’s overriding incentive is to increase engagement with it: Users see and click on more ads. This drives all sorts of unexpected outcomes. Owing to its engagement-maximizing algorithms, Facebook appears to bear, by its own admission, some responsibility for a genocide in Myanmar.

Other well-documented ills that may have been exacerbated by Facebook include the erosion of global democracy, the resurgence of preventable childhood diseases and what the company itself acknowledges may be wide-ranging deleterious effects on the mental health of millions.

On YouTube, Google’s engagement-maximizing algorithm has been recommending material that denies the Holocaust, Sandy Hook and other tragedies, as well as white-supremacist content and other forms of hate speech, a policy the company on Wednesday pledged to redress. Over the years, YouTube has been criticized for other practices, from driving viewers to the internet’s darkest corners to pushing questionable content on children. Meanwhile, the globally dominant Google search engine has had a hard time avoiding accusations of bias in its results.

Except, once again, this seems like a correlation/causation error. Indeed, there's a strong argument that these "wide-ranging deleterious effects" on mental health or democracy are the kinds of things that create long-term problems for these companies (which does, indeed, appear to be the case). Any company that takes a longer-term view would recognize that if its platform is optimized in a manner that creates serious problems for the world, that can't be good for long-term business, and it can and should correct course. And that's exactly what these companies have been struggling to do over the last couple of years. It's got nothing to do with their services being "free."

There are plenty of reasonable questions to ask about the power and position of Google and Facebook. And there are reasonable debates to be had about antitrust and the impact of these services on the globe. But tying "free" into the equation without any evidence that that's the problem (and, in fact, with tons of evidence to suggest "free" has nothing to do with any of it) doesn't seem particularly productive.


Posted on Techdirt - 10 June 2019 @ 3:24pm

Mathew Higbee Cuts And Runs When Finally Challenged On A Questionable Shakedown

from the do-his-clients-recognize-their-own-liability? dept

Last month, we wrote about a declaratory judgment lawsuit that had been filed against a client of Mathew Higbee. As we've discussed at length, Higbee runs "Higbee & Associates," one of the more active copyright trolling operations around these days, frequently sending threatening shakedown-style letters to people and then having various "paralegals" demand insane sums of money. In some cases, it does appear that Higbee turns up actual infringement (though, even in those cases, the amount he demands seems disconnected from anything resembling a reasonable fee). But, in way too many cases, the claims are highly questionable. The lawsuit mentioned last month represented just one of those cases -- involving a threat against a forum because one of its users had deep-linked a photographer's own uploaded image into the forum. There were many reasons why the threat was bogus, but, as per the Higbee operation's MO, they kept demanding payment and dismissing any arguments for why the use was not infringing (and, relatedly, why the threat was aimed at the wrong target).

Paul Levy and Public Citizen filed for a declaratory judgment that the use was non-infringing and, in the process, publicly pondered whether Higbee had warned his various clients that they might end up in court as a result of his aggressive tactics. Apparently, photographer Quang-Tuan Luong was not particularly happy about ending up in court, and Higbee and his client quickly agreed to cut and run, despite Higbee's insistence that he was ready to take the matter to court.

I gave Higbee a chance to withdraw his client’s claims; however, Higbee had previously told me that my arguments about non-liability for infringement in an identical case were “delusional,” so we decided to give Higbee a chance to explain to a judge in what way these defenses were delusional, that is, in response to an action for a declaratory judgment.

I confess that, in filing that lawsuit, I wondered whether Higbee had ever warned Luong that he would not necessarily get to make the final decision whether his demand would end up in litigation, in that the very aggressiveness of Higbee’s demand letters, coupled with persistent nagging from paralegals to offer a settlement or face immediate litigation, sets up his clients to be sued for a declaratory judgment of non-infringement. That speculation proved prescient, because Higbee’s immediate response to the lawsuit was to offer to have his client covenant not to sue Schlossberg for infringement. Higbee also told me that he had offered to defend Luong against the declaratory judgment action for free.  It appears, however, that even such a generous offer was not enough to hold onto Luong as a copyright infringement claimant in this case. A settlement agreement has been signed; because there is no longer a case or controversy, the lawsuit has now been dismissed. 

Levy makes it clear, however, that he's actively looking for other such cases to challenge in court in response to Higbee's overaggressive demands:

Since that blog post, I have got wind of several other situations in which Higbee has claimed large amounts of damages against forum hosts.  We are considering which ones would make the best test cases.  

My last blog post about Higbee mentioned another case in which he had made a demand against the host of a forum about United States elections, where a user had posted a deep link to a photograph by another of Higbee’s stable of clients, Michael Grecco. Higbee has sued on Grecco’s behalf on a number of occasions, and Higbee told me that, unlike Luong, Grecco was a true believer who was looking for opportunities to pursue Higbee’s copyright theories in litigation.  Higbee said that he was going to be talking to Grecco to confirm that he wanted to litigate against the election forum. I could not help suspecting at the time that Higbee was blowing smoke to show what a tough guy he is.  That was a month ago, and yet so far as I can tell, Higbee has not yet got around to talking to his client about the subject. I have to wonder just who it is that wants to litigate Higbee’s legal theories.

Indeed, I have asked Higbee whether he warns his clients generally that they can be sued for a declaratory judgment of non-infringement even if they have never given Higbee authority to go to court on their behalf. He told me that he is too busy to address my questions.

He also notes that another such declaratory judgment filing has been made against the very same Michael Grecco:

That case involves another demand letter from Higbee, this time to an indigent young man named Lee Golden who lives in Brooklyn with his parents and blogs about action movies.  Because Golden included a Grecco photograph of Xena the Warrior Princess, Higbee sent his typical aggressive  demand letter, setting $25,000 as the required payment to avoid being sued. Golden responded with a plaintive email, apologizing profusely, saying that he had no idea about copyright issues, that he had taken down the photo...own, returning to its demand for $25,000 and threatening to seek $30,000 or even $150,000 if the case had to be litigated. Higbee even sent a draft infringement complaint, threatening to make Golden defend himself in the Central District of California even though many of Higbee’s actual lawsuits are filed in the jurisdiction where the alleged infringer lives, perhaps because Higbee wants to avoid having to litigate personal jurisdiction.

But Golden’s counsel likely did not know this, so Strupinsky and his partner Joshua Lurie have filed suit on Golden's behalf in the Eastern District of New York, seeking a declaratory judgment of non-infringement. We will see how anxious Michael Grecco is to litigate this case.

We see this again and again with copyright trolling operations. They often promise potential clients that this is a "no risk" way to make money: just sign up, they'll scour the internet, and you'll sit back and receive the payments. Indeed, Higbee's site suggests just that:

Let a national copyright law firm take care of all of your copyright enforcement needs— from reverse image search to collecting payment. You pay nothing up front. We only get paid when you get paid. Best of all, by using us for reverse image search you will be eliminating the middle man and nearly doubling your profit.

His site also claims that he'll go to court for you "assuming you want us to" -- leaving out the risk of a declaratory judgment filing (and associated embarrassment for trying to shake down non-profits and personal websites of people with no money).


Posted on Techdirt - 10 June 2019 @ 11:57am

Whining About Big Tech Doesn't Protect Journalism

from the not-how-it-works dept

I've been as frustrated as anyone by the fact that the internet advertising business models have not filtered down to news publishers, because it does seem like a real lost opportunity. However, it's kind of weird to see a couple of laid off journalists announce a project to "protect" journalism that seems to consist entirely of whining about big tech. It's literally called the "Save Journalism Project" but they have no plans to actually "save journalism."

Their new project will be set up as a nonprofit, according to Eddie Vale, a Democratic consultant whose firm is helping launch the effort. Vale pitched Bassett on the idea, and the two of them brought in Stanton. Vale said initial funding had been secured from “someone who doesn’t want to be public so Google and Facebook don’t go after them,” and the group plans to continue to fundraise. So far, the pair have coauthored testimony given to the Senate Judiciary Committee highlighting the tech giants’ impact on the news industry — “since being laid off, we’ve made it our mission to understand how the digital marketplace works and how Big Tech is killing the journalism industry,” they wrote — flown a plane above Google’s I/O conference, and authored op-eds.

Wow. Flying a plane over a Google conference. That'll save journalism.

At the moment, Stanton and Bassett are more focused on warning the public and the industry about the issue than on proposing solutions.

“I do think that everyone is starting to see a need to break up and regulate these companies or something along those lines,” Bassett said. “And with regards to how they’re going to make journalism viable again, I don’t frankly know...I think right now we’re starting with just getting this conversation out into the public and making people aware of exactly what’s going on. I do hope at some point we graduate into saying, ‘here’s a list of policy proposals, here’s exactly what needs to happen.’”

Even if you believe the (debatable) claim that Google and Facebook are somehow to blame for the decline in advertising revenue to news sites, I'm left scratching my head over what good complaining about big tech actually does. Also, "break up and regulate or something along those lines"? Again, I think there's a valuable discussion to be had about how best to help fund journalism. It is pretty damn key to my own livelihood. But this organization isn't set up to have an "open" discussion. It has already decided what the problem is ("big tech") without being able to support that argument, declared that "something must be done," and so far that "something" amounts to whining about big tech.

I fail to see how that's productive. There are lots of smart, thoughtful people who have put a lot of work into a variety of arguments about how to deal with the big internet companies. Some of them I agree with and some of them I don't, but I'm at a near total loss as to how merely whining does anything at all to save journalism.


Posted on Free Speech - 10 June 2019 @ 9:45am

Republicans Blame CDA 230 For Letting Platforms Censor Too Much; Democrats Blame CDA 230 For Platforms Not Censoring Enough

from the which-is-it? dept

It certainly appears that politicians on both sides of the political aisle have decided that if they can agree on one thing, it's that social media companies are bad, and that they're bad because of Section 230, and that needs to change. The problem, of course, is that beyond that point of agreement, they actually disagree entirely on the reasons why. On the Republican side, you have people like Rep. Louie Gohmert and Senator Ted Cruz, who are upset about platforms using Section 230's protections to moderate content that those platforms find objectionable. Cruz and Gohmert want to amend CDA 230 to say that's not allowed.

Meanwhile, on the Democratic side, we've seen Nancy Pelosi attack CDA 230, incorrectly saying that it's somehow a "gift" to the tech industry because it allows them not to moderate content. Pelosi's big complaint is that the platforms aren't censoring enough, and she blames 230 for that, while the Republicans are saying the platforms are censoring too much -- and incredibly, both are saying this is the fault of CDA 230.

Now another powerful Democrat, Rep. Frank Pallone, the chair of the House Energy and Commerce Committee (which has some level of "oversight" over the internet), has sided with Pelosi in attacking CDA 230, arguing that companies are using it "as a shield" to avoid removing things like the doctored video of Pelosi.

But, of course, the contrasting (and contradictory) positions of these grandstanding politicians on both sides of the aisle should -- by themselves -- demonstrate why mucking with Section 230 is so dangerous. The whole point and value of Section 230 was in how it crafted the incentive structure. Again, it's important to read both parts of subsection (c) of Section 230, because the two elements work together to deal with both of the issues described above.

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

It's these two elements together that make Section 230 so powerful. The first says that we don't blame the platform for any of the actions/content posted by users. This should be fairly straightforward. It's about the proper application of liability to the party who actually violated the law, and not to the tools and services they used to violate it. Some people want to change this, but much of that push is coming from lawyers who just want bigger pockets to sue. It leads to what I've referred to as "Steve Dallas lawsuits," after the character in the classic comic strip Bloom County, who explains why you should always focus on suing those with the deepest pockets, no matter how tangential they are to the actual lawbreaking.

But part (2) of the law is also important. It's the part that actually allows platforms the ability to moderate. Section 230 was an explicit response to the ruling in Stratton Oakmont v. Prodigy, in which a NY state judge ruled that because Prodigy wanted to provide a "family friendly" service, and moderated out content it found objectionable in support of that goal, it automatically became liable for any of the content that was left up. But, of course, that's crazy. The end result of such a rule would be either that platforms wouldn't do anything to moderate content -- meaning everything would be a total free-for-all, you couldn't have a "family friendly" forum at all, and everything would quickly fill up with spam/porn/harassment/abuse/etc. -- or that platforms would restrict almost everything, creating a totally anodyne and boring existence.

The genius of Section 230 is that it enabled a balance that allowed for experimentation, including the ability to experiment with different forms of moderation. Everyone focuses on Facebook, YouTube and Twitter -- which all take moderately different approaches -- but having Section 230 is also what allowed for the radically different approaches taken by other sites, like Wikipedia and Reddit (and even us at Techdirt). These sites use very different approaches, some of which work better than others, and much of what works is community-dependent. It's that experimentation that is good.

But the very fact that both sides of the political aisle are attacking CDA 230 for completely opposite reasons really should highlight why messing with it would be such a disaster. If Congress moves the law in the direction that Gohmert/Cruz want, then you'd likely get many fewer platforms, and some would just be overrun by messes, while others would be locked down and barely usable. If Congress moves the law in the direction that Pelosi/Pallone seem to want, then you would end up with effectively the same result: much greater censorship as companies try to avoid liability.

Neither solution is a good one, and neither would truly satisfy the critics in the first place. That's part of the reason why this debate is so silly. Everyone's mad at these platforms for how they moderate, but what they're really mad at is humanity. Sometimes people say mean and awful things. Or they spread disinformation. Or defamation. And those are real concerns. But there need to be better ways of dealing with them than Congress stepping in (despite the restrictions the 1st Amendment places on it) and saying that the internet platforms themselves either must police humanity... or need to stop policing humanity altogether. Neither is a solution to the problems of humanity.


Posted on Techdirt - 7 June 2019 @ 1:36pm

Colorado's Governor Jared Polis Signs Strong Anti-SLAPP Law And Blocks Damaging Licensing Restrictions

from the keep-it-up,-gov dept

When Jared Polis was in Congress, he was one of the (tragically few) reliably good, principled voices on topics that were important to us here at Techdirt: copyright, patents, encryption and more. Now that he's governor of Colorado, it appears he continues to do good things. First up, he's signed an excellent new anti-SLAPP law modeled on California's gold-standard anti-SLAPP law. As we've discussed at length over the years, anti-SLAPP laws are a key tool in protecting free speech. They do this in two key ways: enabling a court to toss out bogus lawsuits designed to silence critics very quickly (before they get too involved), and (importantly) making it much easier to force the plaintiffs in such cases to pay the legal expenses of the defendants they sued. These laws are now in place in about half of the states, and they've been incredibly useful in deterring lawsuits that have no merit but are filed entirely to burden defendants with costs and the general chilling effect of being dragged to court.

Colorado joins nearly 30 states that have adopted measures to curb what are called strategic lawsuits against public participation. Witnesses testified during the legislative session about how they’d been sued for libel or slander simply for exercising their First Amendment rights.

The new law allows a citizen to seek an immediate stay of such a lawsuit by arguing it’s motivated by the citizen’s exercise of First Amendment rights. A higher court can order immediate dismissal of the lawsuit, and plaintiffs can be held liable for court costs and attorneys’ fees.

Democratic Reps. Lisa Cutter and Shannon Bird and Sen. Michael Foote sponsored the bill, which was modeled after a longstanding California statute that is considered one of the nation’s toughest.

Polis has also acted on another issue we've talked about: ridiculous occupational licensing laws that go way beyond any "public safety" rationale and serve mainly to block out competition and limit the competitiveness of markets. He has vetoed a bill that would have expanded occupational licensing in Colorado. The bill was pushed by members of his own party, so it's good to see Polis push back on it. His veto statement is worth reading.

Before any unregulated occupation is to be regulated, or any regulated occupation is to be continued, the state should complete its due diligence to ensure that regulation will, in fact, ensure consumer safety in a cost-efficient manner. This bill does not meet that threshold.

As we have previously noted, occupational licensing is not always superior to other forms of consumer protection. Too often it is used to protect existing professional within an occupation against competition from newcomers entering that occupation. Meanwhile, according to the 2019 Current Population Survey, 24 percent of the national workforce is licensed, up from below five percent in the 1950s. Licensing in the United States over the years has at times prevented minorities and the economically disadvantaged from having the ability to access occupations. When the supply of professionals is restricted, the cost of services increases and the poorest among us lose the ability to access these services.

There's a lot more in the statement, but that's the crux of it.

Kudos to Governor Polis. Keep it up.


Posted on Techdirt - 7 June 2019 @ 10:44am

The Impossibility Of Content Moderation: YouTube's New Ban On Nazis Hits Reporter Who Documents Extremism, Professor Teaching About Hitler

from the so-that's-all-working-well dept

So just as the recent big content moderation mess was happening on YouTube, the company announced that it had changed its policies to better deal with violent extremism and supremacism on the platform:

Today, we're taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, which is inherently discriminatory. Finally, we will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.

The timing of this announcement was curious (or, at the very least, poor), as it came basically hours after YouTube had refused to take down Steven Crowder's account (see the earlier post linked above) -- not an identical situation, though analogous enough that tons of people commented on it.

In making the announcement, YouTube correctly noted that this new bit of line drawing could create some problems, including for those tracking hate and extremism:

We recognize some of this content has value to researchers and NGOs looking to understand hate in order to combat it, and we are exploring options to make it available to them in the future. And as always, context matters, so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events. We will begin enforcing this updated policy today; however, it will take time for our systems to fully ramp up and we’ll be gradually expanding coverage over the next several months.

But within hours of the new policy rolling out, we were already seeing how difficult it is to implement without taking down content that probably deserves to remain up. Ford Fischer, a reporter who tracks extremist and hate groups, and whose work is regularly cited, noted that his own channel had been demonetized.

Fischer then discusses the specific videos that YouTube says are the reason for this -- and they do include Holocaust denialism, but for the sake of documenting it, not promoting it.

And this gets, once again, to the very problem of expecting platforms to police this kind of speech. The exact same content can mean very different things in different contexts. In some cases, it may be used to promote odious ideology. In other cases, it's used to document and expose that ideology and the ignorance and problems associated with it.

But how do you craft a policy that can distinguish one from the other? As YouTube is discovering (truth is, they probably already knew this), the answer is that you don't. Any policy ends up creating some sort of collateral damage, and the demands of well-meaning people tend to push things toward greater and greater takedowns. But if, in the process, we end up sweeping the documentation under the rug, that's a problem as well.

Here's another example: right after YouTube's new policy was put in place, a history teacher found that his own YouTube channel was banned. Why? Because he hosted archival footage of Hitler:

“My stomach fell,” Allsop told BuzzFeed News via email. “I’m a history teacher, not someone who promotes hatred. I share archive footage and study materials to help students learn about the past.”

Once again, it often sounds easy to say something like "well, let's ban the Nazis." I'd even argue it's a reasonable goal for a platform to have a blanket "no Nazis" policy. But the reality is that the implementation is not nearly as easy as many people believe. And the end result can be that archival and documentary footage gets blocked. And that could have serious long term consequences if part of our goal is to educate people about why Nazis are bad.

Of course, none of this should come as a surprise to anyone who's been dealing with these issues over the past couple of decades. Early attempts to ban "porn" also took down information on breast cancer. Attempts to block "terrorist content" have repeatedly taken down people documenting war crimes. This kind of thing happens over and over and over again and believing that this time will magically be different is a fool's errand.


Posted on Techdirt - 7 June 2019 @ 6:36am

The Impossibility Of Content Moderation Plays Out, Once Again, On YouTube

from the no-one-will-agree dept

I was traveling a bit this week, so I didn't watch the slow-motion train wreck that was happening on YouTube in real time. The latest situation began when Vox video producer Carlos Maza posted publicly on Twitter about how Steven Crowder -- one of those ranty, angry "comedians" -- kept posting "repeated, overt attacks on my sexual orientation and ethnicity." He noted that Crowder's fans had taken to harassing and doxxing him and generally being assholes. He "reported" the content to YouTube, saying that he felt it violated the site's policies on bullying and harassment. After a few days, YouTube posted via Twitter (oddly) a kinda weird explanation, saying that after reviewing the videos, it found they didn't actually violate YouTube's harassment policies.

Lots of people got angry about that decision, and then YouTube changed its mind (partly), choosing to (maybe temporarily) demonetize Crowder's channel until he agreed to "address all of the issues with his channel," specifically "continued egregious actions that have harmed the broader community" -- whatever that means.

As Robby Soave at Reason notes, this is a solution that pissed off absolutely everyone and satisfied absolutely no one. Though there is one thing pretty much everyone agrees on: boy, YouTube sure pointed a pretty large cannon at its own foot in dealing with this one (seriously, don't they employ people who have some sort of clue about these kinds of communication issues?).

As Soave points out, there are really no good results here. He's correct that Crowder does seem to be an asshole, and there's no reason to express any sympathy for Crowder being a jerk and attacking someone for their sexual orientation or ethnicity. Crowder deserves to be called out and mocked for such things. At the same time, it is quite reasonable to sympathize with Maza, as being on the receiving end of such targeted harassment by assholes is horrific. Part of the problem here is the disconnect between what Crowder himself did (just be a general asshole) and what Crowder's followers and fans did (taking Crowder's assholish comments and escalating them into harassment). That puts a platform like YouTube (once again) into a really impossible position. Should it hold Crowder responsible for the actions of his crazy, deranged followers (which it can easily be argued he winkingly/noddingly encouraged) even if Crowder didn't do the harassment directly and was just generally an asshole? It's a tough call. It may seem like an easy call, but try to apply that standard to other situations and it gets complicated fast.

Katie Herzog, at The Stranger, posted a thoughtful piece about how this particular standard could boomerang back on the more vulnerable and marginalized people in our society (as is the case with almost any effort towards censorship). Even if Crowder is deeply unfunny and a jerk, this standard creates follow-on effects:

Crowder is a comic, doing exactly what comics do: Mocking a public figure. There's nothing illegal about that, and if YouTube does reverse its decision and start to ban everyone who mocks people for their sexuality or race, they're going to have to ban a whole lot of queer people of color who enjoy making fun of straight white dudes next. That's not a precedent I'd like to see set.

Of course, the usual response to this is for people to claim that we're making a bogus "slippery slope" argument that isn't there. What they mean is that since you and I can somehow magically work out which assholes deserve to be shut down and which assholes are doing it in pursuit of the larger good, then clearly a company can put in place a workable policy that says "stop just the assholes I don't like."

And there are reasons to be sympathetic to such a position. It's just that we have a couple decades of seeing how this works at scale in actual practice and it doesn't work. Ratchet up the ability to silence assholes and there are plenty of "false positives" -- people getting kicked off for dopey reasons. Go in the other direction and you end up with too many assholes on the platform. As we've discussed for years, there is no right level and there is no way to set the controls in a way that works. No matter what decision is made is going to piss off a ton of people. This is why I've been pushing for platforms to actually move in a different direction. Rather than taking on all of the policing responsibility themselves, open it up. Let others build services for content moderation, push the power to the ends of the network, and let there actually be competition among the moderation options, rather than having it come from a single, centralized source. There will still be problems with that, but it avoids many of the issues with the mess described here.
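To make that "push the power to the ends of the network" idea concrete, here's a minimal sketch (entirely hypothetical provider names and rules -- this describes no existing service or protocol) of a client that applies whichever moderation services a user chooses to subscribe to, rather than one centralized policy:

```python
# A minimal sketch of "moderation at the edges": the client combines verdicts
# from moderation services the *user* chose to subscribe to. The provider
# names, labels, and combination rule here are all hypothetical illustrations.
from typing import Callable, List

Post = str
Verdict = str  # "allow", "flag", or "hide"

def no_slurs(post: Post) -> Verdict:
    return "hide" if "some_slur" in post.lower() else "allow"

def spam_filter(post: Post) -> Verdict:
    return "flag" if post.count("http") > 3 else "allow"

def render_feed(posts: List[Post], providers: List[Callable[[Post], Verdict]]) -> None:
    """Apply the user's chosen moderation providers; strictest verdict wins."""
    severity = {"allow": 0, "flag": 1, "hide": 2}
    for post in posts:
        verdict = max((p(post) for p in providers), key=severity.get, default="allow")
        if verdict == "hide":
            continue  # this user never sees it
        prefix = "[flagged] " if verdict == "flag" else ""
        print(prefix + post)

# Two users, same posts, different moderation subscriptions.
posts = ["hello world", "buy now http://a http://b http://c http://d"]
render_feed(posts, providers=[no_slurs, spam_filter])   # cautious user
render_feed(posts, providers=[])                        # anything-goes user
```

The point of the sketch is simply that the same feed can look different to different users depending on which moderation layers they opted into, with no single company making the call for everyone.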


Posted on Techdirt - 5 June 2019 @ 1:58pm

Settlement In Tom Brady Photo Case Leaves Issue Of Copyright On Embedded Images Unsettled

from the bring-back-the-server-test dept

A little over a year ago, we wrote about a pretty bad ruling in NY, by Judge Katherine Forrest, arguing that merely embedding content on a site -- even though it's hosted elsewhere -- could be deemed infringing. This went against what has been known as the "server test," which says that the issue is where the content is actually hosted (which server it's actually on), and that merely embedding the image shouldn't lead to new claims of infringement. Considering that, technically, embedding an image is no different from linking to it, saying that an embed of an image hosted elsewhere is itself infringing could put much of the basic concept of how the internet works at risk.

This particular case involved a photo of quarterback Tom Brady that had been posted originally to Snapchat. The image, taken by photographer Justin Goldman, made its way from Snapchat to Reddit to Twitter. Some news organizations embedded tweets showing the photo, using Twitter's native embed functionality. Goldman sued a bunch of them. Judge Forrest, citing the Supreme Court's "looks like a duck" test from the Aereo ruling, said that embedding qualifies as displaying a work (even though the websites in question aren't hosting anything other than a pointer telling users' computers to go find that image). Even worse, Forrest explicitly rejected the server test, saying it was wrong.
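To make the technical point concrete, here's a minimal sketch (hypothetical URLs, not taken from the case record) of the distinction the server test turns on: whose server actually transmits the image bytes to the reader.

```python
# A minimal sketch (hypothetical URLs) of what the "server test" turns on:
# whose server actually transmits the image bytes to the reader.
from urllib.parse import urlparse

PAGE_URL = "https://news.example.com/story.html"

# An embed: the publisher's HTML merely points the reader's browser at an
# image stored on someone else's server (here, a stand-in for Twitter's CDN).
EMBEDDED_IMG = '<img src="https://pbs.twimg.example/media/brady-photo.jpg">'

# Self-hosting: the image bytes sit on the publisher's own server.
HOSTED_IMG = '<img src="https://news.example.com/images/brady-photo.jpg">'

def publisher_serves_the_bytes(page_url: str, img_tag: str) -> bool:
    """True if the image is served from the same host as the page itself."""
    src = img_tag.split('src="')[1].split('"')[0]
    return urlparse(src).netloc == urlparse(page_url).netloc

for label, tag in [("embed", EMBEDDED_IMG), ("self-hosted", HOSTED_IMG)]:
    print(f"{label}: publisher serves the bytes -> {publisher_serves_the_bytes(PAGE_URL, tag)}")

# Under the server test, only the self-hosted case implicates the publisher's
# display right; under Judge Forrest's reading, both cases would.
```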

This was poised to be a pretty big deal... except that it's not, because the entire lawsuit has been settled, leaving the question of whether the server test is valid (especially in NY, where the case was filed) unanswered. While there is the Forrest ruling on the books, since it's from a district court it creates no official precedent that other courts need to follow (though that won't stop it from being cited). However, as the linked article notes, there are some other cases challenging the server test and looking at the legality of embeds still going on, so perhaps we won't have to wait long for the issue to bubble up again. One hopes that, this time, a court will accept the basic server test as the only reasonable interpretation of the law.


Posted on Free Speech - 5 June 2019 @ 9:17am

European Court Of Justice Suggests Maybe The Entire Internet Should Be Censored And Filtered

from the oh-come-on dept

The idea of an open "global" internet keeps taking a beating -- and the worst offender is not, say, China or Russia, but rather the EU. We've already discussed things like the EU Copyright Directive and the Terrorist Content Regulation, but it seems like every day there's something new and more ridiculous -- and the latest may be coming from the Court of Justice of the EU (CJEU), which frequently is a bulwark against overreaching laws regarding the internet, but sometimes (too frequently, unfortunately) gets things really, really wrong (saying the "Right to be Forgotten" applied to search engines was one terrible example).

And now, the CJEU's Advocate General has issued a recommendation in a new case that would be hugely problematic for the idea of a global open internet that isn't weighted down with censorship filters. The Advocate General's recommendations are just that: recommendations for the CJEU to consider before making a final ruling. However, as we've noted in the past, the CJEU frequently accepts the AG's recommendations. Not always. But frequently.

The case here involves an attempt to get Facebook to delete content critical of a politician in Austria under Austrian law. In the US, of course, social media companies are not required to delete such information. The content itself is usually protected by the 1st Amendment, and the platforms are then protected by Section 230 of the Communications Decency Act, which prevents them from being held liable even if the content in question does violate the law (though, importantly, most platforms will still remove such content if a court has determined that it violates the law).

In the EU, the intermediary liability scheme is significantly weaker. Under the E-Commerce Directive's rules, there is an exemption from liability, but it's much more similar to the DMCA's safe harbors for copyright-infringing material in the US. That is, the liability exemptions only apply if the platform doesn't have knowledge of the "illegal activity," and if it does get such knowledge, it needs to remove the content. There is also a prohibition on a "general monitoring" requirement (i.e., filters).

The case at hand involved someone on Facebook posting a link to an article about an Austrian politician, Eva Glawischnig-Piesczek, and adding some comments along with the link. Specifically:

That user also published, in connection with that article, an accompanying disparaging comment about the applicant accusing her of being a ‘lousy traitor of the people’, a ‘corrupt oaf’ and a member of a ‘fascist party’.

In the US -- some silly lawsuits notwithstanding -- such statements would be clearly protected by the 1st Amendment. Apparently not so much in Austria. But then there's the question of Facebook's responsibility.

An Austrian court ordered Facebook to remove the content, which it did by blocking access to it for anyone in Austria. The original demand was also that Facebook be required to prevent "equivalent content" from appearing as well. On appeal, a court rejected Facebook's argument that it only had to comply in Austria, but also said that the "equivalent content" requirement could be limited to cases where someone alerted Facebook to the "equivalent content" being posted (and, thus, not a general monitoring requirement).

From there, the case went to the CJEU, which was asked to determine whether such blocking needs to be global and how the "equivalent content" question should be handled.

And, then, basically everything goes off the rails. First up, the Advocate General seems to think that -- like many folks who are misguided about CDA 230 -- there's some sort of "neutrality" requirement for internet platforms, and that doing any sort of monitoring might cost them their safe harbors for no longer being neutral. This is mind-blowingly stupid.

It should be observed that Article 15(1) of Directive 2000/31 prohibits Member States from imposing a general obligation on, among others, providers of services whose activity consists in storing information to monitor the information which they store or a general obligation actively to seek facts or circumstances indicating illegal activity. Furthermore, it is apparent from the case-law that that provision precludes, in particular, a host provider whose conduct is limited to that of an intermediary service provider from being ordered to monitor all (9) or virtually all (10) of the data of all users of its service in order to prevent any future infringement.

If, contrary to that provision, a Member State were able, in the context of an injunction, to impose a general monitoring obligation on a host provider, it cannot be precluded that the latter might well lose the status of intermediary service provider and the immunity that goes with it. In fact, the role of a host provider carrying out general monitoring would no longer be neutral. The activity of that host provider would not retain its technical, automatic and passive nature, which would imply that that host provider would be aware of the information stored and would monitor it.

Say what now? It's right that general monitoring is not required (and explicitly rejected) in the law, but the corollary that deciding to do general monitoring wipes out your safe harbors is... crazy. Here, the AG is basically saying we can't have a general monitoring obligation (good) because that would overturn the requirement of platforms to be neutral (crazy):

Admittedly, Article 14(1)(a) of Directive 2000/31 makes the liability of an intermediary service provider subject to actual knowledge of the illegal activity or information. However, having regard to a general monitoring obligation, the illegal nature of any activity or information might be considered to be automatically brought to the knowledge of that intermediary service provider and the latter would have to remove the information or disable access to it without having been aware of its illegal content. (11) Consequently, the logic or relative immunity from liability for the information stored by an intermediary service provider would be systematically overturned, which would undermine the practical effect of Article 14(1) of Directive 2000/31.

In short, the role of a host provider carrying out such general monitoring would no longer be neutral, since the activity of that host provider would no longer retain its technical, automatic and passive nature, which would imply that the host provider would be aware of the information stored and would monitor that information. Consequently, the implementation of a general monitoring obligation, imposed on a host provider in the context of an injunction authorised, prima facie, under Article 14(3) of Directive 2000/31, could render Article 14 of that directive inapplicable to that host provider.

I thus infer from a reading of Article 14(3) in conjunction with Article 15(1) of Directive 2000/31 that an obligation imposed on an intermediary service provider in the context of an injunction cannot have the consequence that, by reference to all or virtually all of the information stored, the role of that intermediary service provider is no longer neutral in the sense described in the preceding point.

So the AG comes to a good result through horrifically bad reasoning.

However, while rejecting general monitoring, the AG then goes on to talk about why more specific monitoring and censorship is probably just fine and dandy, with a somewhat odd aside about how the "duration" of the monitoring can make it okay. The key point is that the AG has no problem with saying that, once something is deemed "infringing," the platform can be required to remove new instances of the same content:

In fact, as is clear from my analysis, a host provider may be ordered to prevent any further infringement of the same type and by the same recipient of an information society service. (24) Such a situation does indeed represent a specific case of an infringement that has actually been identified, so that the obligation to identify, among the information originating from a single user, the information identical to that characterised as illegal does not constitute a general monitoring obligation.

To my mind, the same applies with regard to information identical to the information characterised as illegal which is disseminated by other users. I am aware of the fact that this reasoning has the effect that the personal scope of a monitoring obligation encompasses every user and, accordingly, all the information disseminated via a platform.

Nonetheless, an obligation to seek and identify information identical to the information that has been characterised as illegal by the court seised is always targeted at the specific case of an infringement. In addition, the present case relates to an obligation imposed in the context of an interlocutory order, which is effective until the proceedings are definitively closed. Thus, such an obligation imposed on a host provider is, by the nature of things, limited in time.

And then, based on nothing at all, the AG pulls out the "magic software will make this work" reasoning, insisting that software tools will make sure that the right content is properly censored:

Furthermore, the reproduction of the same content by any user of a social network platform seems to me, as a general rule, to be capable of being detected with the help of software tools, without the host provider being obliged to employ active non-automatic filtering of all the information disseminated via its platform.

This statement... is just wrong? First off, it acts as if using software to scan for the same content is somehow not a filter. But it is. And then it shows a real misunderstanding about the effectiveness of filters (and the ability of users to trick them). And there's no mention of false positives. I mean, in this case, a politician was called a corrupt oaf. How is Facebook supposed to block that? Is any use of the phrase "corrupt oaf" now blocked? Perhaps it would have to be "corrupt oaf" and the politician, Eva Glawischnig-Piesczek, together to be blocked. But, in that case, does it mean that this article itself cannot be posted on Facebook? So many questions...
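As an illustration of why "matching the same content" is itself a filter -- and a brittle one -- here's a minimal sketch (my own made-up post text and logic; it describes nothing about Facebook's actual systems):

```python
# A minimal sketch of exact-match filtering and why it is brittle.
import hashlib

def fingerprint(text: str) -> str:
    """Exact-match fingerprint: any change at all produces a different hash."""
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

# Content a court has declared illegal (illustrative wording).
blocked = {fingerprint("Eva Glawischnig-Piesczek is a corrupt oaf")}

new_posts = [
    "Eva Glawischnig-Piesczek is a corrupt oaf",                # identical repost
    "Eva Glawischnig-Piesczek is a corrupt  oaf",               # extra space: slips through
    "A court said calling her a 'corrupt oaf' is illegal",      # news coverage of the case
]

for post in new_posts:
    print("BLOCK" if fingerprint(post) in blocked else "ALLOW", "->", post)

# Exact matching misses trivial variations (false negatives), so real systems
# reach for fuzzier matching -- which is exactly where news reports, criticism,
# and articles like this one risk becoming false positives.
```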

The AG then insists that somehow this isn't too burdensome (based on what, exactly?) and seems to make the mistake of many non-technical people, who think that filters are (a) much better than they are, and (b) not dealing with significant gray areas all the time.

First of all, seeking and identifying information identical to that which has been characterised as illegal by a court seised does not require sophisticated techniques that might represent an extraordinary burden.

And, I mean, perhaps that's true for Facebook -- but it certainly could represent a much bigger burden for lots of other, smaller providers. Like us, for example.

Hilariously, as soon as the AG is done saying the filtering is easy, the recommendation notes that (oh right!) context may be important:

Last, such an obligation respects internet users’ fundamental right to freedom of expression and information, guaranteed in Article 11 of the Charter, in so far as the protection of that freedom need not necessarily be ensured absolutely, but must be weighed against the protection of other fundamental rights. As regards the information identical to the information that was characterised as illegal, it consists, prima facie and as a general rule, in repetitions of an infringement actually characterised as illegal. Those repetitions should be characterised in the same way, although such characterisation may be nuanced by reference, in particular, to the context of what is alleged to be an illegal statement.

Next up is the question of blocking "equivalent content." The AG properly notes that determining what is, and what is not, "equivalent" represents quite a challenge -- and at least seeks to limit what may be ordered blocked, saying that it should only apply to content from the same user, and that any injunction must be quite specific about what needs to be blocked:

I propose that the answer to the first and second questions, in so far as they relate to the personal scope and the material scope of a monitoring obligation, should be that Article 15(1) of Directive 2000/31 must be interpreted as meaning that it does not preclude a host provider operating a social network platform from being ordered, in the context of an injunction, to seek and identify, among all the information disseminated by users of that platform, the information identical to the information that was characterised as illegal by a court that has issued that injunction. In the context of such an injunction, a host provider may be ordered to seek and identify the information equivalent to that characterised as illegal only among the information disseminated by the user who disseminated that illegal information. A court adjudicating on the removal of such equivalent information must ensure that the effects of its injunction are clear, precise and foreseeable. In doing so, it must weigh up the fundamental rights involved and take account of the principle of proportionality.

Then, finally, it gets to the question of global blocking -- and basically says that nothing in EU law prevents a member state, such as Austria, from ordering global blocking, and that it therefore can do so -- but that member state courts should consider the consequences of ordering such global takedowns.

... as regards the territorial scope of a removal obligation imposed on a host provider in the context of an injunction, it should be considered that that obligation is not regulated either by Article 15(1) of Directive 2000/31 or by any other provision of that directive and that that provision therefore does not preclude that host provider from being ordered to remove worldwide information disseminated via a social network platform. Nor is that territorial scope regulated by EU law, since in the present case the applicant’s action is not based on EU law.

Regarding the consequences:

To conclude, it follows from the foregoing considerations that the court of a Member State may, in theory, adjudicate on the removal worldwide of information disseminated via the internet. However, owing to the differences between, on the one hand, national laws and, on the other, the protection of the private life and personality rights provided for in those laws, and in order to respect the widely recognised fundamental rights, such a court must, rather, adopt an approach of self-limitation. Therefore, in the interest of international comity, (51) to which the Portuguese Government refers, that court should, as far as possible, limit the extraterritorial effects of its injunctions concerning harm to private life and personality rights. (52) The implementation of a removal obligation should not go beyond what is necessary to achieve the protection of the injured person. Thus, instead of removing the content, that court might, in an appropriate case, order that access to that information be disabled with the help of geo-blocking.

That is a wholly unsatisfying answer, given that we all know how little many governments think about "self-limitation" when it comes to censoring critics globally.

And now we have to wait to see what the court says. Hopefully it does not follow these recommendations. As intermediary liability expert Daphne Keller from Stanford notes, there are some serious procedural problems with how all of this shakes out. In particular, because of the nature of the CJEU, it will only hear from some of the parties whose rights are at stake (a lightly edited quote of her tweetstorm):

The process problems are: (1) National courts don’t have to develop a strong factual record before referring the case to the CJEU, and (2) Once cases get to the CJEU, experts and public interest advocates can’t intervene to explain the missing info. That’s doubly problematic when – as in every intermediary liability case – the court hears only from (1) the person harmed by online expression and (2) the platform but NOT (3) the users whose rights to seek and impart information are at stake. That's an imbalanced set of inputs. On the massively important question of how filters work, the AG is left to triangulate between what plaintiff says, what Facebook says, and what some government briefs say. He uses those sources to make assumptions about everything from technical feasibility to costs.

And, in this case in particular, that leads to some bizarre results -- including quoting a fictional movie as evidence.

In the absence of other factual sources, he also just gives up and quotes from a fictional movie – The Social Network -- about the permanence of online info.

That, in particular, is most problematic here. It is literally the first line of the AG's opinion:

The internet’s not written in pencil, it’s written in ink, says a character in an American film released in 2010. I am referring here, and it is no coincidence, to the film The Social Network.

But a quote from a film -- one that is arguably not even true -- seems like an incredibly weak basis for a ruling that could fundamentally lead to massive global censorship filters across the internet. Again, one hopes that the CJEU goes in a different direction, but I wouldn't hold my breath.


Posted on Techdirt - 4 June 2019 @ 3:35pm

New Study Shows That All This Ad Targeting Doesn't Work That Well

from the well-duh dept

Just a couple months ago, I wrote a post saying that for all the focus on "surveillance capitalism," and the claims that Facebook and Google need to suck up more and more data to better target ads, the secret reality is that all of this ad targeting doesn't really work, and it's mostly a scam pulled on advertisers to get them to pay higher rates for little actual return. And now a new study says that publishers, in particular, are seeing basically no extra revenue from heavily targeted ads, but some of the middlemen ad tech companies are making out like bandits. In other words, a lot of this is snake oil arbitrage. The WSJ has summarized the findings:

But in one of the first empirical studies of the impacts of behaviorally targeted advertising on online publishers’ revenue, researchers at the University of Minnesota, University of California, Irvine, and Carnegie Mellon University suggest publishers only get about 4% more revenue for an ad impression that has a cookie enabled than for one that doesn’t. The study tracked millions of ad transactions at a large U.S. media company over the course of one week.

That modest gain for publishers stands in contrast to the vastly larger sums advertisers are willing to pay for behaviorally targeted ads. A 2009 study by Howard Beales, a professor at George Washington University School of Business and a former director of the Bureau of Consumer Protection at the Federal Trade Commission, found advertisers are willing to pay 2.68 times more for a behaviorally targeted ad than one that wasn’t.

Much of the premium likely is being eaten up by the so-called “ad tech tax,” the middlemen’s fees that eat up 60 cents of every dollar spent on programmatic ads, according to marketing intelligence firm Warc.
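Putting those numbers side by side makes the gap stark. Here's a rough back-of-the-envelope sketch (hypothetical CPM figures; the three statistics come from different studies, so treat this as an illustration of the gap rather than a precise reconciliation):

```python
# Back-of-the-envelope illustration of the gap those figures describe.
untargeted_cpm = 10.00     # hypothetical price an advertiser pays per 1,000 untargeted impressions
targeting_premium = 2.68   # Beales (2009): advertisers pay ~2.68x for behavioral targeting
ad_tech_tax = 0.60         # Warc: middlemen take ~60 cents of each programmatic dollar
publisher_lift = 0.04      # new study: publishers earn only ~4% more per cookie-enabled impression

advertiser_extra = untargeted_cpm * (targeting_premium - 1)   # extra spend per 1,000 impressions
publisher_base = untargeted_cpm * (1 - ad_tech_tax)           # what the publisher sees untargeted
publisher_extra = publisher_base * publisher_lift             # extra revenue from targeting

print(f"Advertiser pays an extra    ${advertiser_extra:6.2f} per 1,000 targeted impressions")
print(f"Publisher receives an extra ${publisher_extra:6.2f}")
print(f"Share of the premium reaching the publisher: {publisher_extra / advertiser_extra:.1%}")
# In this sketch, roughly 1% of the targeting premium reaches the publisher;
# the rest stays somewhere in the ad tech chain.
```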

For a site that relies on advertising to make money, this is hellishly frustrating. For years we've been pitching non-invasive, non-tracking ad campaigns for Techdirt. Over and over again, we tell potential advertisers that people here would be much more open to paying attention to their ads if they promised not to do any tracking at all. And, over and over again, companies (even those that initially express interest) decide to throw all their money at the big flashy adtech firms that promise to use "AI" and "machine learning" to better target their ads -- and get little in return for it.

We still hope that sooner or later advertisers will realize that they're getting scammed by the ad companies promising miracles in the form of tracking everything, and go back to recognizing that good, old-fashioned brand advertising works well without the need for invasive, intrusive surveillance.


Posted on Techdirt - 4 June 2019 @ 10:42am

A Legal Fight Against The SEC May Represent Our Last Hope For An Open, Distributed Internet

from the pay-attention dept

Let's get this out of the way up top: yes, many cryptocurrencies and "Initial Coin Offerings" (ICOs) were complete scams, designed to dupe people out of billions of dollars. It's entirely reasonable to call those out, and to argue that there should be some significant regulatory oversight of such scams. However, it is also possible to believe that an overreaction to such scams could kill off a nascent attempt to rebuild a truly open and distributed internet. For years now, I've been talking about why we could better fulfill the dream of an open, distributed internet if we were to move to a world of protocols, not platforms, and in a more recent post, I've discussed some policy proposals to help the world move in that direction -- with the final one concerning the SEC, and getting it to stop looking at cryptocurrencies solely as financial instruments nearly identical to securities. This is not to say cryptocurrencies should avoid all scrutiny. But a working cryptocurrency system -- one in which the success of a protocol can be driven by its actual usage and development, rather than by ads or "surveillance capitalism" -- would benefit massively from more freedom to experiment.

A few years back, the social network/messaging app Kik started an experiment in this space -- one that, by itself, does not appear likely to be all that successful -- raising $100 million with an ICO and designing it so that its "Kin" tokens could be used to reward developers who build services. The company has put some effort into encouraging developers to build within its ecosystem, and into getting others to use the Kin tokens as currency.

However, mostly behind the scenes, Kik and the SEC have been having a bit of a fight over whether or not the ICO was an unregistered securities sale. Back in January, the company revealed that it had been negotiating with the SEC over the whole thing.

The SEC isn’t accusing Kik of fraud, Mr. Livingston said. Rather, its enforcement division believes Kik failed to register the sale with the SEC and thus didn’t give investors the proper information. The agency’s enforcement action must be authorized by the SEC’s commissioners, and it’s unknown whether they have voted to authorize the litigation.

[....]

The SEC says most digital tokens are covered by a 73-year-old Supreme Court decision that defined which investments are considered securities. Many tokens meet the court’s test because they can be traded for profit, and their value is tied to the performance of the startup that sold them, regulators say.

In a 39-page rebuttal on Dec. 10 to the SEC, Kik argued the sale terms, in fact, don’t constitute an investment contract, and investors weren’t led to expect to profit on their purchase of kin.

“Bringing the proposed enforcement action against Kik and the foundation would amount to doubling down on a deeply flawed regulatory and enforcement approach,” the company’s lawyers wrote, according to copy of the rebuttal reviewed by the Journal.

Since that time, the two sides have continued to negotiate, with Kik basically now admitting it wants a judge to weigh in because it can't get the SEC to see things its way. It has announced that it has set aside $5 million to go to court with the SEC over this matter (and is asking for further donations).

Despite the fact that last month over 300,000 people earned and spent Kin as a currency, the SEC is still saying that it might be a security. After months of trying to find a reasonable solution, Kin has been unable to reach a settlement that wouldn’t severely impact the Kin project and everyone in the space. So Kin is going to take on the SEC in court to make sure there is a foundation for innovation going forward.

As the company notes, the current ambiguity is acting as a real "innovation tax" in the space. Many companies are refusing to experiment with these kinds of offerings, or even to work with existing tokens, out of fear of how the SEC might completely upend the space with a decision one way or the other. Indeed, in a recent talk, SEC Commissioner Hester Peirce (who is supportive of more experimentation with crypto) very clearly worries about how ambiguities in the way the SEC has acted over the last couple of years will stifle innovation. The speech notes that the SEC has mostly avoided heavy-handed regulation in the space, but that it has not done much to actually clarify the rules.

The SEC staff recently issued a framework to assist issuers with conducting a Howey analysis of potential token offerings. The document is a thorough 14 pages. It points to features of an offering and actions by an issuer that could signal that the offering is likely a securities offering. If this framework helps issuers understand what the different Howey factors might look like in an ICO context, it may be valuable. I am concerned, however, that it could raise more questions and concerns than it answers.

While Howey has four factors to consider, the framework lists 38 separate considerations, many of which include several sub-points. A seasoned securities lawyer might be able to infer which of these considerations will likely be controlling and might therefore be able to provide the appropriate weight to each. Whether the framework gives anything new to the seasoned securities lawyer used to operating in the facts and circumstances world of Howey is an open question. I worry that non-lawyers and lawyers not steeped in securities law and its attendant lore will not know what to make of the guidance. Pages worth of factors, many of which seemingly apply to all decentralized networks, might contribute to the feeling that navigating the securities laws in this area is perilous business. Rather than sorting through the factors or hiring an expensive lawyer to do so, a wary company may reasonably decide to forgo certain opportunities or to pursue them in a more crypto-friendly jurisdiction overseas.

On the same day the Corporation Finance staff issued the Framework, the staff also issued the first token no-action letter in response to an inquiry from TurnKey Jet, a charter jet company. The company intended to effectively tokenize gift cards. Customer members could purchase tokens that would be redeemable, dollar for dollar, for charter jet services. The tokens could be sold only to other members. This transaction is so clearly not an offer of securities that I worry the staff’s issuance of a digital token no-action letter—the first and so far only such letter—may in fact have the effect of broadening the perceived reach of our securities laws. If these tokens were securities, it would be hard to distinguish them from any medium of stored value. Is a Starbucks card a security? If we are going that far, I can only imagine what name the barista will write on my coffee cup.

And yet, the staff’s letter did not stop at merely stating that the token offering would not qualify as a securities offering, but highlighted specific but non-dispositive factors. In other words, the letter effectively imposed conditions on a non-security. For example, the staff’s response prohibits the company from repurchasing the tokens unless it does so at a discount. Further, as I mentioned earlier, the incoming letter precluded a secondary market that includes non-members. Does that mean that a company that chooses to offer to repurchase gift cards at a premium or that allows gift card purchasers to sell or give them to third parties needs to call its securities lawyer to start the registration process?

As Peirce notes, there are still so many hugely open questions, and the potential liability for getting any of these wrong is clearly holding back many possible innovations that could be quite important to a more distributed, more open, internet. After listing out a bunch of unanswered questions, she notes:

On these points, the SEC has been nearly silent. This silence may ultimately be deadly. An issuer can conduct a private securities offering with no SEC involvement. The rules that distinguish a private from a public offering focus on the offerees and investors. The form that the security is in—whether shares of common stock or interests in orange groves—has no bearing on how the rules operate. The rules that govern broker-dealers, investment advisers, auditors, and trading platforms are different. They govern the ownership, storage, and exchange of securities—exactly the aspects of digital assets that the crypto industry seeks to transform. A broker or adviser cannot custody an asset if it does not know how to show it has possession and control of the asset. An auditor must be able to review and verify the actual transactions.

Additionally, while issuers that rely on the private offering exemption do not need SEC permission to issue securities, a platform cannot trade securities unless it is registered with the SEC as an exchange or an alternative trading system. A broker-dealer generally must register with the SEC and FINRA.

The SEC has yet to provide guidance to the public or FINRA on any of the core questions. The result is that many would-be brokers and trading platforms are stuck in a frustrating waiting mode; they are unable to get clear answers to questions about how they may proceed in this market.

This is why Kik/Kin going to court to force the SEC to make some decisions is so important here. Venture capitalist Fred Wilson put up a blog post explaining why:

Sadly, the SEC looks at crypto tokens and sees securities that they want to regulate as such. They cannot seem to understand that not all of these assets are securities, they cannot seem to understand that most are commodities, currencies, or utilities like frequent flyer miles. They cannot understand that crypto tokens are unlike any assets that have come before them and that crypto tokens need new regulatory structures. They cannot understand that their unwillingness to come up with new rules paired with their “regulate by enforcement” strategy is hurting the crypto sector, pushing it offshore, and is causing most of the new projects to raise capital outside of the US and/or put together legal structures that look like Frankenstein monsters.

None of this is an attempt to argue that there shouldn't be any oversight of cryptocurrencies. Clearly, and obviously, there have been many that are little more than Ponzi schemes and get-rich-quick cons. The entire space will benefit from some clear rules that enable greater experimentation with things like helping to fund protocols via such tokenization -- but the SEC's overall approach to date has been one where it seems, on the one hand, afraid to do anything at all other than make scary noises and threats, and, on the other, to hint that any new cryptocurrency offering should have to go through an FDA-like approval process before it might hit the market. This is an untenable position.

I have my doubts as to whether the Kik/Kin approach to cryptocurrency will itself work. But its legal fight is an important one, if you'd like to see a better decentralized internet. Dealing with scams is one thing, but if every new platform startup needs to go through the regulatory rigmarole as if it were a company preparing to go public, that would be a massive chill on innovation and experimentation. There needs to be a better, more permissionless manner of exploring these ideas, so what happens with Kik's legal challenge here will ultimately be extremely important for the future of a world of protocols over platforms.

