Look, I know some folks get annoyed that I write as much about Elon Musk and exTwitter as I do, but he really is the most compelling case study in sheer wrongness when it comes to running a modern internet company, and it's just endlessly fascinating. And, really, if he just stopped doing stupid things, I could get to the very long list of other stuff I'd like to write about, but day after day after day, he just comes out with something new and stupid.
It’s uncanny.
Anyway, the latest, as first reported by Kylie Robison at Fortune, is that Elon wants to remove headlines and snippets from news articles posted to exTwitter. When an Elon stan tweeted about Robison’s article, Musk confirmed it and said it was coming from him “directly” and it will be done to “greatly improve esthetics.” From Robison’s article:
The change means that anyone sharing a link on X—from individual users to publishers—would need to manually add their own text alongside the links they share on the service; otherwise the tweet will display only an image with no context other than an overlay of the URL. While clicking on the image will still lead to the full article on the publisher’s website, the change could have major implications for publishers who rely on social media to drive traffic to their sites as well as for advertisers.
According to a source with knowledge of the matter, the change is being pushed directly by X owner Elon Musk. The primary objective appears to be to reduce the height of tweets, thus allowing more posts to fit within the portion of the timeline that appears on screen. Musk also believes the change will help curb clickbait, the source said.
“It’s something Elon wants. They were running it by advertisers, who didn’t like it, but it’s happening,” the source said, adding that Musk thinks articles occupy excessive space on the timeline.
On exTwitter people were passing around images of the change, going from the first screenshot on the left (how this currently works) to the example on the right, showing a giant image that will link to the article entirely without context.
How are people even going to know to click on those images when they just look like regular images? This is truly ridiculous.
Musk also claimed that journalists should publish directly on exTwitter rather than elsewhere, suggesting he thinks he can take on platforms like Substack.
Of course, a few years back, Twitter bought a Substack competitor called Revue, but Elon shut that down because nothing good from the old company can survive. Elon has to reinvent it in the dumbest way possible.
On top of that, Musk has obviously had his battles with media organizations. We covered his stupid battle with NPR that caused the organization to leave Twitter, even after Musk rescinded his petty changes to NPR’s account. And, of course, just recently there were the reports of how he was slowing down access to certain news sites. So perhaps this is just another attack on the media.
That said, there’s one other possibility that I haven’t seen anyone discuss. Just a few weeks ago, AFP sued exTwitter for failing to pay it under France’s extremely dumb snippet tax law. This is a French law that is similar in many ways to Australia’s and Canada’s link taxes, except it’s more explicitly about snippets rather than links.
Of course, as we pointed out at the time, exTwitter should win that lawsuit, as the only way snippets show up on Twitter is if the media org sets up its Twitter Cards to work that way. The Twitter Cards feature is how news sites tell Twitter what to show when their links are shared, and this new change sounds like Musk will massively limit what data can be included through those cards.
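For those unfamiliar with how cards work under the hood: they’re just meta tags in a page’s HTML head that Twitter’s crawler reads when a link is shared. Here’s a minimal sketch of how one might inspect a page’s card metadata. The tag names (twitter:card, twitter:title, etc.) are Twitter’s documented ones; the function and example output are hypothetical, for illustration only.

```python
import requests  # pip install requests beautifulsoup4
from bs4 import BeautifulSoup

def read_twitter_card(url: str) -> dict:
    """Fetch a page and collect the Twitter Card meta tags a crawler would read."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    card = {}
    for tag in soup.find_all("meta"):
        name = tag.get("name", "")
        if name.startswith("twitter:"):
            card[name] = tag.get("content", "")
    return card

# A news article page typically yields something like:
# {"twitter:card": "summary_large_image", "twitter:title": "...",
#  "twitter:description": "...", "twitter:image": "https://..."}
```

The point being: the headline and snippet Twitter displays are whatever the publisher itself chose to put in those tags.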
So, it seems entirely possible that once Elon learned France was trying to make him pay for the snippets that show up via Twitter Cards, he just told the team to get rid of the snippets to get out of paying.
Of course, the reality is that this just (yet again) makes the product way worse, especially for news, which is a major reason that people use the platform. As with the ‘remove block’ idea, pretty much everyone seems to be telling Elon this is a dumb idea, but those seem to be the decisions he revels in the most.
Elon Musk’s commitment to free speech and the free exchange of ideas has always been a joke. Despite his repeated claims to being a “free speech absolutist,” and promising that his critics and rivals alike would be encouraged to remain on exTwitter, he has consistently shown that he has a ridiculously thin skin, and a quick trigger response to try to remove, suppress, or silence those he dislikes.
Basically, Musk has made it clear that he views content moderation as a tool to get back at whoever displeases him. The latest, as first revealed by the Washington Post, is that exTwitter is using the t.co shortened links that Twitter controls (and through which it routes all links on the platform) to throttle links to certain sites, including the NY Times and Reuters, as well as social media operations he’s scared of, including Instagram, Facebook, Substack and Bluesky.
The company formerly known as Twitter has begun slowing the speed with which users can access links to the New York Times, Facebook and other news organizations and online competitors, a move that appears targeted at companies that have drawn the ire of owner Elon Musk.
It’s a weird kind of throttling, first noticed by someone on Hacker News, who observed that clicking on any of the disfavored URLs produced a five-second delay. As that user explained:
Twitter won’t ban domains they don’t like but will waste your time if you visit them.
I’ve been tracking the NYT delay ever since it was added (8/4, roughly noon Pacific time), and the delay is so consistent it’s obviously deliberate.
The NY Times itself confirmed this as well. However, that report also noted that after the Washington Post story started making the rounds, the throttle suddenly started to disappear.
The slowness, known in tech parlance as “throttling,” initially affected rival social networks including Facebook, Bluesky and Instagram, as well as the newsletter site Substack and news outlets including Reuters and The New York Times, according to The Times’s analysis. The delay to load links from X was relatively minor — about 4.5 seconds — but still noticeable, according to the analysis. Several of the services that were throttled have faced the ire of X’s owner, Elon Musk.
By Tuesday afternoon, the delay to reaching the news sites appeared to have lifted, according to The Times’s analysis.
My own spot test found that the throttling appears to be gone as well.
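If you want to run a spot test of your own, it’s easy enough. Here’s a rough sketch, assuming you’ve grabbed a real t.co link out of a tweet (the URL below is a placeholder): time how long t.co itself takes to answer, without following the redirect it returns, and compare across destinations.

```python
import time
import requests  # pip install requests

def time_tco(tco_url: str) -> float:
    """Time t.co's own response, without following the redirect it returns."""
    start = time.monotonic()
    resp = requests.get(tco_url, allow_redirects=False, timeout=30)
    elapsed = time.monotonic() - start
    print(f"{tco_url} -> {resp.headers.get('Location', '?')} took {elapsed:.2f}s")
    return elapsed

# Placeholder; substitute a real shortened link pulled from a tweet.
time_tco("https://t.co/XXXXXXXXXX")
```

A link to a disfavored domain sitting at a consistent ~5 seconds while everything else resolves near-instantly is exactly what made the throttling look so obviously deliberate.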
In the end, a short time delay is certainly not a huge deal, but it does, again, show how Elon is willing to weaponize the tools at his disposal to try to hurt those he dislikes, and to do so in a way that is transparently petty and silly once spotted, yet subtle enough that it was unlikely to be noticed immediately.
It is, of course, also another example of how fickle Musk’s actual commitment to “free speech” is. This is not new of course, and he is free to do this if he wants to. But he shouldn’t pretend that his view of free speech is somehow more noble than old Twitter’s when his reasons for such throttling are transparently petty payback, rather than based on any coherent policy.
Whatever you thought of old Twitter’s moderation practices, they were at least actually based on policy, and not whatever personally irked Jack or the trust & safety team.
Elon Musk has decided to reenable accounts suspended for posting CSAM while at the same time allowing the most basic of CSAM scanning systems to break. And that’s not even counting how most of the team in charge of fighting CSAM on the site either was laid off or left.
All of this has made Ex-Twitter a much riskier site in lots of ways: for advertisers, who have bailed, but also for anyone linking to the site.
Since Musk took control of Twitter, he mostly eliminated the Trust and Safety group and stopped paying the vendor that scans for CSAM. As a result, CSAM (child sexual abuse material) has apparently been circulating on Twitter recently (from what I’ve read elsewhere, the same notorious video that the feds found on Josh Duggar’s hard drive).
Musk also recently reinstated the account of someone who posted CSAM content.
As a result, we’ll be removing any content here that leads to Twitter, or, as he now calls it, X. Whether it’s an embed link or a direct link to a tweet. Don’t care what outlet is doing it. If you’re a reporter or editor, stop embedding links to Twitter in any of your content.
Note that they’re not just banning links that go directly to Twitter, but also links to news stories that link or embed Twitter content. As that final sentence notes, the subreddit is encouraging journalists to stop linking to Twitter entirely (remember, at Techdirt we banned Twitter embeds last year).
I’m not sure it’s reasonable to ban any news article that merely links to or embeds a tweet, but it’s certainly interesting to see how this subreddit, in particular, is handling the increasing liability that Twitter (er… Ex-Twitter) has become.
I had wondered if the members of that subreddit would be upset about this, but skimming the comments, it seems they’re pretty overwhelmingly in support of the move. Again, this is perhaps surprising, but it’s a real indicator of just how much damage Elon has done to Ex-Twitter’s brand, let alone to “X.”
There have been some ongoing debates (going back many years) in the copyright space regarding whether or not embedding infringing content into a website could be infringing in and of itself. If you understand what’s happening technically, this seems ludicrous. An embed is basically the same thing as a link. And merely linking to infringing content is unlikely to be infringing itself. All embedding is really doing is taking a link, and showing the content from that link. If embedding were found to be infringing, then there’s an argument that linking is infringing, and (as we’re seeing with various link tax proposals) that would break a fundamental part of how the internet works.
Last year we discussed a case that was on appeal to the 9th Circuit, that asked a slightly different question: could a company providing embeddable content (in this case Instagram) be held liable for providing embedding tools that then allowed others to embed content from that website elsewhere. In this case, some photographers argued that by providing tools (i.e., a tiny snippet of code that basically says “show this content at this link”) for embedding, Instagram was unfairly distributing works without a license. Specifically, the photographers were upset that works that they uploaded to Instagram were showing up in news articles after the media orgs used Instagram’s embed tool to embed the original (non-infringing) images. The lower court had (thankfully) rejected that argument.
The 9th Circuit has now upheld that lower court ruling, protecting some important elements of the ability to offer and use embed codes. At issue in this case, really, was yet another attempt to stretch the already problematic Aereo copyright test, which we’ve described as the “looks like a duck” test (ignore the underlying technical details and assume that if something looks like an infringing service, it must be infringing), by arguing that because embeds look like locally hosted content, they should be treated as locally hosted content.
Thankfully, the court rejects that line of argument, and makes it clear that the important Perfect 10 copyright case, which focused on who was actually hosting the material, was not overruled by Aereo.
This copyright dispute tests the limits of our holding in Perfect 10 v. Amazon, 508 F.3d 1146 (9th Cir. 2007) in light of the Supreme Court’s subsequent decision in American Broadcasting Companies, Inc. v. Aereo, 573 U.S. 431 (2014). Plaintiffs-appellees Alexis Hunley and Matthew Scott Brauer (collectively “Hunley”) are photographers who sued defendant Instagram for copyright infringement. Hunley alleges that Instagram violates their exclusive display right by permitting third-party sites to embed the photographers’ Instagram content. See 17 U.S.C. § 106(5). The district court held that Instagram could not be liable for secondary infringement because embedding a photo does not “display a copy” of the underlying images under Perfect 10.
We agree with the district court that Perfect 10 forecloses relief in this case. Accordingly, we affirm.
I’m actually somewhat impressed that the court’s discussion of embedding remote content vs. hosting local content is… pretty clear, correct, and understandable.
When a web creator wants to include an image on a website, the web creator will write HTML instructions that direct the user’s web browser to retrieve the image from a specific location on a server and display it according to the website’s formatting requirements. When the image is located on the same server as the website, the HTML will include the file name of that image. So for example, if the National Parks Service wants to display a photo of Joshua Tree National Park located on its own server, it will write HTML instructions directing the browser to display the image file, and the browser will retrieve and display the photo, hosted by the NPS server. By contrast, if an external website wants to include an image that is not located on its own servers, it will use HTML instructions to “embed” the image from another website’s server. To do so, the embedding website creator will use HTML instructions directing the browser to retrieve and display an image from an outside website rather than an image file. So if the embedding website wants to show the National Park Service’s Instagram post featuring Joshua Tree National Park—content that is not on the embedding website’s same server—it will direct the browser to retrieve and display content from Instagram’s server.
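To make the court’s distinction concrete, here’s a minimal sketch of the two kinds of HTML it’s describing. The file names and URLs are invented for illustration only.

```python
# Minimal sketch of the two approaches the court describes.
# File names and URLs are hypothetical, for illustration only.

# Local hosting: the image bytes live on the publisher's own server,
# so the HTML points at a file path on that same server.
LOCALLY_HOSTED = '<img src="/images/joshua-tree.jpg" alt="Joshua Tree">'

# Embedding: the HTML points the reader's browser at a third-party
# server; the embedding site never stores or serves the image bytes.
EMBEDDED = '<img src="https://instagram.example/p/abc123/media" alt="Joshua Tree">'

# Either way, the *browser* fetches and displays the image. The only
# difference -- and the crux of the Server Test -- is whose server
# holds the copy.
```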
It even includes an example of a full Instagram embed, which is not something you normally see in a judicial ruling.
The court does say that hyperlinking is different from embedding (though I’d argue it’s not really), but at least the court is clear that embedded content is not hosted by the website using the embed code:
As illustrated by the HTML instructions above, embedding is different from merely providing a hyperlink. Hyperlinking gives the URL address where external content is located directly to a user. To access that content, the user must click on the URL to open the linked website in its entirety. By contrast, embedding provides instructions to the browser, and the browser automatically retrieves and shows the content from the host website in the format specified by the embedding website. Embedding therefore allows users to see the content itself—not merely the address—on the embedding website without navigating away from the site. Courts have generally held that hyperlinking does not constitute direct infringement. See, e.g., Online Pol’y Grp. v. Diebold, Inc., 337 F. Supp. 2d 1195, 1202 n.12 (N.D. Cal. 2004) (“[H]yperlinking per se does not constitute direct infringement because there is no copying, [but] in some instances there may be a tenable claim of contributory infringement or vicarious liability.”); MyPlayCity, Inc. v. Conduit Ltd., 2012 WL 1107648, at *12 (S.D.N.Y. Mar. 20, 2012) (collecting cases), adhered to on reconsideration, 2012 WL 2929392 (S.D.N.Y. July 18, 2012).
From the user’s perspective, embedding is entirely passive: the embedding website directs the user’s own browser to the Instagram account and the Instagram content appears as part of the embedding website’s content. The embedding website appears to the user to have included the copyrighted material in its content. In reality, the embedding website has directed the reader’s browser to retrieve the public Instagram account and juxtapose it on the embedding website. Showing the Instagram content is almost instantaneous.
Importantly, the embedding website does not store a copy of the underlying image. Rather, embedding allows multiple websites to incorporate content stored on a single server simultaneously. The host server can control whether embedding is available to other websites and what image appears at a specific address. The host server can also delete or replace the image. For example, the National Park Service could replace the picture of Joshua Tree at that address with a picture of Canyonlands National Park. So long as the HTML instructions from the third-party site instruct the browser to retrieve the image located at a specific address, the browser will retrieve whatever the host server supplies at that location.
As the 9th Circuit notes, under the Perfect 10 rulings, the court has (rightly!) recognized that the Copyright Act’s “fixation” requirement means that the content in question has to actually be stored in a computer’s memory to be infringing, and embedding and other “in-line linking” don’t do that.
The court first rejects the argument that Perfect 10’s so-called “server test” only applies to search engines. As it notes, there’s no rationale for such a limitation:
Perfect 10 did not restrict the application of the Server Test to a specific type of website, such as search engines. To be sure, in Perfect 10, we considered the technical specifications of Google Image Search, including Google’s ability to index third-party websites in its search results. Perfect 10, 508 F.3d at 1155. We also noted Google’s reliance on an automated process for searching vast amounts of data: to create such a search engine, Google “automatically accesses thousands of websites . . . and indexes them within a database” and “Google’s computer program selects the advertising automatically by means of an algorithm.” Id. at 1155–56. But in articulating the Server Test, we did not rely on the unique context of a search engine. Our holding relied on the “plain language” of the Copyright Act and our own precedent describing when a copy is “fixed” in a tangible medium of expression. Id. (citing 17 U.S.C. § 101). We looked to MAI Sys. Corp. v. Peak Computer, Inc., for the conclusion that a digital image is “fixed” when it is stored in a server, hard disk, or other storage device. 991 F.2d 511, 517–18 (9th Cir. 1993). Applying this fixation requirement to the internet infrastructure, we concluded that in the embedding context, a website must store the image on its own server to directly infringe the public display right.
Then there’s the question of whether or not Aereo’s “looks like a duck” test at the Supreme Court effectively overruled the 9th Circuit’s server test. Thankfully, the 9th Circuit says it did not. The reasoning here is a bit complex (perhaps overly so), but basically the 9th Circuit says that the Perfect 10 “server test” applies to the display right under copyright, whereas the Aereo test applies to the transmission of content, or the public performance right. It’s true that these are different rights, but really all this should serve to do is reinforce just how wrong (and stupid) the Aereo ruling was. But, alas:
The difference between these two rights is significant in this case. Perfect 10 and Aereo deal with separate provisions of the Copyright Act—Perfect 10 addressed the public display right, and Aereo concerned the public performance right. In Perfect 10, we analyzed what it meant to publicly display a copy in the electronic context. See Perfect 10, 508 F.3d at 1161. By contrast, in Aereo the Court did not address what it means to transmit a copy, because the public performance right has no such requirement. See Aereo, 573 U.S. at 439–44. In other words, regardless of what Aereo said about retransmission of licensed works, Perfect 10 still forecloses liability to Hunley because it answered a predicate question: whether embedding constitutes “display” of a “copy.” Perfect 10, 508 F.3d at 1160. Aereo may have clarified who is liable for retransmitting or providing equipment to facilitate access to a display—but unless an underlying “copy” of the work is being transmitted, there is no direct infringement of the exclusive display right. Thus, Perfect 10 forecloses Hunley’s claims, even in light of Aereo.
This is correct in paying attention to who is actually making copies of the underlying work, but it still highlights just how broken the Aereo ruling really was.
Either way, the 9th Circuit adds one more point here, which is that to violate copyright law, you have to show “volitional conduct,” and that can’t be done here:
There is an additional reason we cannot find liability for Instagram here. We held, prior to Aereo, that infringement under the Copyright Act requires proof of volitional conduct, the Copyright Act’s version of proximate cause. See Fox Broad. Co., Inc. v. Dish Network LLC, 747 F.3d 1060, 1067 (9th Cir. 2013); Kelly v. Arriba Soft Corp., 336 F.3d 811, 817 (9th Cir. 2003) (“To establish a claim of copyright infringement by reproduction, the plaintiff must show . . . copying by the defendant.”). And we are not alone, indeed, “every circuit to address this issue has adopted some version of . . . the volitional-conduct requirement.” BWP Media USA, Inc. v. T&S Software Assocs., Inc., 852 F.3d 436, 440 (5th Cir. 2017) (citing cases). The Court in Aereo did not address volitional conduct as such, although Justice Scalia did so in his dissent. See Aereo, 573 U.S. at 453 (Scalia, J., dissenting). But the Court did distinguish between those who engage in activities and may be said to “perform” and those who engage in passive activities such as “merely suppl[ying] equipment that allows others to do so.” Id. at 438–39. In any event, Perfect 10 was bound to apply our volitional-conduct analysis. When we applied our requirement that the infringer be the direct cause of the infringement, we concluded that the entity providing access to infringing content did not directly infringe, but the websites who copied and displayed the content did. Perfect 10, 508 F.3d at 1160.
Post-Aereo, we have continued to require proof of “causation [as] an element of a direct infringement claim.” Giganews, 847 F.3d at 666. In such cases we have taken account of Aereo and concluded that our volitional conduct requirement is “consistent with the Aereo majority opinion,” and thus remains “intact” in this circuit. Id. at 667; see Bell v. Wilmott Storage Servs., LLC, 12 F.4th 1065, 1081–82 (9th Cir. 2021); Oracle Am., Inc. v. Hewlett Packard Enter. Co., 971 F.3d 1042, 1053 (9th Cir. 2020); VHT, Inc. v. Zillow Grp., Inc., 918 F.3d 723, 731 (9th Cir. 2019). Our volitional conduct requirement draws a distinction between direct and secondary infringement that would likely foreclose direct liability for third-party embedders. And without direct infringement, Hunley’s secondary liability theories all fail. See Oracle Am., Inc., 971 F.3d at 1050.
So, even if Aereo overruled Perfect 10 (which it did not), this case was a loser.
Interestingly, the 9th Circuit also seems to throw some shade on the attempt by the plaintiffs in this case to try to stretch the “looks like a duck” test to apply here, and thankfully, the 9th Circuit basically says “don’t read too much into that test.”
We are reluctant to read too much into this passage. The Court commented on user perception to point out the similarities between Aereo and traditional cable companies. These similarities mattered because the 1976 Copyright Amendments specifically targeted cable broadcasts. See Aereo, 573 U.S. at 433. But the Court did not rely on user perception alone to determine whether Aereo performed. See id. The Court has not converted user perception into a separate and independent rule of decision.
While this again suggests that the 9th Circuit realizes the Aereo ruling is problematic, it also provides more examples of why, even with Aereo in place, the test does not apply to “perception” in other contexts.
There is a weird bit at the end of the ruling, responding to the plaintiffs’ (ridiculous) claim that the server test undermines the policy purpose of copyright law (it does not; it upholds it…), by suggesting that the plaintiffs apply for en banc review or petition the Supreme Court to review the issue. That… very well might happen, and would (yet again) put another important piece of the open web on trial.
Hunley, Instagram, and their amici have peppered us with policy reasons to uphold or overturn the Server Test. Their concerns are serious and well argued. Hunley argues that the Server Test allows embedders to circumvent the rights of copyright holders. Amici for Hunley argue that the Server Test is a bad policy judgment because it destroys the licensing market for photographers. On the other hand, amici for Instagram argue that embedding is a necessary part of the open internet that promotes innovation. As citizens and internet users, we too are concerned with the various tensions in the law and the implications of our decisions, but we are not the policymakers.
If Hunley disagrees with our legal interpretation—either because our reading of Perfect 10 is wrong or because Perfect 10 itself was wrongly decided—Hunley can petition for en banc review to correct our mistakes. But we have no right “to judge the validity of those [] claims or to foresee the path of future technological development.” Aereo, 573 U.S. at 463 (Scalia, J., dissenting). Most obviously, Hunley can seek further review in the Supreme Court or legislative clarification in Congress.
In other words, while this is a good ruling, there’s a good chance this issue is far from settled.
The California legislature is competing with states like Florida and Texas to see who can pass laws that will be more devastating to the Internet. California’s latest entry into this Internet death-spiral is the California Journalism Protection Act (CJPA, AB 886). CJPA has passed the California Assembly and is pending in the California Senate.
The CJPA engages with a critical problem in our society: how to ensure the production of socially valuable journalism in the face of the Internet’s changes to journalists’ business models? The bill declares, and I agree, that a “free and diverse fourth estate was critical in the founding of our democracy and continues to be the lifeblood for a functioning democracy…. Quality local journalism is key to sustaining civic society, strengthening communal ties, and providing information at a deeper level that national outlets cannot match.” Given these stakes, politicians should prioritize developing good-faith and well-researched ways to facilitate and support journalism. The CJPA is none of that.
Instead, the CJPA takes an asinine, ineffective, unconstitutional, and industry-captured approach to this critical topic. The CJPA isn’t a referendum on the importance of journalism; instead, it’s a test of our legislators’ skills at problem-solving, drafting, and helping constituents. Sadly, the California Assembly failed that test.
Overview of the Bill
The CJPA would make some Big Tech services pay journalists for using snippets of their content and providing links to the journalists’ websites. This policy approach is sometimes called a “link tax,” but that’s a misnomer. Tax dollars go to the government, which can then allocate the money to (in theory) advance the public good—such as funding journalism.
The CJPA bypasses the government’s intermediation and supervision of these cash flows. Instead, it pursues a policy worse than socialism. CJPA would compel some bigger online publishers (called “covered platforms” in the bill) to transfer some of their wealth directly to other publishers—intended to be journalistic operations, but most of the dollars will go to vulture capitalists’ stockholders and MAGA-clickbait outlets like Breitbart.
In an effort to justify this compelled wealth transfer, the bill manufactures a new intellectual property right—sometimes called an “ancillary copyright for press publishers”—in snippets and links and then requires the platforms to pay royalties (euphemistically called “journalism usage fee payments”) for the “privilege” of publishing ancillary-copyrighted material. The platforms aren’t allowed to reject or hide DJPs’ content, so they must show the content to their audiences and pay royalties even if they don’t want to.
The wealth-transfer recipients are called “digital journalism providers” (DJPs). The bill contemplates that the royalty amounts will be set by an “arbitrator” who will apply baseball-style “arbitration,” i.e., the valuation expert picks one of the parties’ proposals. “Arbitrator” is another misnomer; the so-called arbitrators are just setting valuations.
DJPs must spend 70% of their royalty payouts on “news journalists and support staff,” but that money won’t necessarily fund NEW INCREMENTAL journalism. The bill explicitly permits the money to be spent on administrative overhead instead of actual journalism. With the influx of new cash, DJPs can divert their current spending on journalists and overhead into the owners’ pockets. Recall how the COVID stimulus programs directly led to massive stock buybacks that put the government’s cash into the hands of already-wealthy stockholders—same thing here. Worse, journalist operations may become dependent on the platforms’ royalties, which could dry up with little warning (e.g., a platform could drop below CJPA’s statutory threshold). We should encourage journalists to build sustainable business models. CJPA does the opposite.
Detailed Analysis of the Bill Text
Who is a Digital Journalism Provider (DJP)?
A print publisher qualifies as a DJP if it:
“provide[s] information to an audience in the state.” Is a single reader in California an “audience”? By mandating royalty payouts despite limited ties to California, the bill ensures that many/most DJPs will not be California-based or have any interest in California-focused journalism.
“performs a public information function comparable to that traditionally served by newspapers and other periodical news publications.” What publications don’t serve that function?
“engages professionals to create, edit, produce, and distribute original content concerning local, regional, national, or international matters of public interest through activities, including conducting interviews, observing current events, analyzing documents and other information, or fact checking through multiple firsthand or secondhand news sources.” This is an attempt to define “journalists,” but what publications don’t “observe current events” or “analyze documents or other information”?
updates its content at least weekly.
has “an editorial process for error correction and clarification, including a transparent process for reporting errors or complaints to the publication.”
has:
$100k in annual revenue “from its editorial content,” or
an ISSN (good news for me; my blog ISSN is 2833-745X), or
is a non-profit organization
25%+ of content is about “topics of current local, regional, national, or international public interest.” Again, what publications don’t do this?
is not foreign-owned, terrorist-owned, etc.
If my blog qualifies as an eligible DJP, the definition of DJPs is surely over-inclusive.
Broadcasters qualify as DJPs if they:
have the specified FCC license,
engage journalists (like the factor above),
update content at least weekly, and
have error correction processes (like the factor above).
Who is a Covered Platform?
A service is a covered platform if it:
Acquires, indexes, or crawls DJP content,
“Aggregates, displays, provides, distributes, or directs users” to that content, and
Either
Has 50M+ US-based MAUs or subscribers, or
Its owner has (1) net annual sales or a market cap of $550B+ OR (2) 1B+ worldwide MAUs.
(For more details about the problems created by using MAUs/subscribers and revenues/market cap to measure size, see this article).
How is the “Journalism Usage Fee”/Ancillary Copyright Royalty Computed?
The CJPA creates a royalty pool of the “revenue generated through the sale of digital advertising impressions that are served to customers in the state through an online platform.” I didn’t understand the “impressions” reference. Publishers can charge for advertising in many ways, including ad impressions (CPM), clicks, actions, fixed fee, etc. Does the definition only include CPM-based revenue? Or all ad revenue, even if impressions aren’t used as a payment metric? There’s also the standard problem of apportioning ad revenue to “California.” Some readers’ locations won’t be determinable or will be wrong; and it may not be possible to disaggregate non-CPM payments by state.
Each platform’s royalty pool is reduced by a flat percentage, nominally to convert ad revenues from gross to net. This percentage is determined by a valuation-setting “arbitration” every 2 years (unless the parties reach an agreement). The valuation-setting process is confusing because it contemplates that all DJPs will coordinate their participation in a single “arbitration” per platform, but the bill doesn’t provide any mechanisms for that coordination. As a result, it appears that DJPs can independently band together and initiate their own customized “arbitration,” which could multiply the proceedings and possibly reach inconsistent results.
The bill tells the valuation-setter to:
Ignore any value conferred by the platform to the DJPs due to the traffic referrals, “unless the covered platform does not automatically access and extract information.” This latter exclusion is weird. For example, if a user posts a link to a third-party service, the platform could argue that this confers value to the DJP only if the platform doesn’t show an automated preview.
Note: In a typical open-market transaction, the parties always consider the value they confer on each other when setting the price. By unbalancing those considerations, the CJPA guarantees the royalties will overcompensate DJPs.
“Consider past incremental revenue contributions as a guide to the future incremental revenue contribution” by each DJP. No idea what this means.
Consider “comparable commercial agreements between parties granting access to digital content…[including] any material disparities in negotiating power between the parties to those commercial agreements.” I assume the analogous agreements will come from music licensing?
Each DJP is entitled to a percentage, called the “allocation share,” of the “net” royalty pool. It’s computed using this formula: (the number of pages linking to, containing, or displaying the DJP’s content to Californians) / (the total number of pages linking to, containing, or displaying any DJP’s content to Californians). Putting aside the problems with determining which readers are from California, this formula ignores that a single page may have content from multiple DJPs. Accordingly, the allocation share percentages can cumulatively add up to more than 100% of the net royalty pool calculated by the valuation-setters. In other words, the formula ensures the unprofitability of publishing DJP content. For-profit companies typically exit unprofitable lines of business.
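A toy calculation makes the overcounting concrete. The page counts below are invented; the formula is the bill’s as described above.

```python
# Toy illustration of the CJPA allocation-share formula.
# Page counts are hypothetical; the formula is the one described above.

# Which DJPs' content appears on each of 4 hypothetical pages.
pages = [
    {"DJP-A"},
    {"DJP-A", "DJP-B"},   # one page can carry content from multiple DJPs
    {"DJP-B"},
    {"DJP-C"},
]

total_pages_with_djp_content = len(pages)  # denominator: 4

for djp in ("DJP-A", "DJP-B", "DJP-C"):
    share = sum(1 for p in pages if djp in p) / total_pages_with_djp_content
    print(f"{djp}: {share:.0%}")

# DJP-A: 50%, DJP-B: 50%, DJP-C: 25% -- a total of 125% of the pool,
# because the page shared by A and B is counted in both numerators.
```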
Elimination of Platforms’ Editorial Discretion
The CJPA has an anti-“retaliation” clause that nominally prevents platforms from reducing their financial exposure:
(a) A covered platform shall not retaliate against an eligible digital journalism provider for asserting its rights under this title by refusing to index content or changing the ranking, identification, modification, branding, or placement of the content of the eligible digital journalism provider on the covered platform.
(b) An eligible digital journalism provider that is retaliated against may bring a civil action against the covered platform.
(c) This section does not prohibit a covered platform from, and does not impose liability on a covered platform for, enforcing its terms of service against an eligible journalism provider.
This provision functions as a must-carry mandate. It forces platforms to carry content they don’t want to carry and don’t think is appropriate for their audience—at peril of being sued for retaliation. In other words, any editorial decision that is adverse to any DJP creates a non-trivial risk of a lawsuit alleging that the decision was retaliatory. It doesn’t really change the calculus if the platform might ultimately prevail in the lawsuit; the costs and risks of being sued are enough to prospectively distort the platform’s decision-making.
[Note: section (c) doesn’t negate this issue at all. It simply converts a litigation battle over retaliation into a battle over whether the DJP violated the TOS. Platforms could try to eliminate the anti-retaliation provision by drafting TOS provisions broad enough to provide them with total editorial flexibility. However, courts might consider such broad drafting efforts to be bad faith non-compliance with the bill. Further, unhappy DJPs will still claim that broad TOS provisions were selectively enforced against them due to the platform’s retaliatory intent, so even tricky TOS drafting won’t eliminate the litigation risk.]
Thus, CJPA rigs the rules in favor of DJPs. The financial exposure from the anti-retaliation provision, plus the platform’s reduced ability to cater to the needs of its audience, further incentivizes platforms to drop all DJP content entirely or otherwise substantially reconfigure their offerings.
Limitations on DJP Royalty Spending
DJPs must spend 70% of the royalties on “news journalists and support staff.” Support staff includes “payroll, human resources, fundraising and grant support, advertising and sales, community events and partnerships, technical support, sanitation, and security.” This indicates that a DJP could spend the CJPA royalties on administrative overhead, spend a nominal amount on new “journalism,” and divert all other revenue to its capital owners. The CJPA doesn’t ensure any new investments in journalism or discourage looting of journalist organizations. Yet, I thought supporting journalism was CJPA’s raison d’être.
Why CJPA Won’t Survive Court Challenges
If passed, the CJPA will surely be subject to legal challenges, including:
Restrictions on Editorial Freedom. The CJPA mandates that the covered platforms must publish content they don’t want to publish—even anti-vax misinformation, election denialism, clickbait, shill content, and other forms of pernicious or junk content.
Florida and Texas recently imposed similar must-carry obligations in their social media censorship laws. The Florida social media censorship law specifically restricted platforms’ ability to remove journalist content. The 11th Circuit held that the provision triggered strict scrutiny because it was content-based. The court then said the journalism-protection clause failed strict scrutiny—and would have failed even lower levels of scrutiny because “the State has no substantial (or even legitimate) interest in restricting platforms’ speech… to ‘enhance the relative voice’ of… journalistic enterprises.” The court also questioned the tailoring fit. I think CJPA raises the same concerns. For more on this topic, see Ashutosh A. Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2 J. Free Speech L. 127 (2022).
Note: the Florida bill required platforms to carry the journalism content for free. CJPA would require platforms to pay for the “privilege” of being forced to carry journalism content, wanted or not. CJPA’s skewed economics denigrate editorial freedom even more grossly than Florida’s law.
Copyright Preemption. The CJPA creates copyright-like protection for snippets and links. Per 17 USC 301 (the copyright preemption clause), only Congress has the power to provide copyright-like protection for works, including works that do not contain sufficient creativity to qualify as an original work of authorship. Content snippets and links individually aren’t original works of authorship, so they do not qualify for copyright protection at either the federal or state level; and any compilation copyright is within federal copyright’s scope and therefore is also off-limits to state protection.
The CJPA governs the reproduction, distribution, and display of snippets and links—the same activities federal copyright law governs in 17 USC 106. CJPA’s provisions thus overlap with Section 106’s scope, and the works fall within the subject matter of federal copyright law. That is precisely the combination federal copyright preemption forbids.
Section 230. Most or all of the snippets/links governed by the CJPA will constitute third-party content, including search results containing third-party content and user-submitted links where the platform automatically fetches a preview from the DJP’s website. Thus, CJPA runs afoul of Section 230 in two ways. First, it treats the covered platforms as the “publishers or speakers” of those snippets and links for purposes of the allocation share. Second, the anti-retaliation claim imposes liability for removing/downgrading third-party content, which courts have repeatedly said is covered by Section 230 (in addition to the First Amendment).
DCC. I believe the Dormant Commerce Clause should always apply to state regulation of the Internet. In this case, the law repeatedly contemplates platforms determining which users are inside California’s virtual borders, a determination that will always have an error rate that cannot be eliminated. Those errors guarantee that the law reaches activity outside of California.
Takings. I’m not a takings expert, but a government-compelled wealth transfer from one private party to another sounds like the kind of thing our country’s founders would have wanted to revolt against.
Conclusion
Other countries have attempted “link taxes” like CJPA. I’m not aware of any proof that those laws have accomplished their goal of enhancing local journalism. Knowing the track record of global futility, why do the bill’s supporters think CJPA will achieve better results? Because of their blind faith that the bill will work exactly as they anticipate? Their hatred of Big Tech? Their desire to support journalism, even if it requires using illegitimate means?
Our country absolutely needs a robust and well-functioning journalism industry. Instead of making progress towards that vital goal, we’re wasting our time futzing with crap like CJPA.
Originally posted to Eric Goldman’s Technology & Marketing Law Blog, reposted here with permission, and (thankfully, for the time being) without having to pay Eric to link back to his original even though he qualifies as a “DJP” under this law.
We’ve written a few times about California’s “Journalism Protection Act” (CJPA) from state Assemblymember Buffy Wicks, and many times about the terrible concept of such link taxes. Unfortunately, it looks like California’s bill is moving forward, with buy-in from the big media orgs and their journalists that will get the free payoffs from such an unconstitutional link tax.
In response, Meta has now announced (as it has done elsewhere) that if California passes the CJPA it will simply stop allowing links to news media in California. From a statement posted on Twitter by Meta’s Comms boss Andy Stone:
If the Journalism Preservation Act passes, we will be forced to remove news from Facebook and Instagram rather than pay into a slush fund that primarily benefits big, out-of-state media companies under the guise of aiding California publishers. The bill fails to recognize that publishers and broadcasters put their content on our platform themselves and that substantial consolidation in California’s local news industry came over 15 years ago, well before Facebook was widely used. It is disappointing that California lawmakers appear to be prioritizing the best interests of national and international media companies over their own constituents.
Obviously, that statement is a bit self-serving, but this is the only reasonable response to this nonsense (other than to sue to have the law found unconstitutional).
Again, as we’ve detailed many times before, the impact of a tax on an activity is that you get less of that activity. Indeed, that’s often the reason given for taxing certain things. So no one should be surprised that if you tax links, the companies that have to pay will decrease the links they allow. And in the case of news, which has provided little actual value to companies like Meta (which have long focused on family/friend connections over media), a quick cost/benefit analysis is likely to conclude that it’s just not worth it, and therefore to ban links to news sites.
Still, I’m a bit confused by the reaction to this. As happened in Australia, people are attacking Meta over this, which shows an astounding level of entitlement. It’s literally saying (1) you have to allow yourself to be used to promote our news and send traffic to us, AND (2) you have to pay us for letting us use your platform for promotion and traffic. It’s only reasonable for a website to say “uh, no.” To then attack companies for recognizing what a terrible deal that is… is strange.
I’ve even seen some people call it “censorship” by Meta, which makes no sense at all. Here’s Buffy Wicks, the sponsor of the bill, claiming that this is Meta trying to “silence journalists.” I mean, come on.
Will Buffy Wicks let me post Techdirt articles to her website? Or to her Facebook page? No? Why is she silencing me!? Look, a private company declining to do the thing you’re about to force it to pay for—a thing that provides little value to it and should be free in the first place—and saying “that’s not worth it” is so far from silencing people that it calls into question why anyone should take Wicks seriously on anything.
This is pretty straightforward economics: California is trying to tax something that is free and always should be free (the ability to link). They’re doing it as a favor to news orgs and as a smack down on companies they have made it clear they dislike (Google and Meta). But, if you’re going to force companies to pay for something that is free, don’t be surprised when they do the math and realize it’s not worth it.
Policing copyright infringement is hard. Hard for those operating websites that allow the public to input content, and hard for the rightsholders that far too often use automated systems that suck out loud at determining what is actually infringing and what isn’t. This is a lesson currently being learned by the European Union, as its website is being bombarded with DMCA notices that are getting parts of the site delisted from Google’s search results.
The European Union recognizes that online piracy poses a serious threat to copyright holders and the public at large. In recent years, Europe has updated legislation to deal with modern piracy threats. This includes a requirement for large platforms to deter repeat copyright infringers.
Copyright holders have sent hundreds of DMCA notices flagging alleged copyright infringements on Europa.eu, the official website of the European Commission. The EU seems unable to deal with a recurring piracy spam problem on its own portal, up to the point that Google has begun removing Europa.eu search results.
I actually reversed the order of those paragraphs from the TorrentFreak post I’m quoting. Why? Because it highlights a problem that not nearly enough people recognize: regulations putting the burden of policing copyright infringement on otherwise innocent websites is a recipe for absolute disaster, as the EU is now learning with respect to its own website.
So what’s actually happening here? Is the EU website actually serving as a repository for infringing material? Nope!
Over the past few months, we have documented how scammers are exploiting weaknesses in various Europa.eu portals including, most recently, the European School Education Platform. These scams exploit public upload tools to share .pdf files, which in turn advertise pirated versions of the latest blockbusters. People who fall for these scams are in for a huge disappointment. Instead of gaining access to pirated movies, they are redirected to shady sites that often promise ‘free’ content in exchange for the visitor’s credit card details.
Meaning that in these cases the EU site isn’t hosting any actual infringing material, but is instead getting inundated with documents advertising links to other, supposedly/likely infringing websites. The webmaster for the site has said they’re aware of the problem and are trying to solve it, but the only real solution is almost certainly to shut down the kind of document submissions that are allowing this to occur in the first place.
The lack of actually infringing material isn’t keeping the DMCA notices at bay, however.
In several instances, the European Commission isn’t able to spot the problematic uploads. For example, a .pdf advertising a pirated copy of the film “The Last Manhunt” remains online today, more than two weeks after it first appeared. Following a DMCA notice, Google decided to remove the link from its search results.
In other cases, the Commission spots the scammy ads and removes them. When that happens, Google typically takes no action. According to Google’s records, the company has removed roughly two dozen Europa.eu URLs from its search results thus far.
Turns out this is all very hard to police. Something the EU should keep in mind when crafting new piracy policies that demand even more policing from the websites within its jurisdiction.
Look, I fucking warned Elon that this is exactly how it would go. It’s how it always goes.
Remember Parler? They promised that they would moderate “based off the FCC and the Supreme court of the United States” (a nonsensical statement for a variety of reasons, including that the FCC does not regulate websites). Then, as soon as people started abusing that on the site, they suddenly came out with, um, new rules, including no “posting pictures of your fecal matter.”
Or how about Gettr? Founded by a former Trump spokesperson, and funded by a sketchy Chinese billionaire, it promised to be a “free speech” haven. Then it had to ban a bunch of white nationalists for, you know, doing white nationalist shit. Then, suddenly, it started banning anyone who mentioned that the sketchy billionaire funder might actually be a Chinese spy.
And then there’s Truth Social. It’s also supposed to be all about free speech, right? That’s what its pitch man, Donald Trump, keeps insisting. Except, an actual study that compared its content moderation to other sites found that Truth Social’s moderation was far more aggressive and arbitrary than any other site. Among the forbidden things to “truth” about on Truth Social? Any talk of the Congressional hearings on January 6th. Much freedom. Very speech.
So, look, it’s no surprise that Musk was never actually going to be able to live up to his notoriously fickle word regarding “free speech” on Twitter. I mean, we wrote many, many articles highlighting all of this.
But, really, it would be nice if he didn’t then insult everyone’s intelligence about this and pretend that he’s still taking some principled righteous stand. It would be nice if he admitted that “oh shit, maybe content moderation is trickier than I thought” and maybe, just maybe, “Twitter actually had a really strong and thoughtful trust & safety team that actually worked extremely hard to be as permissive as possible, while still maintaining a website that users and advertisers liked.” But that would require an actual ability to look inward and recognize mistakes, which is not one of Elon’s strong suits.
“Without commenting on any specific user accounts, I can confirm that we will suspend any accounts that violate our privacy policies and put other users at risk,” Irwin said. “We don’t make exceptions to this policy for journalists or any other accounts.”
Yeah… that’s not what people are complaining about. They weren’t saying journalists should get special treatment for breaking the rules. They’re asking how the fuck did what these journalists posted break the rules?
Eventually Musk jumped on Twitter, of course, and like Irwin, tried to pretend that they were just making sure the rules applied equally to journalists as to everyone else. Except… that was always the case? The issue was that yesterday, they created new laughably stupid rules to ban an account tweeting publicly available information regarding Elon Musk’s jet. Then Musk took it further and claimed that this (again) publicly available information was “assassination coordinates.”
Well, except for a few minor details. First, he just fucking changed the terms of service to shut down the jet tracker, and made them so broad and vague that tons of tweets would violate the rule — including anyone using Twitter’s built-in location indicator to tweet a photo of someone else. Second, the location of his plane is public information. It’s not “assassination coordinates.” If Musk is worried about getting assassinated, hiding this account isn’t going to help, because the assassin will just go straight to the ADS-B source and get the data anyway. Third, I get that Musk claims his child was in a car that was attacked the other night, but there remain some open questions about that story. For example, the location where it occurred, as deduced by BellingCat, was not close to any airport.
Given that, it’s not at all clear how this is connected to the jet tracking service.
Furthermore, the LAPD put out a statement on this:
LAPD’s Threat Management Unit (TMU) is aware of the situation and tweet by Elon Musk and is in contact with his representatives and security team. No crime reports have been filed yet.
Which, you know, seems notable. Because if a stalker actually went after him, you’d think that rather than just posting about it on social media, he might contact the police?
But, most importantly, none of the journalists in question actually posted “real time” assassination coordinates for Musk. They had posted about this whole story having to do with content moderation decisions made by Musk. Hell, one of the journalists, Donie O’Sullivan, got banned for tweeting that LAPD statement.
So, yeah, it’s not about “equal treatment” for journalists. It’s about coming up with bullshit arbitrary rules that just so happen to ban the journalists who have been calling out all the dumb shit Elon has been doing. Which, you know, was the kinda thing Elon insisted was the big problem under the last regime, and insisted he was brought in to solve.
From there it got even worse. A bunch of journalists, including a few of those who were banned (who, for unclear reasons were still able to log into Twitter Spaces, the real-time audio chat feature of Twitter) began discussing all of this, and Elon Musk showed up to… well… not quite defend himself? But, uh, to do whatever this was:
It starts with (banned) Washington Post journalist Drew Harwell asking a pretty good journalistic question:
One, I don’t think anyone in this room supports stalking. I’m sorry to hear about what happened with your family. Do you have evidence connecting the incident in LA with this flight tracking data? And separately, if this is an important enough issue to you, why not enact the rule change on Twitter and give accounts like Jack Sweeney’s time to respond to, like you said, a slight delay in providing the data? Why say last month that you would support keeping his account online for free speech and then immediately suspend not just his account, but journalists reporting on it?
Unfortunately, before Elon could say anything, another reporter, Katie Notopoulos from Buzzfeed (who started the Twitter Space) jumped in with, perhaps, a less well composed question (this isn’t criticism — coming up with questions on the spot is difficult — but I do wonder what would have happened if Musk had been allowed to respond directly to Drew’s question).
Elon, thank you for joining, I am hoping that you can give a little more context about what has happened in the last few hours with a handful of journalists being banned?
Elon then says a lot of nonsense, basically just that “doxing is bad and anyone who has been threatened should agree with this policy.”
Well, as I’m sure everyone who’s been doxed would agree, showing real-time information about somebody’s location is inappropriate. And I think everyone would not like that to be done to them. And there’s not going to be any distinction in the future between so-called journalists and regular people. Everyone is going to be treated the same—no special treatment. You dox, you get suspended. End of story.
And ban evasion or trying to be clever about it, like “Oh, I posted a link — to the real-time information,” that’s obviously something trying to evade the meaning, that’s no different from actually showing real-time information.
I mean, a lot of this is kind of infuriating. Because many of the bans that happened in the last regime, and which Musk got so mad about, were also about putting people in danger. And Musk seems singularly concerned only when he’s the target. Over the weekend, he posted some incredibly misleading bullshit about his former head of trust & safety, Yoel Roth, taking an old tweet and a clip from his dissertation and acting as if both said the literal opposite of what Roth was saying in them (in both cases, Yoel was actually highlighting issues regarding keeping children safe from predators, and Elon and legions of his fans pretended he was doing the opposite, which is just trash). Following that, a large news organization that I will not name posted a very clear description of Yoel’s home, and tweeted out a link with those details. That tweet is still on Twitter today, and Yoel and his family had to flee their home after receiving very credible threats.
Again, I repeat, the tweet that identified his home is still on Twitter today. And Elon has done nothing about it.
So spare me the claim that this is about “inappropriate” sharing of information. None of the information the journalists shared was inappropriate, and Musk himself has contributed to threats on people’s lives.
As for the whole ban evasion thing, well, that’s also nonsense, but there’s more. Notopoulos asked another question:
When you’re saying, ‘posting a link to it,’ I mean, some of the people like Drew and Ryan Mac from The New York Times, who were banned, they were reporting on it in the course of pretty normal journalistic endeavors. You consider that like a tricky attempted ban evasion?
To which Musk responded:
You show the link to the real-time information – ban evasion, obviously.
So, again, that’s not at all what “ban evasion” means. The ban was on the information. Not a link to an account. Or a reporter talking about an article that links to an account. Or a reporter talking about a police report that very loosely kinda connects to the account.
And, again, banning links to the media was exactly the thing I thought Musk and his fans were completely up in arms about when it came to the ban on the link to the NY Post story about Hunter Biden’s laptop. Remember? It was only like a week ago that Elon Musk and his handpicked reporters treated that ban, a link blocked over worries of harm, as a “huge reveal,” the crime of the century, and possibly treason. Drew Harwell, finally getting a chance to ask a question, got into a slightly awkward exchange in which the two seem to be talking about different things, with Drew making the point by comparing it to the NY Post situation:
Drew: You’re suggesting that we’re sharing your address, which is not true. I never posted your address.
Elon: You posted a link to the address.
Drew: In the course of reporting about ElonJet, we posted links to ElonJet, which are now banned on Twitter. Twitter also marks even the Instagram and Mastodon accounts of ElonJet as harmful, using, we have to acknowledge, the exact same link-blocking technique that you have criticized as part of the Hunter Biden-New York Post story in 2020. So what is different here?
Elon: It’s not more acceptable for you than it is for me. It’s the same thing.
Drew: So it’s unacceptable what you’re doing?
Elon: No. You dox, you get suspended. End of story. That’s it.
And with that “end of story” he left the chat abruptly, even as others started asking more questions.
So that whole exchange makes no sense. They’re clearly talking past each other, and Elon is so focused on “journalists doxing!” that he can’t even seem to comprehend what Drew is actually asking him there, which is how all of this compares to the NY Post situation.
And, of course, it also seems relevant to the January 6th/Donald Trump decision, which Musk has also roundly criticized. One of Musk’s buddies, Jason Calacanis, was also in the space defending Musk, and I only heard bits and pieces of it because (1) Twitter Spaces kept kicking me out and (2) before the Space ended, Twitter took all of Spaces offline, meaning that the recording isn’t available (Musk is claiming on Twitter that it’s a newly discovered bug, though tons of people are assuming, as people will do, that Musk pulled the plug to get the journalists to stop talking about him).
However, on Twitter, Calacanis tweeted what he insisted was a simple message:
It’s just so obvious to everyone: don’t dox or stalk anyone.
Someone will get hurt or worse.
💕Be good to each other💕
If you are splitting hairs on the definition of these words, or claiming it’s public information, you’re missing the basic human concept here: people’s safety.
But, again, this brings us right back around to the top of the story. “It’s just so obvious” is a traditional part of this content moderation learning curve. It always seems so obvious that, “sure, this speech is legal, but man, it seems so bad, we gotta take it down.” In this case, it’s “don’t stalk the billionaire CEO” (which, yeah, don’t do that shit).
But this is how content moderation works. There’s a reason the role is called “Trust & Safety”: you’re trying to weigh different tradeoffs to make things trustworthy and safe. But Musk hasn’t been doing that. He seems focused only on his own safety.
And Calacanis’s claim that people are “missing the basic human concept here: people’s safety” well… that brings me to January 6th and Twitter’s decision to ban Trump. Because, you know, as Twitter explained publicly at the time and was re-revealed recently in Musk’s “Twitter Files,” this was exactly the debate that went on inside Twitter among its executives and trust & safety bosses.
They looked at the riot at the Capitol where people literally died, and which the then President seemed reluctant to call off, realized that there was no guarantee he wouldn’t organize a follow up, decided that “people’s safety” mattered here, and made the hard call to ban Trump. To protect people’s safety.
Now, you can criticize that decision. You can offer alternative arguments for it. But there was a rationale for it, and it’s the exact same one Musk and his team are now using to justify these bans. Yet we’re not seeing the screaming and gnashing of teeth about how this is “against free speech” or whatever from Musk and his supporters. We’re not likely to see Musk have Matt Taibbi and Bari Weiss do a breathless exposé on his internal DMs while all this went down.
That’s what’s hypocritical here.
(And we won’t even get into Musk going back on his other promise that they wouldn’t do suspensions any more, just decreased “reach” for the “bad or negative” tweets).
Every website that hosts third-party content has to do moderation. Every one. It’s how it works. And every website has the right to moderate how it wants. That’s part of its editorial discretion.
Musk absolutely can make bad decisions. Just like the previous Twitter could (and did). But it would be nice if they fucking realized that they’re doing the same damn thing, but on a much flimsier basis, and backed by utter and complete nonsense.
I asked Calacanis about the “public safety” issue and the Trump decision on Twitter, and got… a strange response.
He replied:
I am a fan of using the blocking and mute tools for almost everything you don’t like at this joint.
Which, when you think about it, is a weird fucking response. After all, he was just going on and on about how it was righteous to ban a bunch of journalists because of “people’s safety.” But now these problems can be solved by muting and blocking? So either he thinks Musk should have just muted and blocked all these reporters… or… what? It also does not actually respond to the question.
And, once again, we’re back to the same damn thing with content moderation at scale. Every decision has tons of tradeoffs. People are always going to be upset. But there are principled ways of doing it, and non-principled ways of doing it. And Elon/Jason are showing their lack of principles. They’re only trying to protect themselves, and seem to feel everyone else should just use “mute” and “block.”
Oh, and finally….
This post went on way longer than I initially intended it to, but there is an important postscript here. Last night, when we wrote about the banning of the @JoinMastodon account on Twitter, I actually downplayed the idea that it was about Team Musk being scared of a rapidly growing competitor. I was pretty sure it was because of the link to the @ElonJet account that was now working on Mastodon. And, that’s certainly the excuse that Musk and friends are still giving.
Buuuuut… there are reasons to believe it’s a bit more than that. Because as the evening wore on, Twitter basically started banning all links to any Mastodon server they could find. A bunch of people started posting examples. Some screenshots:
Those were just a few of the many, many examples that can be found on both Twitter and Mastodon of Twitter effectively blocking any links to the higher-profile Mastodon servers (it appears that smaller or individual instances are still making it through).
Even more ridiculous, they’re banning people from updating their profiles with Mastodon addresses.
See that screenshot? It says “Account update failed: Description is considered malware.”
So, yeah, they’re now saying that if you put your Mastodon bio in your profile, it’s malware. Given that, it’s a little difficult to believe that this is all just about “public safety” regarding Elon stalkers, and not, perhaps, a little anti-competitive behavior on the part of an increasingly desperate Elon Musk.
We’ve been covering the Journalism Competition and Preservation Act (JCPA), which is a blatant handout by Congress in the form of a link tax that would require internet companies to pay news orgs (mainly the vulture capitalist orgs that have been buying up local newspapers around the country, firing most of the journalists, and living off of the legacy revenue streams) for… daring to send them traffic. We’ve gone over all the ways the bill is bad. We’ve gone over the fact that people in both the House and the Senate are (at this very moment) looking for ways to sneak it into law when no one’s looking. Indeed, there are reports that there will be an announcement tonight that it’s included as part of the National Defense Authorization Act (NDAA).
The whole thing stinks of corruption. Politicians often rely on local newspapers for endorsements to win re-election campaigns, so they want to keep local papers happy. And it’s the perfect kind of corrupt handout for Congress. It’s not even using “taxpayer” funds. It’s forcing other companies — the hated internet companies — to foot the bill.
Newspapers nationwide are running editorials today in favor of the Journalism Competition and Preservation Act, which passed a Senate committee with bipartisan support in September and has been waiting ever since for a floor vote.
Which… seems pretty sketchy when you think about it. The newspapers don’t seem likely to be running any editorials, or even op-eds, highlighting the problems and cronyism of the JCPA. Because, why would they? If it passes, it’s literally free cash for the companies.
What newspaper will run articles explaining how the JCPA won’t help journalists, but rather their private equity owners? What newspaper will run articles explaining how the JCPA fundamentally breaks the concept of the open internet where you can link anywhere you want for free? What newspaper will run op-eds explaining how the JCPA messes with copyright law in dangerous ways by implying a new right to demand a license for links or fair use snippets?
If “newspapers nationwide” are stumping for the JCPA in their editorial pages, then it seems safe to assume that they’re not open to anything highlighting the problems and dangers of the bill.
And that, alone, should cause people to worry. It’s showing how these news orgs are willing to forget about basic fairness in their coverage in order to stump for a corrupt handout for their owners. Shameful.
A CEO who answers to shareholders will make decisions that may seem poorly thought out, but ultimately benefit shareholders. A guy who thought running Twitter would allow him to better serve a base of “censored” conservatives and rabid fanboys tends to make decisions that valorize him as a free speech warrior while giving this questionable customer base something to rally around.
Who knows why this happened, but one can credibly imagine Twitter’s new cleaning crew has been ordered to give any Musk-related tweets more scrutiny. Musk doesn’t seem to enjoy criticism, which aligns him with pretty much everyone everywhere. But he also has the power to mute criticism, which aligns him with authoritarian regimes/billionaires who own social media platforms.
Twitter deemed a Mediaite article that is critical of the company’s new owner Elon Musk as “potentially spammy” on Friday night, and diverted users to a warning page when they click the post.
The warning had been removed as of Saturday morning after multiple media outlets – including this one – reported on it.
[…]
The post, which is marked “Opinion” and was initially only accessible to users who read through a daunting message and clicked, “Ignore this warning and continue,” is titled “What Elon Musk Is Doing Right at Twitter.”
The entirety of the post’s text reads, “Nothing.”
Succinct. Accurate. And, apparently, potentially dangerous to Twitter users. According to the notice sent to Mediaite, the one-word post violated a bunch of Twitter rules, despite none of the violations listed applying to the post’s content.
The message listed the following categories, none of which the blocked post appears to violate:
– malicious links that could steal personal information or harm electronic devices
– spammy links that mislead people or disrupt their experience
– violent or misleading content that could lead to real-world harm
– certain categories of content that, if posted directly on Twitter, are a violation of the Twitter Rules.
All the post said was that Musk was wrong. The most nefarious explanation is that Musk has elevated moderation of Musk/Twitter-related content and encourages remaining moderators to pull the trigger when the content appears to criticize Musk. The less nefarious explanation is that a bunch of Musk fans brigaded the link, reporting it en masse as “dangerous,” forcing moderators and/or moderation AI to succumb to the heckler’s veto. Of course, the other possible explanation is simple incompetence, supported by reports that others, such as NBC News, have had their links blocked as well.
Whatever the explanation, it’s a terrible look for the new boss, who has claimed Twitter is more about free speech than ever. And he’s also learning about the flipside of moderation fuck-ups: the Streisand effect. Any time something negative about Musk is buried (inadvertently or deliberately), it will attract even more attention from Twitter users. Musk can’t win.
And that’s the hard truth of content moderation: you’ll never make everyone happy. An even harder truth is that you can’t even make your preferred customer base happy. At best, you can only hope to make users less miserable by removing illegal content and putting procedures in place that give users the power to create the experience they want, rather than being subjected to the outer limits of whatever the platform allows.
To quote the far-more-idealistic Google from several years back: “Don’t be evil.” But evil is subjective. So, maybe the best you can do is not be worse than your predecessors. And if that’s the low bar Musk needs to reach, he still appears to be unwilling to attempt clearing it.