After demolishing the competition from 2020 through the first half of 2022, TikTok’s DAU growth rate has collapsed. In the fourth quarter of 2023, the video service lagged Snapchat, YouTube, Instagram, and Facebook. Yes, you read that right: The ancient big blue app grew faster than TikTok.
This reminds me of when Congress was super focused on regulating Facebook, even as it was rapidly shedding users.
That’s not to say that any real evidence of dangers associated with TikTok (and, again, none has yet been shown) should be ignored. But it is a reminder that the internet space remains incredibly dynamic, even as the media and politicians act as if what’s happening today will continue to be the way it is.
Social media sites come and go. They’re cool for kids until their parents get involved or until they go through the inevitable enshittification curve. There appear to be at least some signals that TikTok may have passed its prime and folks are starting to move on.
In short, the “problem” (if there is one) may solve itself through the simple fact that… TikTok might just not be all that cool anymore. Business Insider suggests that the original TikTok generation, who were teenagers when the app first became cool, may have since graduated and started having to live life and get a job and stuff, leaving less time for TikTok. Of course, that would ignore the fact that as younger kids age into being teens, they’re still likely to join. But, perhaps not at the same rate as before.
The young adults I spoke to have been on social media for a decade or more and didn’t question the impact it was having on them until recently. They started noticing that TikTok, in particular, got in the way of sleep, work, household chores and relationships. Some even say it has kept them from chasing their own creative dreams. They are now deleting the app in pursuit of more in-person experiences and tidier homes.
While that may be anecdotal, there is at least some data to back it up:
TikTok’s U.S. average monthly users between the ages of 18 and 24 declined by nearly 9% from 2022 to 2023, according to mobile analytics firm Data.ai.
In short, as it often does, Congress may be fighting (badly) the last battle, and not realizing that some of this stuff… takes care of itself.
You know that line, “every accusation is a confession”? For no reason at all, that’s coming to mind all of a sudden. No reason.
Anyway, a decade ago, Henry Farrell and Martha Finnemore wrote a fantastic piece for Foreign Affairs on “The End of Hypocrisy” (which we also wrote about here at Techdirt). They argued that, even as many people mock American hypocrisy around the world, America’s plausible claim to the moral high ground was an incredibly powerful and effective tool of soft power. And they showed how it was squandered with each revelation of just how little Americans respected the sovereignty of other nations, and how regularly we abused our access to internet backbones to spy on others.
The deeper threat that leakers such as Manning and Snowden pose is more subtle than a direct assault on U.S. national security: they undermine Washington’s ability to act hypocritically and get away with it. Their danger lies not in the new information that they reveal but in the documented confirmation they provide of what the United States is actually doing and why. When these deeds turn out to clash with the government’s public rhetoric, as they so often do, it becomes harder for U.S. allies to overlook Washington’s covert behavior and easier for U.S. adversaries to justify their own.
Two years into office, President Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation.
Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping’s government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.
I am also suddenly reminded of how the US government ran a big campaign for a few years insisting that no one should use Chinese networking equipment from companies like Huawei, despite the fact that a comprehensive White House report could find no evidence of nefarious behavior. Oh, and remember how some of the Ed Snowden docs revealed that the US government was actually installing secret backdoors in Cisco networking equipment to spy on people elsewhere?
Of course, there are a few different ways to look at this. One argument is that “well, we’re doing this, so we know that they must be too, and that justifies the US’s actions to try to cut them off.” And that would be maybe more compelling if there were more serious evidence that any of this actually works and that it doesn’t look absolutely ridiculous when it inevitably leaks out later.
The other way of looking at it is that the US comes off as a bunch of hypocrites who repeatedly squander whatever moral high ground they have on these arguments. As Farrell and Finnemore highlighted in that piece a decade ago, US foreign policy and the soft power it traditionally wielded relied heavily on (1) US politicians believing in the principles of freedom and openness we espoused, (2) our allies being able to back us up on those claims, and (3) our adversaries looking weak and pathetic in trying to go up against those principles.
But with each revelation of the US doing exactly what they accuse others of doing, all of that falls apart. US politicians making such claims look ever less sincere. Our allies can no longer continue to claim the moral high ground with a straight face. And our adversaries use our own stupid policies to justify their even worse ones.
I know (because I heard it all the time) that some people will say “but our adversaries don’t need any justification to do bad stuff.” That’s only true to some extent. Global pressure can be effective, but it’s harder to use that pressure legitimately when the US is doing something just as bad. In making it easier for our adversaries to justify their bad actions by pointing to similar activities by the US, it makes it even easier for them to go further, and to convince others to join them.
As that article noted towards the end, the solution should be that the US should act in a way that lives up to its rhetoric, rather than just being pathetically hypocritical.
A better alternative would be for Washington to pivot in the opposite direction, acting in ways more compatible with its rhetoric. This approach would also be costly and imperfect, for in international politics, ideals and interests will often clash. But the U.S. government can certainly afford to roll back some of its hypocritical behavior without compromising national security. A double standard on torture, a near indifference to casualties among non-American civilians, the gross expansion of the surveillance state — none of these is crucial to the country’s well-being, and in some cases, they undermine it.
The US’s attempts to use social media in China as a propaganda tool do not appear to have been very effective. The end result looks pretty silly and helps justify China doing very dangerous shit:
The covert propaganda campaign against Beijing could backfire, said Heer, the former CIA analyst. China could use evidence of a CIA influence program to bolster its decades-old accusations of shadowy Western subversion, helping Beijing “proselytize” in a developing world already deeply suspicious of Washington.
The message would be: “‘Look at the United States intervening in the internal affairs of other countries and rejecting the principles of peaceful coexistence,’” Heer said. “And there are places in the world where that is going to be a resonant message.”
But, coming at the same time that we’re looking to ban TikTok (or force its divestiture from a company based in China), maybe we should actually consider that suggestion from Farrell and Finnemore again. Maybe we should try to live up to our ideals. Maybe we should believe that if America is about freedom, and freedom is better than the authoritarian tyranny of China, we should be able to resist whatever social media propaganda campaign they could cook up.
Or do we think so little of Americans in general, that we think they won’t be able to resist the allure of this one social media app and its algorithm? If American freedom can’t resist an app of short videos, mostly used by kids, what kind of freedom is it really?
As you probably noticed, the House just passed the controversial ban on TikTok, with 352 Representatives in favor and 65 opposed. The bill is now likely to be slow-walked to the Senate, where its chance of passing is murky, but possible. Biden (who has been using the purportedly “dangerous national security threat” to campaign with) has stated he’ll sign the bill should it survive the trip.
The ban (technically a forced divestment, followed by a ban after ByteDance inevitably refuses to sell) passed through the House with more than a little help from Democrats.
Not talked about much in press coverage is the fact that the majority of constituents don’t actually support a ban (you know, the whole representative democracy thing). Support for a ban has been dropping for months, even among Republicans, and especially among the younger voters Democrats have already been struggling to connect with in the wake of the bloody shitshow in Gaza.
As the underlying Pew data makes clear, a lot of Americans aren’t sure what to think about the hysteria surrounding TikTok. And they’re not sure what to think, in part, because the collapsing U.S. tech press has done a largely abysmal job covering the story, either by parroting bad-faith politician claims about the proposal and the app, or by omitting key context.
The press has also been generally terrible at explaining to the public that the ban doesn’t actually do what it claims to do.
Banning TikTok, but refusing to pass a useful privacy law or regulate the data broker industry, is entirely decorative. The data broker industry routinely collects all manner of sensitive U.S. consumer location, demographic, and behavior data from a massive array of apps, telecom networks, services, vehicles, smart doorbells, and devices (many of them *gasp* built in China), then sells access to detailed data profiles to any nitwit with two nickels to rub together, including Chinese, Russian, and Iranian intelligence.
Often without securing or encrypting the data. And routinely under the false pretense that this is all ok because the underlying data has been “anonymized” (a completely meaningless term). The harm of this regulation-optional surveillance free-for-all has been obvious for decades, but has been made even more obvious post-Roe. Congress has chosen, time and time again, to ignore all of this.
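To see why “anonymized” is a meaningless term here, consider that a handful of repeated location points is effectively a fingerprint: almost nobody else shares both your home and your workplace. Here’s a toy sketch (all device IDs, names, and coordinates are hypothetical) of how trivially an “anonymous” device ID can be re-linked to a person:

```python
# Toy illustration only: "anonymized" location pings keyed by a random
# device ID, plus separately-obtainable knowledge of where known
# individuals live and work. The data is invented for the example.
anonymized_pings = {
    "device-7f3a": [(38.8898, -77.0091), (38.9072, -77.0369)],
    "device-c21b": [(40.7128, -74.0060), (40.7306, -73.9866)],
}

# Home and office coordinates for known people (public records,
# social media, or data purchased from yet another broker).
known_people = {
    "Alice": [(38.8898, -77.0091), (38.9072, -77.0369)],
    "Bob":   [(34.0522, -118.2437), (34.1478, -118.1445)],
}

def reidentify(pings, people, tolerance=0.001):
    """Match an 'anonymous' device to any person whose known locations
    all appear among that device's frequent pings."""
    matches = {}
    for device, spots in pings.items():
        for name, locs in people.items():
            if all(
                any(abs(lat - p_lat) < tolerance and abs(lon - p_lon) < tolerance
                    for (p_lat, p_lon) in spots)
                for (lat, lon) in locs
            ):
                matches[device] = name
    return matches

print(reidentify(anonymized_pings, known_people))  # → {'device-7f3a': 'Alice'}
```

Stripping names off the records does nothing when the records themselves uniquely identify a person, which is why researchers have repeatedly shown that a few location points re-identify the vast majority of people in “anonymized” datasets.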
Banning TikTok, but doing absolutely nothing about the broader regulatory capture and corruption that fostered TikTok’s (and every other company’s) disdain for privacy or consumer rights, isn’t actually fixing the problem. In fact, as Mike has noted, the ban creates entirely new problems, from potential constitutional free speech violations to its harmful impact on online academic research.
I’ve mentioned more than a few times that I think the ongoing quest to ban TikTok is mostly a flimsy attempt to transfer TikTok’s fat revenues to Microsoft, Google, Twitter, Oracle, or Facebook under the pretense of national security and privacy, two things our comically corrupt, do-nothing Congress has repeatedly demonstrated in vivid detail they don’t have any genuine interest in.
TikTok creators seem to understand this better than the gerontocracy or the U.S. tech press.
But if Congress were really serious about privacy, they’d pass a privacy law or regulate data brokers.
If Congress were serious about national security, they’d meaningfully fight corruption, and certainly wouldn’t support a multi-indictment-facing authoritarian NYC real estate con man with a fourth-grade reading level for fucking President.
So when Congress pops up to claim it’s taking aim at a single popular app because it’s suddenly super concerned about consumer privacy, propaganda, and national security, skeptics are right to steeply arch an eyebrow. You realize we can see your voting histories and policy priorities, right?
Xenophobia, Protectionism and Information Warfare
The GOP motivation for a TikTok ban has long been obvious: they believe TikTok’s growing ad revenues technically belong, by divine right, to white-owned U.S. companies. But the GOP also sees TikTok as an existential threat to their ever-evolving online propaganda efforts, which have become a strategic cornerstone of an increasingly extremist, authoritarian party whose policies are broadly unpopular.
The GOP is fine with rampant privacy abuses and propaganda — provided they’re the ones violating privacy or slinging political propaganda. You’ll recall Trump’s big original fix for the “TikTok problem” (before a right-wing investor in TikTok recently changed his mind, for now) was a cronyistic transfer of ownership of TikTok to his Republican friends at Walmart and Oracle.
Former Trump Treasury Secretary Steve Mnuchin and his Saudi-funded Liberty Strategic Capital are already hard at work putting investors together to buy the app. If the GOP (or a proxy) manages to buy TikTok, they’ll engage in every last abuse they’ve accused the Chinese government of. TikTok will be converted, like Twitter, into a right-wing surveillance and propaganda echoplex, where race-baiting authoritarian propaganda is not only unmoderated, but encouraged.
All under the pretense of “protecting free speech,” “antitrust reform,” or whatever latest flimsy pretense authoritarians are currently using to convince a gullible and lazy U.S. press that they’re operating in good faith.
Why Democrats would support any of this remains an open question. The ban would likely aid GOP propaganda efforts, piss off young voters, and advertise the party (which had actually been faster to embrace TikTok than the GOP) as woefully out of touch. All while not actually protecting consumer privacy or national security in any meaningful way. And creating entirely new problems.
National security, consumer privacy, or good faith worries about propaganda don’t enter into it.
Some Democratic Reps, like Ro Khanna, Alexandria Ocasio-Cortez, and Sara Jacobs, seem to understand the trap, keeping the focus on the need for a federal privacy law that reins in the privacy and surveillance abuses of all companies that do business in the U.S., foreign or domestic. Some senators, like Ron Wyden, have worked hard to ensure equal attention is paid to rampant data broker abuses.
But 155 House Democrats voted for the ban, either because they’re corrupt or because they have absolutely no idea how any of this actually works. Pissing off your constituents by ruining an app used by 150+ million (mostly young) Americans during an election season is certainly a choice, especially given negligible constituent support and growing evidence that it likely creates more problems than it professes to solve.
It’s potentially forgotten in all the other nonsense that has happened over the past four years, but the initial push to “ban TikTok” in the US started right after a bunch of TikTokers reserved fake seats for a rally that Trump’s campaign people thought was going to be mobbed, and which ended up being embarrassingly half empty.
Days later, the Trump administration suddenly announced that TikTok was a national security threat and it was going to get banned. Following that, there was a comedy of errors as the administration couldn’t figure out how to actually ban it, in part because banning a website is almost certainly unconstitutional. Eventually, a month later, Trump issued an executive order banning TikTok, and it didn’t take long for a court to say that Trump can’t actually do that, in part because of the lack of any real evidence of a security threat, and in part because of First Amendment concerns.
For what it’s worth, the same basic thing happened last year when the state of Montana also tried to ban TikTok only to have it tossed out on First Amendment grounds.
But still, as noted, Congress is really, really into banning TikTok this time around, despite the legal setbacks from the last few attempts. And it didn’t help much that last week TikTok made a ham-fisted attempt to have its users call members of Congress to complain.
So, it took some by surprise when Donald Trump came out and said that he no longer supports a TikTok ban because it would only work to help Facebook.
“Without TikTok, you can make Facebook bigger, and I consider Facebook to be an enemy of the people,” Trump, who was formerly U.S. president between 2017 and 2021, said in an interview Monday on CNBC’s “Squawk Box.”
He’s 100% correct. And remember, Facebook’s parent company Meta spent years quietly running a targeted PR campaign among politicians to demonize TikTok, which was the only company in years that showed that Facebook’s supposedly dominant position as the social media king was maybe not quite as dominant as it wanted everyone to believe.
That’s quite an about-face for Trump, who really kicked off the entire concept of banning TikTok. But he’s absolutely correct. Again, TikTok proved that it’s possible to build a competitive social network, and the idea that no new entrant could ever succeed in the space was laughable. Knocking TikTok out of the space, or forcing a questionable divestiture to some other giant tech company, would massively help Meta and its Facebook and Instagram properties.
But, of course, there may be other reasons that Trump has turned around on this. As he also correctly noted, there are a lot of young people who love TikTok, and shutting down the app in the US may piss off a lot of younger voters:
“There are a lot of people on TikTok that love it. There are a lot of young kids on TikTok who will go crazy without it,” Trump said.
Given that, it’s entirely possible that Trump’s decision here is just straight-up political calculus. The Democrats have been way faster to embrace TikTok than Republicans, and maybe Trump and his handlers saw last week’s flood of calls to Congress and recognized that banning TikTok may suppress the youth vote, which is more likely to go to Biden than Trump.
Of course, as many others pointed out, the flip-flop in Trump’s position also came soon after he met with billionaire Trump supporter Jeff Yass, who owns a huge chunk of ByteDance, though Trump denies that had anything to do with it. Yass, however, is spending considerable effort trying to kill the bill.
That said, whether or not Yass has anything to do with this, Trump’s points are actually accurate. Banning or forcing the divestiture of TikTok would be a huge gift to Meta. It could also be a political nightmare for whoever goes through with it.
Still, there is the larger reason that Trump doesn’t mention (perhaps because he argued the other way in the past). It’s almost certainly unconstitutional. It sets a terrible precedent for supposed US freedom — one that will come with significant blowback as other countries demand that successful US companies “divest” from their operations overseas or face similar blocks. I could easily see India or Brazil or other countries demanding a similar sort of remedy and pointing to the US’s actions against TikTok as reason to support it.
Again, the TikTok ban is stupid. If you’re concerned about data exfiltration, pass a comprehensive privacy law. If you’re concerned about manipulation, then better educate the public so that they’re not so easily influenced by an app made to share short videos.
While it seemed like our national policy hysteria over TikTok had waned slightly in 2024, it bubbled up once again last week upon rumors that the White House is supporting a “welcome and important” new bill that would effectively ban TikTok from operating in the United States.
The bipartisan bill (full text) — which moved forward last week in spite of TikTok’s ham-fisted attempt to overload Congress with phone calls from users — sponsored by Reps Mike Gallagher and Raja Krishnamoorthi, prevents all ByteDance-controlled applications from enjoying app store availability or web hosting services in the U.S., unless TikTok “severs ties to entities like ByteDance that are subject to the control of a foreign adversary.” Basically, the bill wants ByteDance to divest TikTok, preferably to an American company.
You’ll recall the Trump administration’s big “solution” for TikTok was basically cronyism: to force the company to sell itself to Walmart and Oracle. That is: companies controlled by Trump’s cronies, with their own track records of bad behavior and privacy violations. You’ll also recall that Facebook has been very busy sowing congressional angst for years about TikTok for purely anti-competitive reasons.
The bill applies to any company owned by ByteDance, whether or not anybody has actually proven any sort of meaningful connection to Chinese intelligence (we’re working off of vibes here, man). There’s also some murky language in the legislation that curiously excludes companies that deal in reviews, a nice treat for whatever company successfully lobbied for that exemption:
EXCLUSION: The term “covered company” does not include an entity that operates a website, desktop application, mobile application, or augmented or immersive technology application whose primary purpose is to allow users to post product reviews, business reviews, or travel information and reviews.
To be very clear: TikTok certainly isn’t without surveillance, national security, and notable privacy concerns. And the authoritarian Chinese government is, without question, an oppressive genocidal shitshow.
But banning TikTok, while refusing to pass a privacy law or regulate data brokers (which traffic in significantly greater volumes of sensitive data at much greater collective scale), winds up mostly being a performative endeavor driven more by anti-competitive intent (and a desire to control the flow and scope of modern news, information and propaganda) than any desire for serious reform.
A lot of the congressional opposition (especially on the GOP side) to TikTok comes largely from the belief that white-owned and controlled American companies are owed, by divine right, access to the massive ad revenues Chinese-owned TikTok enjoys. For Luddites and policy nitwits like Tommy Tuberville, I strongly doubt the thinking extends much further than that.
I also think Republicans very much don’t like the idea of a company that could potentially traffic in propaganda that isn’t theirs. They’ve worked very hard for several years to scare feckless U.S. tech giants away from policing race-baiting political propaganda online (a cornerstone of modern GOP power), and their inability to control TikTok presents an obvious concern for entirely self-serving reasons.
But even lawmakers who sincerely believe that banning TikTok makes meaningful inroads on national security or consumer privacy generally don’t seem to understand the size and scope of the problem we’re dealing with.
You could ban TikTok with a patriotic flourish from the heavens immediately, but if we fail to regulate data brokers, pass a privacy law, or combat corruption, Chinese (or Russian, or Iranian) intelligence can simply turn around and buy that same data (and detailed profiles of American consumers) from an unlimited parade of different data brokers, telecoms, app makers, marketing companies, or services.
And they can do that because the U.S. has proven to be, time and time again, too corrupt to do the right thing or hold giant corporations (domestic or otherwise) accountable for privacy abuses. The result has been the creation of an historically massive, planet-wide, data monetization and surveillance machine that fails — over and over and over again — to meaningfully protect public safety and consumer privacy.
Congress has repeatedly made it very clear that making money is significantly more important than consumer welfare and public safety, as the scandal over sensitive abortion clinic location data makes clear. The U.S. government is also disincentivized to act, because it’s found exploitation of this privacy-optional nightmare to be a super handy way to avoid having to get warrants for domestic surveillance.
But it’s not enough. Congress needs to pass a privacy law for the internet era with teeth that applies to all companies that operate in the U.S., foreign or domestic. It needs to adequately staff and fund the FTC so it can actually address the problem at the scale it’s operating at. And it needs to close the privacy loopholes that let government surveillance efforts exploit the dysfunction.
But Congress won’t do that because Congress is comically, blisteringly corrupt. We’ve defanged our regulators for decades under the pretense that it fostered an innovative, free market renaissance that never happened. When discussing our failure to meaningfully protect U.S. consumer (and industry) privacy, this corruption just isn’t mentioned, as if it’s simply somehow not relevant to the problem at hand.
Countries that care about national security make serious efforts to combat corruption, and don’t support NYC real estate conmen with fourth-grade reading levels for the most powerful office in the land.
Countries that care about consumer privacy pass privacy laws, regulate data brokers, and generally hold corporations (and executives) meaningfully accountable for failing to secure consumer data. T-Mobile has been hacked eight times in five years due to comically lax security and privacy standards, and I’ve yet to see Congress lift so much as an eyebrow.
The myopic hyperventilation about TikTok (and TikTok only!) is mostly a distraction. A distraction from the GOP’s ongoing quest to turn the internet into a propaganda dumpster fire. A distraction from our failures on consumer protection. A distraction from Congressional corruption. A distraction from the fact that we’ve lobotomized our regulators in exchange for Utopian promises never actually delivered.
Banning an app that may not even be popular five years from now — but doing absolutely nothing about the corruptive rot that enabled its privacy abuses — is a hollow performance that simply doesn’t strike at the heart of the actual problem.
Anyone who follows Techdirt knows we’re very interested in the progress of Bluesky, the decentralized social network that embraces our concept of protocols over platforms. Bluesky recently ended its invite-only beta and opened its doors to the public, so it seems like a great time for a check-in, and who better to check in with than Bluesky CEO Jay Graber? Jay joins us on this week’s episode for a discussion about Bluesky’s progress and what the future holds.
I had thought that maybe, just maybe, now that DeSantis had dropped out of the Presidential race, maybe (just maybe?) he’d stop pushing blatantly unconstitutional laws. That’s not to say that DeSantis has any good ideas. But it felt like over the last few years, he really leaned into the nonsense culture wars in an attempt to boost his own profile for a hilariously inept Presidential run. He seemed to think that going to war with a “woke Disney” would somehow appeal to the brainwashed fools who now make up the base of his party. It didn’t work.
And there were a few signs that maybe a slightly more reasonable DeSantis was emerging. A few weeks back he talked about how he supported amending the legislation that was being used to ban books in schools to be more explicit about not banning books (even though he knows full well that’s the intent of the law).
And then, there’s HB 1. This is yet another new anti-social media law, which we wrote about earlier this year. The bill was from Rep. Tyler Sirois, whose website claims he is “dedicated to the principals of limited government, individual responsibility, and constitutional liberty.” (Also, fwiw, he means principles, not principals, but I think we’ve established that Tyler Sirois is not the sharpest knife in the legislative drawer).
And his first bill of this session violated all three of those things. It’s a big government bill that removes individual liberty in a clearly unconstitutional way. So, of course, the Florida legislature passed it.
A few weeks back, DeSantis made some noise suggesting he would veto it, knowing that it was unconstitutional:
“I think that I’m not going to be supporting if I don’t think it’s going to be something that’s going to pass legal muster in the courts,” Ron DeSantis said in Cape Coral.
[…]
“What I’ve said previously, these things have huge legal hurdles. They’ve been held up in courts. I don’t want to go down the road of doing something that is not going to be going to pass muster legally,” DeSantis said.
[…]
“I don’t want to have anything where government is forcing the disclosure of folks. But when you’re talking about verifying ages, if you do that in a way that’s ham-handed, you’re going to lead to that,” DeSantis said.
That sounds almost reasonable? It sounds like someone who has had multiple laws he supported thrown out as unconstitutional and who is no longer running for President, so he doesn’t need to go “full culture war.”
But, come on, this is Ron DeSantis we’re talking about here. Did anyone think he’d actually give up on unconstitutional, censorial nonsense?
On Friday, he did, in fact, veto the bill. But, he immediately said he’d be supporting another bill that he thought was better.
I have vetoed HB 1 because the Legislature is about to produce a different, superior bill. Protecting children from harms associated with social media is important, as is supporting parents’ rights and maintaining the ability of adults to engage in anonymous speech. I anticipate the new bill will recognize these priorities and will be signed into law soon.
So what is that “different, superior bill”? Turns out it’s HB 3. It’s not that different. It’s definitely not superior. It’s just as bad and (importantly) just as unconstitutional.
It bans social media for anyone under the age of 16 (already found to be unconstitutional elsewhere). It requires parental controls/parental consent for accounts under 16 (already found to be unconstitutional elsewhere).
Therefore, I’m going to suggest that DeSantis doesn’t really care much about whether he supports a clearly unconstitutional bill, or about wasting more taxpayer money defending it. He just didn’t like this one unconstitutional bill and appears to prefer a different one. That’s not progress. It’s moving sideways.
Politics is messy, and you get the feeling that a lot of internet companies want nothing to do with “politics” of any kind. Back in 2019 Twitter (when it was still Twitter) decided to ban all political ads, a near-impossible task guaranteed to make a mess of things (such as banning “get out the vote” ads). Soon after, both Google and Facebook (when it was still Facebook) also cut back on political ads.
This was always interesting, because it disproves the idea that companies will do anything for revenue. The constant political fighting made it seem like too much of a hassle to make money this way, so it was easier to just claim that all such ads were blocked.
But there’s a big problem with this approach, as we saw with the ad bans earlier: how the hell do you define what’s “political”? Sure, some “politics” is obvious. Things about politicians running for office? Easy call. But it gets more and more difficult from there.
Is an ad about the environment political? About healthcare? Libraries? In some contexts, yes. In others, maybe not?
We’re debating this again as Meta keeps insisting that it will not promote “political” content on Threads (which is sort of what would happen if Twitter and Instagram had a lovechild, where you might be surprised which genes the offspring got from which parent app). From early on Threads/Instagram boss Adam Mosseri has made it clear that he doesn’t want the site to be big for political content.
That’s gotten more attention in the last few weeks as the company said it’s tuning its algorithm to downplay political content (though you can opt back into it, if you want it).
But that leaves open the same question we discussed above: how the hell do they define “political” content? As you move outside of the ads space, it gets even more complicated. These days, your choice of food products or clothing can be considered political. What books you buy? What music you like? Where you live? All of them are possibly political. People’s very identities are often politicized.
How do you downplay your identity?
Many people have been asking, but Meta’s response to most reporters has been evasive. The company has now given a little more guidance to the Washington Post, but I’m not sure it helps much:
So far, the company has offered only clues about where it will draw those lines. In a blog post announcing the policy, Instagram described political content as “potentially related to things like laws, elections, or social topics.” Laws and elections seem clear-cut enough, as categories go, but “social topics” leaves a lot of room for guesswork.
In a statement to The Tech 202, Meta spokeswoman Claire Lerner offered a bit more detail.
“Social topics can include content that identifies a problem that impacts people and is caused by the action or inaction of others, which can include issues like international relations or crime,” she said. She added that Meta will work continually to refine its definition over time.
Got that? It’s “potentially related to things like laws, elections or social topics” where social topics is “content that identifies a problem that impacts people and is caused by the action or inaction of others.” Though this definition may need to be “refined” over time.
Yeah, so, that doesn’t clear up much of anything. Indeed it’s about as clear as mud.
Now, some of this is the very nature of content moderation. It is a constant game of taking wholly subjective rules about what is and what is not allowed, and having to apply them in a manner that pretends to be objective. It’s not possible to do well at scale.
But, based on this, it sounds like anything around climate change, mental health, poverty, housing, traffic, etc. could all be deemed “political.” Of course, it’s not clear to me whether things like banning books in schools and libraries quite meet this definition. What about talking about the First Amendment? Or the Second Amendment? Or the Fourteenth?
The reality is that the politics here is in the deciding. By announcing that it will downplay political content, Meta is just shifting the issue. Rather than worrying about people fighting over politics on Threads (which will still happen), now they can also fight over Meta’s ever-evolving definition of what content is, and is not, political.
The very act of promising to downplay political content is, inherently, political content itself.
I can understand the desire to cut politics out as a platform, but it’s hard to see how this works in any reasonable way in practice. There are always politics around, and Meta is opening itself up to widespread criticism no matter how it defines politics, because each such decision will now be a political one — not by Meta’s users, but by Meta itself.
As Mike already noted, the weirdest moment of the nearly four-hour, double-case hearing at the Supreme Court on Monday in the NetChoice and CCIA legal challenges of Florida’s and Texas’s social media laws came maybe two thirds into the oral argument, when Justice Alito openly wondered, “If YouTube were a newspaper, how much would it weigh?” I was in the courtroom when he said it, but I have no more insight into what analytical issue he was wrestling with that could have prompted this inquiry to counsel than anyone who listened to the hearing remotely or read it in the transcript.
It should therefore not come as much of a shock to suggest that Justice Alito seemed to have had the least amount of sympathy for, or understanding of, NetChoice’s and CCIA’s arguments. It might however be a surprise that Justice Kavanaugh had the most. Perhaps not, as Mike observed, given that he was the author of the Halleck decision, where he displayed some significant interest in protective First Amendment doctrine. On the other hand, the politics of this case do not follow a traditional red-blue breakdown. If they did, one might expect a conservative justice to side with conservative government officials. But, like we noted with the 303 Creative case, the principle of First Amendment protection transcends politics. A lot of people read that case as conservative justices favoring conservative views because they preferred those views. But the reality is that the constitutional rule the Court announced there benefits everyone, no matter what views they have to express, because it tells the government that it doesn’t get to trump them when it doesn’t like them. Which is basically what these cases are about: governments trying to trump expression when they don’t like the views expressed.
And Justice Kavanaugh in particular appeared most able to see that this was the issue at the heart of the case. The arguments that the states kept making, that they passed these laws in response to “censorship,” fell flat before him, because over and over he kept reminding everyone that “censorship” requires state action. Which undercut every justification Florida and Texas claimed in defense of their laws. Ultimately Florida and Texas were complaining about the expressive decisions of a private actor, and using their laws to take away the ability of this private actor to continue to make them. In other words, it was their state action that was now determining what expression could or could not appear online, which is the very essence of what is complained about when one complains of censorship, and what the First Amendment most definitely forbids.
The big question raised by these cases is whether the Court would recognize that it does offend a First Amendment right of the platforms when governments try to take away their ability to make those choices. Would the Court see that, just as it recognized that newspapers had the right to choose what op-eds to run, which no law could interfere with, so, too, do the platforms have the freedom to choose what user expression to either facilitate or moderate away?
Or at least it should have been the big question. Because it did seem that there were at least five justices who understood the implications of platforms not having that freedom, and who found the states’ arguments referencing the Court’s earlier rulings in Pruneyard and Turner – where the Court had limited an intermediary’s expressive discretion – to be inapplicable analogies. But it was not quite clear that NetChoice and CCIA will be able to walk away with the win that they should, and these laws remaining enjoined, because there seemed to be at least two issues bogging down the Court’s overall thinking.
One was that the procedural posture of the case seemed to displease them. The justices did not seem to like that it was a “facial challenge,” as opposed to an “as applied challenge.” With the latter, the plaintiffs would complain how a law hurt them, whereas with the former the argument is that the law is a fundamentally unconstitutional effort that needs to be stopped before it can hurt anyone. The problem with this sort of challenge though is that a law might be unconstitutional in some ways it would be applied, but fine in other contexts, and the facial challenge paints the whole thing with the same broad “unconstitutional” brush, which might not be a fair assessment of the whole law.
Of course, let’s remember what was going on when these particular laws were passed. Governors DeSantis of Florida and Abbott of Texas were very unhappy that some speakers and speech had been removed from certain large social media sites. These laws both seemed to be very transparent efforts to punish those sites for having made those expressive moderation choices and make sure they could not make them again. In fact, remember that Florida’s law originally had the “theme park” exemption, where, back when DeSantis still liked Disney, he made sure that the law wouldn’t reach any site owned by Disney and impinge on its moderation choices. And then, when he got mad at Disney, he got the law changed to make sure they were subject to it too.
So when presented with these rather baldfaced attempts to interfere with platforms’ First Amendment rights to moderate their sites as they saw fit, NetChoice and CCIA did not hesitate to sue on behalf of the platforms that would be affected. And as part of the lawsuit they asked for the laws to be enjoined, because one should not have to wait to be injured by an unconstitutional law before being able to show the courts that it would cause an unconstitutional injury. Instead that injury should be headed off at the pass, which is what preliminary injunctions are for. Which doesn’t mean that if there is a redeemable part of the law it can’t later be upheld, but it does mean that when an injury is shown to be likely we keep the status quo in place, with no injury risked, while we fully explore the question of just how unconstitutional the law is.
Furthermore, as NetChoice and CCIA pointed out, it wasn’t like the states defended their laws by saying they also had constitutional applications. Both Texas and Florida overtly wanted to do what NetChoice and CCIA feared: usurp platforms’ editorial discretion. Either the First Amendment lets Florida and Texas do this, or it doesn’t, and that’s why both parties centered that question in their litigation strategy, which was very strange for the Court to now second guess. NetChoice further noted that when it came to a law that violated the First Amendment, it would also be a problem if facial challenges to such laws could be stymied by lawmakers simply slipping in a provision that might be sometimes legitimate, because it would mean that lawmakers could get away with causing an unconstitutional injury if that pretextual provision made the law untouchable by the courts until the injury had accrued.
And then there was a second major point of confusion that arose for the justices on Monday, and Justice Gorsuch in particular, who wondered what the effect would be on Section 230 if they ruled in NetChoice and CCIA’s favor. The answer: there is no effect, but the problem is that it betrays a pretty significant misunderstanding of Section 230 to think there would be.
What seems to confuse people is that when it comes to Section 230, platforms basically argue, “It is not our speech at issue,” while in the context of these cases, the platforms are basically arguing that it is their speech at issue. How could both be true? Both can be true because when it comes to online speech there is more than one expressive act at issue. One of the major ways Section 230 operates is to make clear that the expressive message of the user is the user’s alone, and if there’s an issue with that message, responsibility for it lies exclusively with the user who expressed it. Which is why platforms argue, when raising a Section 230 defense, that it is not their speech. What is at issue in the litigation here, by contrast, is the separate message platforms convey when they allow users to use their sites to spread their messages, or otherwise deny certain speakers or speech. Allowing (or denying) speech conveys the platform’s own, separate message about what speech it welcomes. But the speech it welcomes is still not its speech; it remains the user’s.
I wish this point had been emphasized more during the argument, but NetChoice/CCIA did drive home the separate point that Section 230 is obviously not in conflict with platforms having First Amendment rights preserving editorial discretion because part of its protection is designed to protect platforms when they exercise that discretion. The other major way Section 230 operates is to insulate platforms from liability arising from the acts they take to disallow speech. Congress wanted platforms to take steps to remove objectionable content, NetChoice/CCIA reminded the Court, and wrote the statute to make sure they could. So at minimum, even if platforms did not have the Constitutional right to moderate content, Section 230 would still give them the statutory right, and preempt states like Florida and Texas from messing with that protection, as these laws do. But in reality platforms have both rights, the First Amendment right to do this moderation and the statutory right to make sure that no one can try to take issue with how they’ve done so. These rights complement, not conflict, and hopefully the Court will not be distracted by misunderstandings that might suggest otherwise.
For semi-obvious reasons, I’ve been following developments at Bluesky closely, given that my Protocols, not Platforms paper was originally part of the reason Jack Dorsey decided to create Bluesky. I have no official association with the organization, though I did help Twitter review some of the early Bluesky proposals and spoke with a few of the candidates they looked at to lead the company (including Jay Graber, whom Jack eventually tabbed to run it).
While Dorsey has since soured on the approach that Bluesky is taking, preferring the nostr protocol’s approach (and deleting his Bluesky account entirely), I continue to believe that Bluesky is the most interesting and most promising of the various attempts at building a better social media system out there. I explained many of the reasons why a few weeks ago when Bluesky finally dropped its “private beta/invite-only” setup and opened to the public.
And yet, as many people pointed out to me, Bluesky still wasn’t really decentralized in any real way. It remained entirely centralized, as the company worked to build up both the new protocol for it, ATProtocol, and the Bluesky reference app on top of the protocol.
Today, we’re excited to announce that the Bluesky network is federating and opening up in a way that allows you to host your own data. What does this mean?
Your data, such as your posts, likes, and follows, needs to be stored somewhere. With traditional social media, your data is stored by the social media company whose services you’ve signed up for. If you ever want to stop using that company’s services, you can do that—but you would have to leave that social network and lose your existing connections.
It doesn’t have to be this way! An alternative model is how the internet itself works. Anyone can put up a website on the internet. You can choose from one of many companies to host your site (or even host it yourself), and you can always change your mind about this later. If you move to another hosting provider, your visitors won’t even notice. No matter where your site’s data is managed and stored, your visitors can find your site simply by typing the name of the website or by clicking a link.
We think social media should work the same way. When you register on Bluesky, by default we’ll suggest that Bluesky will store your data. But if you’d like to let another company store it, or even store it yourself, you can do that. You’ll also be able to change your mind at any point, moving your data to another provider without losing any of your existing posts, likes, or follows. From your followers’ perspective, your profile is always available at your handle—no matter where your information is actually stored, or how many times it has been moved.
It’s currently limited to smaller setups, for people who basically want to self-host their own Personal Data Servers (PDSes). While things get settled, there are rate limits and guardrails for these PDSes (so, things like only 10 user accounts for the time being). If you want to understand this even more (even if you’re not technical), Bluesky’s more “technical” explanation is still highly readable.
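To make the self-hosting idea concrete, here is a minimal sketch of how one might check what a PDS says about itself, using the AT Protocol’s standard `com.atproto.server.describeServer` XRPC endpoint. The hostname is hypothetical, and the summarized fields are only the ones I’d expect a prospective user to care about; treat this as an illustration, not Bluesky’s official tooling.

```python
# Sketch: querying a (hypothetical) self-hosted Bluesky PDS for its
# self-description via the standard XRPC describeServer endpoint.
import json
from urllib.request import urlopen


def describe_server_url(pds_host: str) -> str:
    """Build the XRPC URL for a PDS's self-description endpoint."""
    return f"https://{pds_host}/xrpc/com.atproto.server.describeServer"


def summarize_description(desc: dict) -> dict:
    """Pull out the fields a prospective user is likely to care about."""
    return {
        "handle_domains": desc.get("availableUserDomains", []),
        "invite_only": desc.get("inviteCodeRequired", False),
    }


def check_pds(pds_host: str) -> dict:
    """Fetch and summarize a PDS's description (makes a network call)."""
    with urlopen(describe_server_url(pds_host)) as resp:
        return summarize_description(json.load(resp))
```

For example, `check_pds("pds.example.com")` (a made-up host) would tell you which handle domains that server offers and whether it currently requires an invite code, the kind of guardrail mentioned above.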
I know that some people hear “federation” and immediately think of Mastodon. However, Bluesky’s entire setup is very different and designed to be much more user friendly in multiple ways (once again, this is one of the reasons that Bluesky chose to create the ATProtocol, rather than going with ActivityPub).
ActivityPub federation has both pros and cons. When you sign up for an instance, you’re basically wholly reliant on whoever runs that instance. Rather than being part of a big centralized network, like Facebook, you’re part of a small centralized server that interconnects with lots of others. But whoever runs your server has pretty much ultimate control. That can work out great if they’re committed to it. But it can also unleash some problems.
Mastodon and related ActivityPub systems have put a lot of effort into minimizing some of the downsides of this. For example, threats of “defederation” are a fascinating incentivizing structure to keep ActivityPub instance admins from going totally rogue, while still allowing for there to be experimentation and differences among servers.
But in the end, you’ve still gone from a big centralized system to a little one, where someone else is in control.
With the Bluesky approach, there are many more layers involved, and federation is less about putting your entire social experience in the hands of one instance admin. Rather, it’s just about where your data/account information gets stored. As Bluesky explains:
A summary of some ways Bluesky differs from Mastodon:
A focus on the global conversation: On Mastodon, your “instance”, or server, determines your community, so your experience depends on which server you join. An instance can send and receive posts from other instances, but it doesn’t try to offer a global view of the network. Your Mastodon server is part of your username, and becomes part of your identity. On Bluesky, your experience is based on what feeds and accounts you follow, and you can always participate in the global conversation (e.g. breaking news, viral posts, and algorithmic feeds). You can use your own domain name as your username, and continue participating from anywhere your account is hosted.
Composable moderation: Moderation on Bluesky is not tied to your server, like it is on Mastodon. Defederation, a way of addressing moderation issues in Mastodon by disconnecting servers, is not as relevant on Bluesky because there are other layers to the system. Server operators can set rules for what content they will host, but tools like blocklists and moderation services are what help communities self-organize around moderation preferences. We’ve already integrated block and mute lists, and the tooling for independent moderation services is coming soon.
Composable feeds: We designed your timeline on Bluesky so that it’s not tied to your server. Anyone can build a feed, and there are currently over 40,000 algorithmic feeds to choose from. Your Mastodon timeline is only made up of posts from accounts you follow, and does not pull together posts from the whole network like Bluesky’s custom feeds.
Account portability: We designed federated hosting on Bluesky so that you can move servers easily. Moving hosting services should be like changing your cell phone provider — you should be able to keep your identity and data. Changing servers on Bluesky doesn’t disrupt your username, friends, or posts.
This is important, though there are still some details to be worked out, especially around the third-party moderation efforts. But, on the whole, having the ability to still interact with the wider Bluesky community while keeping your personal data server somewhere else that you control is a big step forward in realizing how a more decentralized social media could (and I’d argue, should) work. It brings us back towards the world of an open web, and away from locked-in silos.
Now, again, there are still some parts of the system that people are worried about, in particular how they could be open to centralized capture. The thing is, there is always going to be some risk of this on basically any system. To make things work properly, certain parts of the stack tend to need to work together seamlessly, or else a very small number of giant players ends up dominating the otherwise “open” system anyway.
This is a concern worth watching. However, it’s also been one that the Bluesky team has repeatedly and readily acknowledged, along with their ideas and thinking on how to guarantee that future Bluesky (or anyone else) is effectively incentivized against enshittification. That’s not to say it will all work out, but so far I’ve seen no reason not to believe that the team has been building with this in mind. Its last few major announcements have all shown continued movement in this direction.
At the end of this tunnel, there is a very powerful vision, one that is partially (though not entirely) laid out in the Protocols, Not Platforms paper. In this vision, people can either self-host their own data servers or find a trusted third party to do so, with the ability to move if the current host turns out to be a problem. It’s one where there are many different tools to allow people to craft their own experience (through composable moderation and algorithmic choice within the system) and the moderation layer is separate and extracted from the data server, the app, and the hosting company.
There will be services that combine them all (like Bluesky today), but we’re also increasingly moving towards a world in which people will be able to adjust things to their own liking. And that can be powerful in its own way. No, most users won’t want to get down into the weeds and tweak things themselves. But that’s where there’s an opportunity for organizations to step up and provide a comprehensive solution themselves, whether it’s Bluesky itself, or others.
But, just the fact that users can modify basically everything, and that third parties have free ability to build apps and services (and custom feeds) on top of this core, has an added advantage, even for those who don’t want to tweak the details and fiddle the knobs themselves. The very fact that it’s possible (or that it’s possible to jump to other providers) creates a strong anti-enshittification incentive structure.
One of the big reasons that enshittification occurs is because users are locked-in. There’s no easy way to leave, without a massive hassle. And part of that hassle is losing access to friends and family. The exciting part of Bluesky with federation is that there is no lock-in, which means there’s much less temptation for enshittification and rent extraction from users with nowhere else to go.
This move towards federation is a small move towards that larger vision, but it’s an important one.