We haven’t talked a great deal about SXSW in some time, but they are back in the news and not for good reasons! The conference and festival kicked off in March as planned, but less planned were the protests organized against the conference over its affiliations with defense contractors and the United States military, and its ongoing support for Israel’s heavy-handed response to the attacks it suffered from Hamas last year. Performers backed out, and a handful of protest groups organized alternative concerts and demonstrations out in front of SXSW.
One of those groups was the Austin for Palestine Coalition (APC), which organized protests and put out communications along these lines. Some of those communications included parodied versions of the SXSW branding to make it clear that the group believes the organization has blood on its hands.
Now, for the sin of publishing this parody content, which you will already recognize as protected speech under the First Amendment, SXSW made several trademark and copyright claims for takedowns of social media and internet content. To be clear, those claims are utter nonsense.
And if you want to understand the specifics as to why, the EFF has gotten involved in supporting APC and has a great explainer in the link.
On the trademark question first:
The law is clear on this point. The First Amendment protects your right to make a political statement using trademark parodies, whether or not the trademark owner likes it. That’s why trademark law applies a different standard (the “Rogers test”) to infringement claims involving expressive works. The Rogers test is a crucial defense against takedowns like these, and it clearly applies here. Even without Rogers’ extra protections, SXSW’s trademark claim would be bogus: Trademark law is about preventing consumer confusion, and no reasonable consumer would see Austin for Palestine’s posts and infer they were created or endorsed by SXSW.
Completely correct. APC is protected when it comes to this content via several vectors. Parody is protected speech. Political messaging is protected speech. And, finally, trademark law is difficult to invoke when there is no serious risk of public confusion. If SXSW really wants to argue that someone will mistake messaging critical of SXSW for messaging affiliated with it, I’m happy to sit back and laugh at them.
As for the copyright claim, it’s even worse.
SXSW’s copyright claims are just as groundless. Basic symbols like their arrow logo are not copyrightable. Moreover, even if SXSW meant to challenge Austin for Palestine’s mimicking of their promotional material—and it’s questionable whether that is copyrightable as well—the posts are a clear example of non-infringing fair use. In a fair use analysis, courts conduct a four-part analysis, and each of those four factors here either favors Austin for Palestine or is at worst neutral. Most importantly, it’s clear that the critical message conveyed by Austin for Palestine’s use is entirely different from the original purpose of these marketing materials, and the only injury to SXSW is reputational—which is not a cognizable copyright injury.
As far as the EFF has heard, SXSW hasn’t responded to its pushback. And, of course, guess what all of this bullying behavior designed to bury the protests has actually done? Well, in true Streisand Effect fashion, the very information this bullying was supposed to tamp down is instead being amplified as more and more outlets, including us, discuss the story in its entirety.
A gentleman’s agreement with the UK following years of colonialism has given rise to another form of oppression. China took over Hong Kong in 1997, promising to stay out of the day-to-day business of governing Hong Kong for 50 years. Not even halfway through this promised period of relative autonomy, the Chinese government began imposing its will.
Hong Kong residents were understandably unhappy with this development. Protests followed. Every wave of protest was followed swiftly by even more impositions by the Chinese government. In the last couple of years, it’s become apparent the Chinese government is no longer willing to tolerate any form of dissent in Hong Kong, despite its earlier agreement to take a hands-off position on governing Hong Kong until the middle of this century.
Now, it’s just China, but even richer. The democratic government has been gutted. Nearly every position of power has been staffed by someone fully supported by (and fully supportive of) the Chinese government. It’s an actual police state now, thanks to the appointment of former Secretary of Security John Lee to the position of Chief Executive. Lee was best known for heading up the crackdown on pro-democracy protests in Hong Kong. Taking his place as second-in-command is Hong Kong’s police commissioner, who was similarly involved in the crackdowns.
Since then, pretty much the entire legislature has been purged of pro-democracy lawmakers. The democratic election system has been replaced with a voting system that only allows “patriots” to vote.
And to better serve the ongoing issue of ridding the country of dissent, a series of steadily escalating “national security” laws have been enacted for the sole purpose of handing out life sentences to critics, protesters, dissidents, and opposition political leaders.
But China’s government is never satisfied. Why settle for outrageously harsh sentences when those sentences can always be harsher? As Derren Chan reports for Jurist, another national security law is making its way through the legislature, where it is expected to face little debate or opposition.
The Hong Kong government released the new national security bill on Friday and sent it to the Legislative Council (LegCo) for deliberation.
The bill consists of nine parts, including criminalizing several national security offenses not covered in the 2020 National Security Law but listed in Article 23 of the Hong Kong Basic Law. The bill criminalizes new offenses, including treason, insurrection and incitement of Chinese armed force members to mutiny that could result in life imprisonment upon conviction. The bill also allows the court to impose harsher bail conditions on suspects.
I realize “sent it for deliberation” is just a turn of phrase commonly used when discussing pending legislation, but in this case, it’s meaningless. There’s no deliberation awaiting this bill, other than possibly how it could be expanded to punish more dissent and deter future opposition from citizens. This was the state of affairs in the Hong Kong legislature at the end of 2021:
The Hong Kong government has purged the last remaining opposition voice from the city’s legislature amid a deepening crackdown on dissent.
“We have come to the determination that Cheng Chung-tai hasn’t fulfilled the legal requirement of upholding the Basic Law and bearing allegiance to the Hong Kong SAR,” Chief Secretary John Lee said on Thursday, referring to the constitution of the Chinese special administrative region.
A committee led by Lee has been reviewing applicants for the Election Committee, which itself will screen legislative candidates as well as choose 40 of its members for the city’s expanded 90-seat legislature.
What’s even sadder about this additional, equally transparent power grab is that the pro-China legislators and city leaders in Hong Kong still feel obliged to defend the bill as though it were legitimate, or as though it would somehow face serious opposition if they didn’t say things about “common law” or “protecting” Hong Kong. And so they’re out there making arguments that don’t matter, in defense of a bill that will become law because China wants it to become law, and there’s no one left in Hong Kong with the power to oppose it.
The Secretary for Justice Paul Lam highlighted that the bill is a piece of local legislation written in common law traditions, which requires reasonable and practical clarity. Lam contended rights and freedoms are not absolute under international treaties, and necessary restrictions are justifiable because of national security.
I assume the yes-man work being done here is just for show, allowing Chinese leaders to easily see who’s advocating for must-pass will-pass legislation. I’m guessing Lam’s angling for whatever the Chinese equivalent of a dacha on the lake is.
Security chief Chris Tang told lawmakers there was a “genuine and urgent need” for the new law.
“Hong Kong had faced serious threats to national security, especially the color revolution and black-clad violence in 2019, which was an unbearably painful experience,” he said, referring to the democracy protests.
If there’s anything that might prompt some deliberation over this bill, it won’t be the residents of Hong Kong or their complete lack of legislative representation. It will be the rest of the world. And I don’t mean the part of the world willing to engage in their own demonstrations in support of democracy in Hong Kong.
No, it will be the money that does the most talking. Foreign investors and companies may choose to spend elsewhere, rather than continue to show their proxy support for China’s oppression of Hong Kong.
As with its predecessor, the proposed new security law states that offenses committed outside Hong Kong fall under its jurisdiction.
And in a section closely watched by Hong Kong’s foreign business community, the draft provides a multipronged definition of “state secrets” that covers not only technology but “major policy decisions” and the city’s “economic and social development.”
It also criminalizes the unlawful acquisition, possession and disclosure of state secrets, though it offers a “public interest” defense under specific conditions.
Authorities said the public submissions received during the consultative process revealed support from a majority.
But concerns have been raised by NGO workers, foreign businesses and diplomats, with critics saying the existing security law has already eviscerated Hong Kong’s political opposition and civil society.
Foreign businesses may have the most power here. But it’s foolhardy to expect a unified front demanding a rollback of China’s stranglehold on Hong Kong. China is synonymous with commerce and has been for decades. And so it’s inevitable the Chinese government’s pattern and practice of disappearing dissent will continue to cleanse Hong Kong of its pro-democracy “problem.”
In this week’s round-up of news about online speech, content moderation and internet regulation, Mike and Ben cover:
The US TikTok ban and what it could mean for the future of the internet (Techdirt)
The EU prepares to regulate Chinese marketplaces (Reuters)
Telegram’s CEO gives a rare interview – and what that says about online speech (Financial Times)
Generative AI is already messing with elections (Al Jazeera)
Bluesky open sources its moderation tooling software (Bluesky)
Trust & Safety software market is set to double by 2028, according to a new report (Duco)
The episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our launch sponsor Modulate, the prosocial voice technology company making online spaces safer and more inclusive. In our Bonus Chat at the end of the episode, Modulate CEO Mike Pappas joins us to talk about how safety lessons from the gaming world can be applied to the broader T&S industry and how advances in AI are helping make voice moderation more accurate.
On Monday, the Supreme Court will be hearing the Murthy v. Missouri case, which we’ve been following for ages. As we’ve pointed out repeatedly, the record on the case is full of blatant falsehoods. If the US government was actually doing everything that the lawsuit (and some judges!) claims it did, I would be in agreement that it’s a clear First Amendment violation. The problem is that the plaintiffs misrepresented many, many things, and then the district court judge, Terry Doughty, made it even worse.
Among other things, he invented quotes by inserting words into a quote that weren’t actually said, directly changing its meaning. He also falsely represented that an email about a technical problem was about content moderation.
And while the 5th Circuit didn’t buy everything the district court judge said, and greatly limited the original ruling, it introduced more confusion in its opinion (actually, opinions, because it reissued the opinion and added CISA to the injunction with no explanation, even though it originally ruled that CISA did nothing wrong).
While I’ve pointed out some of the errors in the record before and was happy to see the reply brief from the Justice Department focus heavily on those false claims in the record, there were so many false and misleading statements that it’s been bothering me throughout this case. I wanted to find the time to go through and highlight them all, but it would have been a massive project.
But, thankfully, Dean Jackson, over at Tech Policy Press did a pretty thorough job of it instead. He notes that the statements claiming that the US government coerced social media don’t stand up to any amount of scrutiny. Indeed, scratch any claim that people throw out in support, and you’ll find that the plaintiffs and the courts totally misrepresented them, often to suggest the opposite of reality:
… the Fifth Circuit’s conclusions regarding the Federal government are erroneous. They rest on cherry-picked evidence, flawed analysis, and misunderstandings about the internal workings of social media companies.
In weighing the difference between persuasion and coercion, the Fifth Circuit presents snippets of email exchanges between government officials and social media platforms. The arrangement of these snippets tells a story of furious government officials and browbeaten platform staff. Because none of them are cited to source documents, it is difficult for a casual reader to put them in context to see if that story is true. But the quotes can be traced back to longer, publicly released email exchanges that show a bigger picture.
And that “bigger picture” shows that the claims of coercion are simply not supported by the record. There’s too much in the article to go through all of the examples, but time and time again you see how the plaintiffs take things totally out of context, and when put back into context, the exchanges don’t really support the claims of threats and coercion.
Take the issue of the FBI’s involvement with the platforms.
The Fifth Circuit ruling contains only three short paragraphs dedicated to the FBI. It alleges that “Per their operations, the FBI monitored platforms’ moderation policies, and asked for detailed assessments during their regular meetings. The platforms apparently changed their moderation policies in response to the FBI’s debriefs,” particularly around “hack and dump” operations. The FBI also “targeted domestically sourced ‘disinformation,’ like posts that stated incorrect poll hours or mail-in voting procedures.” The ruling contains no quotations from communications between the FBI and social media companies. Instead, it appears to be based on the deposition of FBI agent Elvis Chan, summarized in the memo accompanying the initial July 4 injunction.
Much of the discussion of Chan’s deposition revolves around “hack and dump” (or “hack-and-leak”) operations, especially the October 2020 release of materials allegedly taken from a laptop belonging to President Biden’s son, Hunter. According to the memo,
Social-media platforms updated their policies in 2020 to provide that posting “hacked materials” would violate their policies. According to Chan, the impetus for these changes was the repeated concern about a 2016-style “hack-and-leak” operation. Although Chan denies that the FBI urged the social-media platforms to change their policies on hacked material, Chan did admit that the FBI repeatedly asked the social-media companies whether they had changed their policies with regard to hacked materials because the FBI wanted to know what the companies would do if they received such materials.
Because the Hunter Biden laptop has become a source of scandal and conspiracy theories, it is important to note here that these policy changes pre-date the initial public reporting on its existence and the contents of its hard drive. The FBI and social media companies had good reason to worry about foreign state actors using hacked materials to influence the 2020 election: they had, after all, already done so in the 2016 election and again in the 2017 French elections.
When the New York Post reported on the laptop’s contents weeks before the 2020 Presidential election, Facebook and Twitter, believing mistakenly that the quoted materials might have been the result of a foreign hack-and-leak operation, took steps to limit the story’s reach. In testimony before the House Oversight Committee, Yoel Roth, Twitter’s former head of site integrity, called the decision a mistake but denied government involvement in it.
Roth further described his interactions with federal officials in an essay for the Knight First Amendment Institute. “Over the last few months,” he writes, “I’ve had the somewhat surreal experience of learning that my decisions are not my own.” He worries that “the factual foundation” of the Fifth Circuit’s ruling is “flawed” and later asserts that “[t]he FBI fastidiously… avoid[ed] both assertions that they’ve found platform policy violations, and requests that Twitter do anything other than assess the reported content under the platform’s applicable policies.”
In other words, law enforcement told platforms to do what they wanted with the information provided.
The article also highlights how the courts seem wholly ignorant of the nature of trust & safety, and how diminishing the reach of some content is different than banning that content.
Another misunderstanding is that at several points, the Fifth Circuit conflates the removal of content with the demotion of content. When the Fifth Circuit writes that…
Even when the platforms did not expressly adopt changes… they removed flagged content that did not run afoul of their policies. For example, one email from Facebook stated that although a group of posts did not “violate our community standards,” it “should have demoted them before they went viral.” In another instance, Facebook recognized that a popular video did not qualify for removal under its policies but promised that it was being “labeled” and “demoted” anyway after the officials flagged it.
…it confuses two different types of content moderation. As shown in the email exchange with Flaherty above, the platform policy is to reduce the distribution of “borderline” content that comes close to violating a policy but does not qualify for removal. That policy pre-dates the Biden administration; similar policies were in place, for example, in the run-up to the 2020 election and the January 6th insurrection. It is troubling that in making allegations of coercion, the Fifth Circuit cannot distinguish between exceptions to policy and application of existing policy.
On Monday, the Justices will hear oral arguments in this case. The most worrying part of it is that they’re likely to make a very meaningful ruling based on a near total misunderstanding of (1) what actually happened and (2) how internet content moderation actually works. That seems like a problem.
What a day. Texas is now the most populated U.S. state to be geo-blocked by Aylo, the parent company of the popular adult tube site Pornhub.com. With a population of just over 29.5 million people, residents of the Lone Star State must now use a VPN to view porn on Aylo’s network of free and premium websites.
The geo-block comes after the U.S. Fifth Circuit Court of Appeals ruled that a Texas age verification law targeting pornography was constitutional. The federal case was brought by Aylo, the parent companies of other adult websites, and the Free Speech Coalition.
Despite the Fifth Circuit completely overlooking decades of Supreme Court precedent indicating that any sort of age verification measure infringes on First Amendment rights, the conservative judges, 2-1, powered through. As Mike Masnick noted in his column on the decision, Judge Patrick Higginbotham – in dissent from the two other judges of the panel – rightfully pointed out that First Amendment protections aren’t thrown out just because Texas tries to be the nanny state. Senior U.S. District Judge David Alan Ezra initially ruled the Texas age verification law, House Bill 1181, unconstitutional and issued a preliminary injunction to block the law. Texas won on appeal. Litigation is still ongoing. Ken Paxton, the attorney general of Texas, also filed a lawsuit against Pornhub in Travis County courts alleging violations of House Bill 1181, and seeks millions in damages.
A few states away, Indiana just adopted an age verification law. Senate Bill 17, proposed by state Sen. Mike Bohacek of Michiana Shores, is set to enter into force on July 1, 2024. I wrote for Techdirt about Senate Bill 17 because an early version of the bill carried criminal penalties for violators of the age verification requirement. Luckily, the bill was amended to drop those penalties. Still, SB 17 is a very slippery slope for Hoosiers and the United States in general. The Indiana chapter of the American Civil Liberties Union called the bill an unconstitutional violation of adults’ rights.
The legal environment pertaining to age verification and free speech online is now more fraught than ever. Developments like these reveal an ongoing civil liberties clusterfuck instigated by the anti-pornography lobby in the name of “protecting” minors. In much of my previous work for Techdirt and for other publishers, I have highlighted how efforts to restrict or even ban legal pornography in the U.S. are steeped in the far-right Christian nationalism that has gripped the Republican Party. Don’t forget about Project 2025. This group openly wants to ban porn and imprison those whom they deem “pornographers.”
To hear some people talk about it, anyone having anything to do with adult content should be imprisoned. This is why, as a journalist and a commentator, I keep writing about anti-porn clusterfucks like Aylo bowing to Texas or any other state controlled by politicians declaring “victory” against porn.
The First Amendment still exists. Case law still exists. Hopefully, the likes of Texas and Indiana – really all of the states under the yoke of authoritarian anti-porn, pro-censorship laws – are finally reminded that this type of paternalistic meddling is un-American.
Michael McGrady covers the legal and tech side of the online porn business, among other topics.
The Complete Python Programmer Bundle has nine courses to help you learn more about programming. This bundle starts with fundamental Python functionality such as arithmetic, conditional statements, and working with basic data structures. It then expands upon your working knowledge of data structures to work with full-blown datasets in the Pandas package. You’ll learn all about working with Python through hands-on tasks. The bundle is on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Last month, we wrote about Nevada’s Attorney General filing an absolutely preposterous, but extremely dangerous, legal filing, demanding that a court bar Meta from offering end-to-end encryption for its messaging apps. Almost everything about this request was crazy. First, Nevada sued Meta, with vague, unsubstantiated claims of “harm to children,” and then it filed a demand for a temporary restraining order, blocking Meta from using encryption, giving the company basically a day to respond.
This all seemed weird, given that encryption has been available in tons of places for many, many years, including on some of Meta’s messaging offerings going back years. Why was it suddenly so necessary to stop them immediately? Nevada also claimed that Meta offering encryption was a “deceptive trade practice” because it says it’s offering encryption to keep people safer when, according to Nevada, it’s inherently harmful.
Thankfully, the court did not issue the immediate TRO, but asked the parties to brief the issue and appear for a hearing next Wednesday. Earlier this week, a bunch of organizations, including the ACLU, EFF, Fight for the Future, Internet Society, Signal, and Mozilla all filed an amicus brief that I’d describe as 43 pages of “what the fuck is this, I don’t even…”
The State’s motion for a preliminary injunction attempts to substitute the judgment of the Attorney General’s office for a national policy developed over decades of discussion with multiple stakeholders. The State paints a picture of E2EE as solely a danger to children. But the reason that E2EE has been widely adopted is that it prevents crime: crime affecting both children and adults. The State has many avenues for pursuing its child-safety investigations without this extraordinary order. It is especially ill-advised to upend decades-old, encryption-specific policies based on a reinterpretation of a broad, general purpose law such as the Nevada Unfair and Deceptive Trade Practices Act, N.R.S. 598.0903-598.0947.
While the Attorney General may disagree, the assertion that E2EE is good for children is a mainstream point of view and not properly classified as “deceptive” (Mot. at 16-17). Millions of children have long used E2EE platforms such as WhatsApp and iMessage. It can hardly be “unconscionable” for Meta to upgrade its product to meet the security and privacy standards that other exceedingly popular products (ones the Attorney General has not challenged) have offered to the public for years.
The motion for a preliminary injunction that would stop Meta from providing secure communications to its users is baseless and dangerous. Meta’s provision of end-to-end encryption by default to all Messenger users is not deceptive or unconscionable, meaning the State is unlikely to succeed on the merits. To the contrary, because E2EE protects consumers, its continuation will not cause irreparable harm and in fact benefits the public interest (a preliminary injunction factor the State does not discuss). Clark Cnty. Sch. Dist. v. Buchanan, 112 Nev. 1146, 1150, 924 P.2d 716, 719 (1996). The Court should reject the State’s request.
The overall brief is fantastic. It points out, among other things, that historically most conversations were ephemeral and not recorded, and law enforcement didn’t think that people talking to each other was an inherent threat to children.
Society has long recognized that people thrive when we have the ability to engage in private, unmonitored conversations. Sharing confidences enables people to form friendships and intimate relationships, obtain information about sensitive matters, and construct different identities depending on the audience. We know this from our own lives, whether engaging in pillow talk, meeting a friend for a walk, or forming an invitation-only club. Important, human things happen when we can be confident that no one is listening in.
Before the Internet, these conversations were not recorded or preserved. Our words vanished into the air as they were spoken. Unless someone was eavesdropping, conversations were private, secret, and unrecoverable. Police could not access these interactions. Mail carriers did not make copies of letters and senders and recipients were free to write in code or foreign languages and to destroy the documents after they had been received.
In any other era, a claim that government may obligate us to record and preserve our conversations, just in case investigators wanted to review them later, would be laughably ridiculous. It would simply have been beyond the pale to suggest that people could be required to record their conversations in a language that law enforcement could readily understand and access. Basic conversational privacy was assumed, and rightly so.
The brief gives many examples of why end-to-end encryption makes everyone, including children, more secure. It highlights how many government agencies have endorsed encryption.
But also, importantly, it highlights just how stupid this demand is, given that Nevada law enforcement has plenty of ways to investigate criminal actions, even when there is encryption in messaging. After all, Meta has access to metadata, and any victims can directly provide the content to law enforcement as well.
Riana Pfefferkorn (who also signed onto the brief as an amicus) also wrote a column about this case. She notes that Nevada’s request would not only make children less safe, but it’s extremely unlikely that this destruction of encryption would remain local to Nevada.
If the court grants the Nevada AG’s latter-day request after this month’s hearing, the resulting injunction won’t just affect Nevada’s children. Anyone (adult or child) who talks to them, or is mistakenly identified by Meta as being one of them, will no longer get default E2EE on Messenger either. Plus, a successful request in Nevada might inspire copycat demands elsewhere. That multi-state social media addiction lawsuit against Meta that I mentioned above? It has 42 state AGs as plaintiffs. A copycat injunction for Messenger would mean no more default E2EE for most of the country’s children (and a significant number of adults, as said).
Hopefully those other state AGs would pick a wiser course than this one rogue state AG has chosen. Consumer protection regulators have spent years telling Meta to do better at protecting user privacy. Making Messenger E2EE by default is the best thing Meta has done in that regard in a long time. The Nevada AG’s own complaint against Meta says that “[i]n the digital privacy ecosystem, this is a move that might be lauded.” Yet rather than laud it, the Nevada AG is trying to undo it. He would rather force Meta to give the state’s youngest users worse digital privacy and security than everyone else. That isn’t promoting child safety online; it’s undermining it. Even more astonishing, he’s trying to rebrand default E2EE as an unconscionable and deceptive trade practice. Strong encryption isn’t a violation of consumer protection; it’s a vindication of it.
The Nevada AG’s request is so wildly contrary to well-established best practices and long-standing interpretations of consumer protection law that it would almost be funny if it weren’t so dangerous. We can only hope the judge in Nevada laughs him out of court. The children of Nevada deserve better than this.
As you probably noticed, the House just passed the controversial ban on TikTok, with 352 Representatives in favor and 65 opposed. The bill is now likely to be slow-walked to the Senate, where its chance of passing is murky, but possible. Biden (whose campaign has been using the purportedly “dangerous national security threat” to reach voters) has stated he’ll sign the bill should it survive the trip.
The ban (technically a forced divestment, followed by a ban after ByteDance inevitably refuses to sell) passed through the House with more than a little help from Democrats.
Not talked much about in press coverage is the fact that the majority of constituents don’t actually support a ban (you know, the whole representative democracy thing). Support for a ban has been dropping for months, even among Republicans, and especially among the younger voters Democrats have already been struggling to connect with in the wake of the bloody shitshow in Gaza.
As the underlying Pew data makes clear, a lot of Americans aren’t sure what to think about the hysteria surrounding TikTok. And they’re not sure what to think, in part, because the collapsing U.S. tech press has done a largely abysmal job covering the story, either by parroting bad faith politician claims about the proposal and app, or omitting key important context.
The press has also been generally terrible at explaining to the public that the ban doesn’t actually do what it claims to do.
Banning TikTok, but refusing to pass a useful privacy law or regulate the data broker industry is entirely decorative. The data broker industry routinely collects all manner of sensitive U.S. consumer location, demographic, and behavior data from a massive array of apps, telecom networks, services, vehicles, smart doorbells and devices (many of them *gasp* built in China), then sells access to detailed data profiles to any nitwit with two nickels to rub together, including Chinese, Russian, and Iranian intelligence.
Often without securing or encrypting the data. And routinely under the false pretense that this is all ok because the underlying data has been “anonymized” (a completely meaningless term). The harm of this regulation-optional surveillance free-for-all has been obvious for decades, but has been made even more obvious post-Roe. Congress has chosen, time and time again, to ignore all of this.
Banning TikTok, but doing absolutely nothing about the broader regulatory capture and corruption that fostered TikTok’s (and every other company’s) disdain for privacy or consumer rights, isn’t actually fixing the problem. In fact, as Mike has noted, the ban creates entirely new problems, from potential constitutional free speech violations, to its harmful impact on online academic research.
I’ve mentioned more than a few times that I think the ongoing quest to ban TikTok is mostly a flimsy attempt to transfer TikTok’s fat revenues to Microsoft, Google, Twitter, Oracle, or Facebook under the pretense of national security and privacy, two things our comically corrupt, do-nothing Congress has repeatedly demonstrated in vivid detail they don’t have any genuine interest in.
TikTok creators seem to understand this better than the gerontocracy or the U.S. tech press:
But if Congress were really serious about privacy, they’d pass a privacy law or regulate data brokers.
If Congress were serious about national security, they’d meaningfully fight corruption, and certainly wouldn’t support an authoritarian NYC real estate con man with multiple indictments and a fourth-grade reading level for fucking President.
So when Congress pops up to claim it’s taking aim at a single popular app because it’s suddenly super concerned about consumer privacy, propaganda, and national security, skeptics are right to steeply arch an eyebrow. You realize we can see your voting histories and policy priorities, right?
Xenophobia, Protectionism and Information Warfare
The GOP motivation for a TikTok ban has long been obvious: they believe TikTok’s growing ad revenues technically belong, by divine right, to white-owned U.S. companies. But the GOP also sees TikTok as an existential threat to their ever-evolving online propaganda efforts, which have become a strategic cornerstone of an increasingly extremist, authoritarian party whose policies are broadly unpopular.
The GOP is fine with rampant privacy abuses and propaganda — provided they’re the ones violating privacy or slinging political propaganda. You’ll recall Trump’s big original fix for the “TikTok problem” (before a right wing investor in TikTok recently changed his mind, for now) was a cronyistic transfer of ownership of TikTok to his Republican friends at Walmart and Oracle.
Former Trump Treasury Secretary Steve Mnuchin and his Saudi-funded Liberty Strategic Capital is already hard at work putting investors together to buy the app. If the GOP (or a proxy) manages to buy TikTok, they’ll engage in every last abuse they’ve accused the Chinese government of. TikTok will be converted, like Twitter, into a right wing surveillance and propaganda echoplex, where race-baiting authoritarian propaganda is not only unmoderated, but encouraged.
All under the pretense of “protecting free speech,” “antitrust reform,” or whatever latest flimsy pretense authoritarians are currently using to convince a gullible and lazy U.S. press that they’re operating in good faith.
Why Democrats would support any of this remains an open question. The ban would likely aid GOP propaganda efforts, piss off young voters, and advertise the party (which had actually been faster to embrace TikTok than the GOP) as woefully out of touch. All while not actually protecting consumer privacy or national security in any meaningful way. And creating entirely new problems.
National security, consumer privacy, or good faith worries about propaganda don’t enter into it.
Some Democratic Reps, like Ro Khanna, Alexandria Ocasio-Cortez and Sara Jacobs seem to understand the trap, keeping the focus on a need for a federal privacy law that reins in the privacy and surveillance abuses of all companies that do business in the U.S., foreign or domestic. Some senators, like Ron Wyden, have worked hard to ensure equal attention is paid toward rampant data broker abuses.
But 155 House Democrats voted for the ban, either because they’re corrupt, or because they have absolutely no idea how any of this actually works. Pissing off your constituents by ruining an app used by 150+ million (mostly young) Americans during an election season is certainly a choice, especially given negligible constituent support and growing evidence that the ban likely creates more problems than it professes to solve.