How Forcing TikTok To Completely Separate Its US Operations Could Actually Undermine National Security

from the when-government-demands-backfire dept

Back in August 2020, the Trump White House issued an executive order purporting to ban TikTok, citing national security concerns. The ban ultimately went nowhere — but not before TikTok and Oracle cobbled together “Project Texas” as an attempt to appease regulators’ privacy worries and keep TikTok available in the United States.

The basic gist of Project Texas, Lawfare reported earlier this year, is that TikTok will stand up a new US-based subsidiary named TikTok US Data Security (USDS) to house business functions that touch US user data, or which could be sensitive from a national security perspective (like content moderation functions impacting Americans). Along with giving the government the right to conduct background checks on potential USDS hires (and block those hires from happening!), TikTok committed as part of Project Texas to host all US-based traffic on Oracle-managed servers, with strict and audited limits on how US data could travel to non-US-based parts of the company’s infrastructure. Needless to say, Oracle stands to make a considerable amount of money from the whole arrangement.

Yesterday’s appearance by TikTok CEO Shou Zi Chew before the House Energy and Commerce Committee shows that even those steps, and the $1.5 billion TikTok are reported to have spent standing up USDS, may prove to be inadequate to stave off the pitchfork mob calling for TikTok’s expulsion from the US. The chair of the committee, Representative Cathy McMorris Rodgers of Washington, didn’t mince words in her opening statement, telling Chew, “Your platform should be banned.”

Even as I believe at least some of the single-minded focus on TikTok is a moral panic driven by xenophobia, not hard evidence, I share many of the national security concerns raised about the app. 

Chief among these concerns is the risk of exfiltration of user data to China — which definitely happened with TikTok, and is definitely a strategy the Chinese government has employed before with other American social networking apps, like Grindr. Espionage is by no means a risk unique to TikTok; but the trove of data underlying the app’s uncannily prescient recommendation algorithm, coupled with persistent ambiguities about ByteDance’s relationship with Chinese government officials, poses a legitimate set of questions about how TikTok user data might be used to surveil or extort Americans.

But there’s also the more subtle question of how an app’s owners can influence what people do or don’t see, and which narratives on which issues are or aren’t permitted to bubble to the top of the For You page. Earlier this year, Forbes reported the existence of a “heating” function available to TikTok staff to boost the visibility of content; what’s to stop this feature from being used to put a thumb on the scale of content ranking to favor Chinese government viewpoints on, say, Taiwanese sovereignty? Chew was relatively unambiguous on this point during the hearing, asserting that the platform does not promote content at the request of the Chinese government, but the opacity of the For You page makes it hard to know with certainty why featured content lands (or doesn’t land) in front of viewers.

Whether or not you take Chew’s word for it that TikTok hasn’t done any of the nefarious things members of Congress think it has — and it’s safe to say that members of the Energy and Commerce Committee did not take him at his word — the security concerns stemming from the possibility of TikTok’s deployment as a tool of Chinese foreign policy are at least somewhat grounded in reality. The problem is that solutions like Project Texas, and a single-minded focus on China, may end up having the counterproductive result of making the app less resilient to malign influence campaigns targeting the service’s 1.5 billion users around the world.

A key part of how companies, TikTok included, expose and disrupt coordinated manipulation is by aggregating an enormous amount of data about users and their behavior, and looking for anomalies. In infosec jargon, we call this “centralized telemetry” — a single line of sight into complex technical systems that enables analysts to find a needle (for instance, a Russian troll farm) in the haystack of social media activity. Centralized telemetry is incredibly important when you’re dealing with adversarial issues, because the threat actors you’re trying to find usually aren’t stupid enough to leave a wide trail of evidence pointing back to them.

Here’s a specific example of how this works:

In September 2020, during the first presidential debate of the 2020 US elections, my team at Twitter found a bunch of Iranian accounts with an awful lot to say about Joe Biden and Donald Trump. I found the first few — I wish I was joking about this — by looking for Twitter accounts registered with phone numbers with Iran’s +98 country code that tweeted with hashtags like “#Debate2020.” Many were real Iranians, sharing their views on American politics; others were, well, this:

Yes, sometimes even government-sponsored trolling campaigns are this poorly done.

As we dug deeper into the Iranian campaign, we noticed that similar-looking accounts (including some using the same misspelled hashtags) were registered with phone numbers in the US and Europe rather than Iran, and were accessing Twitter through different proxy servers and VPNs located all over the world. Many of the accounts we uncovered looked, to Twitter’s systems, like they were based in Germany. It was only by comparing a broad set of signals that we were able to determine that these European accounts were actually Iranian in origin, and part of the same campaign.

Individually, the posts from these accounts didn’t clearly register as being part of a state-backed influence operation. They might be stupid, offensive, or even violent — but content alone couldn’t expose them. Centralized telemetry helped us figure out that they were part of an Iranian government campaign.
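For readers who want a more concrete picture of what “comparing a broad set of signals” means in practice, here’s a minimal, hypothetical sketch in Python. The account fields, thresholds, and sample data are all invented for illustration (this is not any platform’s actual detection pipeline), but it captures the core idea: no single signal is damning on its own, while a shared misspelled hashtag plus mismatched registration and login geography, across a handful of accounts behaving in lockstep, is enough to surface a cluster for human review.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical, simplified account record; real telemetry has far more fields.
    account_id: str
    phone_country: str   # country of the phone number used at registration, e.g. "IR"
    login_country: str   # country the account appears to connect from (IP / VPN exit)
    hashtags: frozenset  # hashtags the account has recently posted

def suspicious_clusters(accounts, min_size=3):
    """Group accounts by the exact set of hashtags they post, then flag groups
    whose registration and login geography don't line up -- a toy stand-in for
    the cross-signal comparison described above."""
    by_hashtags = defaultdict(list)
    for acct in accounts:
        by_hashtags[acct.hashtags].append(acct)

    flagged = []
    for tags, group in by_hashtags.items():
        if len(group) < min_size:
            continue
        phone_countries = {a.phone_country for a in group}
        login_countries = {a.login_country for a in group}
        # Identical scripted content, but conflicting geography signals:
        # worth routing to a human analyst.
        if len(phone_countries) > 1 or phone_countries != login_countries:
            flagged.append((tags, [a.account_id for a in group]))
    return flagged

# Invented sample data, loosely modeled on the Iranian example above.
accounts = [
    Account("a1", "IR", "IR", frozenset({"#Debatte2020"})),  # same misspelled hashtag
    Account("a2", "IR", "DE", frozenset({"#Debatte2020"})),  # Iranian phone, German VPN
    Account("a3", "US", "DE", frozenset({"#Debatte2020"})),  # US phone, same content
    Account("a4", "US", "US", frozenset({"#Debate2020"})),   # ordinary user
]

for tags, ids in suspicious_clusters(accounts):
    print(sorted(tags), ids)   # ['#Debatte2020'] ['a1', 'a2', 'a3']
```

Real systems obviously rely on far more signals (device fingerprints, IP ranges, registration-time bursts, behavioral patterns), but the structure is the same: the detection only works if the analyst can see all of the accounts and all of the signals in one place.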

Let’s turn back to TikTok, though:

TikTok… do a lot of this work right now, too! They’ve hired a lot of very smart people to work on coordinated manipulation, fake engagement, and what they call “covert influence operations” — and they’re doing a pretty good job! There’s a ton of data about their efforts in TikTok’s (also quite good!) transparency report. Did you know TikTok blocks an average of 1.8 billion fake likes per month? (That’s a lot!) Or that they remove an average of more than half a million fake accounts a day? (That’s also a lot!) And to their credit, TikTok’s state-affiliated media labels appear on outlets based in China. TikTok have said for years that they invest heavily in addressing manipulation and foreign interference in elections — and their own data shows that that’s generally true.

Now, you can ask very reasonable questions about whether TikTok’s highly capable threat investigators would expose a PRC-backed covert influence operation if they found one — the way Twitter and Facebook did with a campaign associated with the US Department of Defense in 2022. I personally find it a little… fishy… that the company’s Q3 2022 transparency report discloses a Taiwanese operation, but not, say, the TikTok incarnation of the unimaginably prolific, persistent, and platform-agnostic Chinese influence campaign Spamouflage Dragon (which Twitter first attributed to the Chinese government in 2019, and which continues to bounce around every major social media platform).

But anyway: the basic problem with Project Texas and the whole “we’re going to air-gap US user data from everything else” premise is that you’re establishing geographic limits around a problem that does not respect geography — and doing so meaningfully hinders the company’s ability to find and shut down the very threats of malign interference that regulators are so worried about. 

Let’s assume that USDS staff have a mandate to go after foreign influence campaigns targeting US users. The siloed nature of USDS means they likely can only do that work using data about the 150 million or so US-based users of TikTok, a 10% view of the overall landscape of activity from TikTok’s 1.5 billion global users. Their ability to track persistent threat actors as they move across accounts, phone numbers, VPNs, and hosting providers will be constrained by the artificial borders of Project Texas.

(Or, alternatively, do USDS employees have unlimited access to TikTok’s global data, but not vice versa? How does that work under GDPR? The details of Project Texas remain a little unclear on this point.)

As for the non-USDS parts of TikTok, otherwise known as “the overwhelming majority of the platform”: USDS turns US-based accounts into a data void. TikTok’s existing threat hunting team will be willfully blind to bad actors who host their content in the US — which, not for nothing, they definitely will do as a strategy for exploiting this convoluted arrangement.
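To extend the hypothetical sketch from earlier: if you partition that same invented dataset along geographic lines before running the analysis, so that USDS analysts see only US-registered accounts and everyone else sees only the rest, the cluster that was obvious in the combined view falls below the detection threshold on both sides.

```python
# Continuing the sketch above: partition the same invented dataset along
# geographic lines, mimicking a USDS-style silo, and re-run the detector.
us_only   = [a for a in accounts if a.phone_country == "US"]
rest_only = [a for a in accounts if a.phone_country != "US"]

print(suspicious_clusters(accounts))   # combined view: the a1/a2/a3 cluster is flagged
print(suspicious_clusters(us_only))    # US-only silo: too few related accounts, nothing flagged
print(suspicious_clusters(rest_only))  # non-US silo: likewise nothing flagged
```

Neither silo is wrong, exactly; each just sees too little of the campaign to recognize it as one.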

USDS may seem like a great solution if your goal is not to get banned in the US (although yesterday’s hearing suggests that it may actually be a failure when it comes to that, too). But it’s a terrible solution if your goal is to let threat investigators find the bad actors actually targeting the people on your platform. Adversarial threats don’t respect geographic limits; they seek out the lowest-friction, lowest-risk ways to carry out their objectives. Project Texas raises the barriers for TikTok to find and disrupt inauthentic behavior, and makes it less likely that the company’s staff will be successful in their efforts to find and shut down these campaigns. I struggle to believe the illusory benefits of a US-based data warehouse exceed the practical costs the company necessarily takes on with this arrangement.

At the end of the day, Project Texas’s side effects are another example of the privacy vs security tradeoffs that come up again and again in the counter-influence operations space. This work just isn’t possible to do without massive troves of incredibly privacy-sensitive user data and logs. Those same logs become a liability in the event of a data breach or, say, a rogue employee looking to exfiltrate information about activists to a repressive government trying to hunt them down. It’s a hard problem for any company to solve — much less one doing so under the gun of an impending ban, like TikTok have had to. 

But, whatever your anxieties about TikTok (and I have many!), banning it, and the haphazard Project Texas reaction to a possible ban, won’t necessarily help national security, and could make things worse. In an effort to stave off Chinese surveillance and influence on American politics, Project Texas might just open the door for a bunch of other countries to be more effective in doing so instead.

Yoel Roth is a technology policy fellow at UC Berkeley, and was the head of Trust & Safety at Twitter.

Companies: oracle, tiktok, tiktok usds


Comments on “How Forcing TikTok To Completely Separate Its US Operations Could Actually Undermine National Security”

Anonymous Coward says:

the security concerns stemming from the possibility of TikTok’s deployment as a tool of Chinese foreign policy are at least somewhat grounded in reality.

And when the same suspicion is applied to Google, Facebook, Twitter, Microsoft, Oracle, etc., the US will just shrug and accept the loss of its ability to spy on the rest of the world.

Anonymous Coward says:

Thank you, Mr. Roth, for the quite lucid explanation of why banning TikTok is problematic. Of course, this particular problem is just one of many and likely to pale in comparison to the First Amendment problem. The real problem is that the folks we elected to Congress are either too lazy or not competent enough to understand the impact of the lack of a national privacy policy and supporting statutes. Most of the user preferences available to the operators of TikTok can be purchased for a very modest sum from a myriad of data brokers. TikTok may be able to derive some more subtle inclinations based on specific user behavior, but almost all major attitudes are already aggregated. This is just more of the performative b.s. we get from a Congress whose primary expertise is avoiding all of the important issues they should be addressing but won’t, because their big donors don’t want them to.


Anonymous Coward says:

Re: Re:

Nope. The guy who was part of the Trust & Safety team that gave Trump nigh-infinite affordances until he outlived his profitability, and who continued to let high-profile bad actors like Libs Of TikTok steer her followers toward drag shows with the intent to cause violence, can bugger off and stop pretending he’s some expert on security and trust & safety.

T.L. (profile) says:

The problem is TikTok is dealing with politicians who are tech-averse and tech-illiterate, and therefore too untrustworthy to legislate anything technology-related because of their lack of understanding about it.

Couple that with the fact that China hawks only care about sticking it to China, without regard for the many flaws in their ideas for dealing with TikTok (the issues with Project Texas; the illegality and political problems of a ban; the fact that a forced sale could easily be stalled by a China frustrated with an economic containment strategy that risks provoking a broader trade war that would hurt the U.S. economy, and would net too small a circle of potential buyers to be realistic given TikTok’s high valuation) and without regard for the damage TikTok’s ouster from the U.S. market would do to the creator economy and the small businesses that advertise on the platform, and you get what happened at Chew’s hearing.

If there’s any solace in what happened, the reactions from those who watched suggest the hearing very much worked against the pro-ban movement in Congress. Experts like Yoel Roth need to get in touch with Washington about the problems with its options and convince lawmakers that the best solution is actually investigating the claims, rather than regurgitating unsubstantiated talking points, as well as passing privacy legislation of some kind (if not one encompassing all businesses and social media platforms, then at least one regulating how data is handled by foreign companies and limiting whom data brokers can sell it to) that ensures TikTok users’ data is better protected.

Anonymous Coward says:

Re:

Clever Hans was a horse that became famous for supposedly being able to do math. Unsurprisingly, Hans was actually not able to do math, but he was very good at observing the reactions of his trainer, and could tell when to stop tapping his hoof on the ground once he reached the right answer.

There are plenty of folks in Congress who know no more about technology (and the Constitution) than Hans knew about math. But just as Hans wasn’t “wrong” when he tapped his hoof 4 times when asked what 2 + 2 is, Congress isn’t wrong to be concerned about the problem of the PRC having a massive data capture tool on the smartphones of half of the US population.

People who know a lot more about technology have briefed Congress on how the PRC has already been observed using TikTok as an intelligence gathering tool and attempting to use it as a misinformation platform. What Congress needs now are people who understand a lot more about the Constitution and the law (ironically, as they are themselves the lawmakers) to help them chart a path to addressing these problems in a constitutional manner. With any luck, they might also be able to address privacy concerns on other social media platforms that aren’t as easily used as intelligence gathering tools for nation states. (Yeah, we all know that Facebook and Google are already running private intelligence gathering tools very effectively.) Unfortunately, our congressional hoof tappers aren’t looking at the Constitution to know when to stop tapping. They’re paying attention to the same media coverage their constituents are watching.

JBD44 (profile) says:

TikTok--Cyber Actors

There are several real cyber attacks which have actually been carried out by TikTok. This is not a “ban media,” anti-freedom-of-expression, or anti-Chinese let’s-make-the-public-fear issue. These folks have attacked our phones, for real. In one instance, they exploited an iPhone feature that caches passwords in order to gain entry into iPhone applications. The second incident has not been put out there yet. (Sorry, not everyone can know sensitive things.) The reality is we need to take action on them and, as someone pointed out, on any collector of information gathered without authorization. Another interesting part of articles like this is the tremendous amount of money that China has paid to journalists and influencers to say favorable things about China and quash anything that criticizes it. I believe the figure was over 100,000 journalists worldwide being paid to do this. That’s a scary statistic, and I can’t help but look twice and wonder about the impartiality of every article that seems to go against known, verified issues linked to China, whether the Wuhan lab or TikTok.


Del says:

Tik Tok / what a crock

Typical click bait served up by Masnick and TechShlock. This site is predictably lightweight, snotty and just – stupid. If there were any way to erase my click, I’d do it in a heartbeat.

This has been a garbage site for years. Only clicking because it showed up – due to the absurd headline – on TechMeme. Well, it’s been years since I read this POS and it will be a lifetime before I again make the same mistake.
