China and India are widely expected to be two of the most powerful global players in the decades to come. In some ways, they are alike. As Techdirt has reported, both have dismal records when it comes to Internet freedom, online censorship and privacy. But they differ in terms of their impact on the IT sector outside their home countries. China has produced a worldwide success story in TikTok, alongside well-known Internet giants such as Alibaba, Baidu and Tencent. India, by contrast, is chiefly famous in the computing world for its vast digital biometric identity system, Aadhaar. That may be about to change, thanks to another Indian creation, the Unified Payments Interface (UPI).
As its rather boring name suggests, UPI is a way of allowing all the different payment systems and companies that make up India’s financial sector to interoperate seamlessly. In practice, this means that Indians can send money to more or less anyone, or any company, in India, with a few clicks on a UPI mobile phone app, without worrying about the details. An article from 2017 on Medium provides an excellent detailed history of the project up to that time. A post on Rest of World brings the story up to date:
UPI, introduced in 2016, has surpassed the use of credit and debit cards in India. Nearly 260 million Indians use UPI — in January 2023, it recorded about 8 billion transactions worth nearly $200 billion. The transactions can be facilitated using mobile numbers or QR codes, ranging from a few cents to 100,000 rupees ($1,221) a day.
At the heart of UPI lies Aadhaar:
Users without debit cards can use a UPI address — similar to an email address — to transfer money from their Aadhaar-linked bank accounts in real time. Over the past decade, the government has used Aadhaar as a building block for a host of digital services, such as payments, e-signatures, and health apps; these interlinked sets of digital platforms are called India Stack.
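The mobile-number and QR-code flows mentioned above both ultimately resolve to a UPI address, also called a virtual payment address (VPA). As a rough illustration (not from the article: the parameter names follow NPCI's published `upi://pay` deep-linking convention, but the merchant VPA and amounts here are made up), the kind of payment link a UPI QR code encodes can be assembled like this:

```python
from urllib.parse import urlencode

def upi_payment_link(vpa: str, payee_name: str, amount: str, note: str = "") -> str:
    """Build a upi://pay deep link; UPI QR codes encode this same string."""
    params = {
        "pa": vpa,         # payee virtual payment address, e.g. "merchant@okbank"
        "pn": payee_name,  # payee display name shown to the payer
        "am": amount,      # amount as a decimal string
        "cu": "INR",       # currency; UPI settles in Indian rupees
    }
    if note:
        params["tn"] = note  # optional transaction note
    return "upi://pay?" + urlencode(params)

# Hypothetical merchant VPA for illustration only
link = upi_payment_link("merchant@okbank", "Chai Stall", "25.00", note="two cups")
print(link)
```

Any UPI-enabled app that scans or opens such a link can then debit the payer's linked bank account in real time, which is what makes the apps interoperable regardless of which bank or payment company issued them.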
UPI is clearly a big success in India, not least for providing poorer sectors of society with advanced financial services via their mobile phones. But the real story may be the one developing outside India:
That makes sense, because India is one of the largest remittance recipients in the world, receiving around $100 billion in 2022. But there’s another key aspect:
India’s federal bank has been pushing for the internationalization of UPI since 2020. One of the reasons for this aggressive global expansion is to mitigate geopolitical risk. In February 2022, the U.S. and its Western allies blocked Russian banks’ access to Swift, an international payments system used by thousands of financial institutions, hurting Russia severely. It spooked other countries about secondary sanctions — especially India, which continues to purchase crude oil from Russia.
A global roll-out of UPI would obviously be great news for Russia, offering a way to circumvent the ban on using Swift that was imposed following its invasion of Ukraine. It would also bolster India’s geopolitical power, since it controls the underlying UPI technology, and it would place Indian companies at the heart of this emerging international payments system. UPI may have a dull name and low visibility currently. But behind the scenes the implications of its wider adoption outside India could be dramatic, and just as influential as China’s more obvious approach to bolstering its soft power in the online world.
Last week, we wrote about the positively ridiculous lawsuit filed by the Seattle Public School district against basically all of social media, claiming social media was “a public nuisance.” As we noted, the school district appeared to be wasting taxpayer money that could have gone to educating its kids on a lawsuit that screamed out to the public that the district had totally failed to teach its children how to be good digital citizens, how to use the internet properly, and how to be prepared for living life in the age of the internet.
And now it appears that the Mesa, Arizona school district has decided to do the same thing. Using the same lawyers. The law firm Keller Rohrback appears to be trying to carve out this corner of the market as its own: having public school districts waste a shitload of time and resources to publicly proclaim that they can’t prepare the children they’re in charge of educating for the modern internet world.
The Mesa complaint is, not surprisingly, similar to the Seattle complaint. It’s suing the same companies (really: Meta, Google, Snap, TikTok). Like the Seattle complaint, it argues that social media is a “public nuisance.” Like the Seattle complaint, it says that Section 230 doesn’t protect the companies (it’s wrong). Like the Seattle complaint, it cites a few cherry-picked studies claiming that social media is bad for kids, and ignores more comprehensive studies that argue the opposite. Like the Seattle complaint, it goes hard in proving that Mesa public schools apparently are staffed by administrators and teachers who suck at educating children, and find themselves powerless against… entertainment.
In short, it’s pathetic.
The one main “difference” between the Seattle complaint and the Mesa one is that in Mesa they’ve added a “negligence” claim, saying that social media companies “owe” the school district “a duty not to expose Plaintiff to an unreasonable risk of harm….”
This is all laughably stupid, and not at all how the law works. I mean, it’s possible that the lawyers at Keller Rohrback figure that if they file enough of these lawsuits, eventually they’ll find a judge who lets the moral panic of “social media is bad for kids” overwhelm the actual legal issues, but it’s difficult to see it standing up to any legitimate judicial scrutiny.
Of course, now that we have these two lawsuits, it’s almost certain that the lawyers are shopping similar suits to other school districts. One hopes those districts will reject this nonsense. The whole point of these lawsuits is almost certainly to try to shake down the social media companies to get them to settle, but that seems unlikely.
Either way, if you’re a parent of a student in the Mesa public schools, you should be asking why your school’s administrators seem to be publicly admitting that they can’t teach your children how to deal with the modern internet world.
I just wrote about Utah’s ridiculously silly plans to sue every social media company for being dangerous to children, in which I pointed out that the actual research doesn’t support the underlying argument at all. But I forgot that a few weeks ago, Seattle’s public school district actually filed just such a lawsuit, suing basically every large social media company for being a “public nuisance.” The 91-page complaint is bad. Seattle taxpayers should be furious that their taxes, which are supposed to be paying for educating their children, are, instead, going to lawyers to file a lawsuit so ridiculous that it’s entirely possible the lawyers get sanctioned.
The lawsuit was filed against a variety of entities and subsidiaries, but basically boils down to suing Meta (over Facebook, Instagram), Google (over YouTube), Snapchat, and TikTok. Most of the actual lawsuit reads like any one of the many, many moral panic articles you read about how “social media is bad for you,” with extremely cherry-picked facts that are not actually supported by the data. Indeed, one might argue that the complaint itself, filed by Seattle Public Schools lawyer Gregory Narver and the local Seattle law firm of Keller Rohrback, is chock full of the very sort of misinformation that they so quickly wish to blame the social media companies for spreading.
First: as we’ve detailed, the actual evidence that social media is harming children basically… does not exist. Over and over again, studies show a near total lack of evidence. Indeed, as recent studies have shown, the vast majority of children get value from social media. There are plenty of moral-panicky pieces from adults freaked out about what “the kids these days” are doing, but little evidence to support any of it. Indeed, the parents often seem to be driven into a moral panic fury by… misinformation they (the adults) encountered on social media.
The school’s lawsuit reads like one giant aggregation of basically all of these moral panic stories. First, it notes that the kids these days, they use social media a lot. Which, well, duh. But, honestly, when you look at the details it suggests they’re mostly using them for entertainment, meaning that it hearkens back to previous moral panics about every new form of entertainment from books, to TV, to movies, etc. And, even then, none of this even looks that bad? The complaint argues that this chart is “alarming,” but if you asked kids about how much TV they watched a couple decades ago, I’m guessing it would be similar to what is currently noted about YouTube and TikTok (and note that others like Facebook/Instagram don’t seem to get that much use at all according to this chart, but are still being sued):
There’s a whole section claiming to show that “research has confirmed the harmful effects” of social media on youth, but that’s false. It’s literally misinformation. It cherry-picks a few studies, nearly all of which are by a single researcher, and ignores the piles upon piles of research suggesting otherwise. Hell, even the graphic above that it uses to show the “alarming” addiction to social media is from Pew Research Center… the organization that just released a massive study about how social media has made life better for teens. Somehow, the Seattle Public Schools forgot to include that one. I wonder why?
Honestly, the best way to think about this lawsuit is that it is the Seattle Public School system publicly admitting that they’re terrible educators. While it’s clear that there are some kids who end up having problems exacerbated by social media, one of the best ways to deal with that is through good education. Teaching kids how to use social media properly, how to be a good digital citizen, how to have better media literacy for things they find on social media… these are all the kinds of things that a good school district builds into its curriculum.
This lawsuit is effectively the Seattle Public School system publicly stating “we’re terrible at our job, we have not prepared your kids for the real world, and therefore, we need to sue the media apps and services they use, because we failed in our job.” It’s not a good look. And, again, if I were a Seattle taxpayer — and especially if I were a Seattle taxpayer with kids in the Seattle public school district — I would be furious.
The complaint repeatedly points out that the various social media platforms have been marketed to kids, which, um, yes? That doesn’t make it against the law. While the lawsuit mentions COPPA, the law designed to protect kids, it’s not making a COPPA claim (which it can’t make anyway). Instead, it’s just a bunch of blind conjectures, leading to a laughably weak “public nuisance” claim.
Pursuant to RCW 7.48.010, an actionable nuisance is defined as, inter alia, “whatever is injurious to health or indecent or offensive to the senses, or an obstruction to the free use of property, so as to essentially interfere with the comfortable enjoyment of the life and property.”

Specifically, a “[n]uisance consists in unlawfully doing an act, or omitting to perform a duty, which act or omission either annoys, injures or endangers the comfort, repose, health or safety of others, offends decency . . . or in any way renders other persons insecure in life, or in the use of property.”

Under Washington law, conduct that substantially and/or unreasonably interferes with the Plaintiff’s use of its property is a nuisance even if it would otherwise be lawful.

Pursuant to RCW 7.48.130, “[a] public nuisance is one which affects equally the rights of an entire community or neighborhood, although the extent of the damage may be unequal.”
Defendants have created a mental health crisis in Seattle Public Schools, injuring the public health and safety in Plaintiff’s community and interfering with the operations, use, and enjoyment of the property of Seattle Public Schools.

Employees and patrons, including students, of Seattle Public Schools have a right to be free from conduct that endangers their health and safety. Yet Defendants have engaged in conduct which endangers or injures the health and safety of the employees and students of Seattle Public Schools by designing, marketing, and operating their respective social media platforms for use by students in Seattle Public Schools and in a manner that substantially interferes with the functions and operations of Seattle Public Schools and impacts the public health, safety, and welfare of the Seattle Public Schools community.
This reads just as any similar moral panic complaint would have read against older technologies. Imagine schools in the 1950s suing television or schools in the 1920s suing radios. Or schools in the 19th century suing book publishers for early pulp novels.
For what it’s worth, the school district also tries (and, frankly, fails) to take on Section 230 head on, claiming that it is “no shield.”
Plaintiff anticipates that Defendants will raise section 230 of the Communications Decency Act, 47 U.S.C. § 230(c)(1), as a shield for their conduct. But section 230 is no shield for Defendants’ own acts in designing, marketing, and operating social media platforms that are harmful to youth.

….

Section 230 does not shield Defendants’ conduct because, among other considerations: (1) Defendants are liable for their own affirmative conduct in recommending and promoting harmful content to youth; (2) Defendants are liable for their own actions designing and marketing their social media platforms in a way that causes harm; (3) Defendants are liable for the content they create that causes harm; and (4) Defendants are liable for distributing, delivering, and/or transmitting material that they know or have reason to know is harmful, unlawful, and/or tortious.
Except that, as we and many others explained in our briefs in the Supreme Court’s Gonzalez case, that’s all nonsense. Every one of those theories is still an attempt to hold the companies liable for the speech of their users. None of the actual complaints are about actions by the companies themselves; rather, the district doesn’t like the fact that the expression of these sites’ users is (it misleadingly claims) harmful to the kids in its schools.
First, Plaintiff is not alleging Defendants are liable for what third-parties have said on Defendants’ platforms but, rather, for Defendants’ own conduct. As described above, Defendants affirmatively recommend and promote harmful content to youth, such as pro-anorexia and eating disorder content. Recommendation and promotion of damaging material is not a traditional editorial function and seeking to hold Defendants liable for these actions is not seeking to hold them liable as a publisher or speaker of third party-content.
Yes, but recommending and promoting content is 1st Amendment protected speech. They can’t be sued for that. And, it’s not the “recommendation” that they’re really claiming is harmful, but the speech that is being recommended which (again) is protected by Section 230.
Second, Plaintiff’s claims arise from Defendants’ status as designers and marketers of dangerous social media platforms that have injured the health, comfort, and repose of its community. The nature of Defendants’ platforms centers around Defendants’ use of algorithms and other design features that encourage users to spend the maximum amount of time on their platforms—not on particular third party content.
One could just as reasonably argue that the harm actually arises from the Seattle Public School system’s apparently total inability to properly prepare the children in their care for modern communications and entertainment systems. This entire lawsuit seems like the school district foisting the blame for their own failings on a convenient scapegoat.
There’s a lot more nonsense in the lawsuit, but hopefully the court quickly recognizes how ridiculous this is and tosses it out. Of course, if the Supreme Court screws up everything with a bad ruling in the Gonzalez case, well, then this lawsuit should give everyone pretty clear warning of what’s to come: a whole slew of utterly vexatious, frivolous lawsuits against internet websites for any perceived “harm.”
The only real takeaways from this lawsuit should be (1) Seattle parents should be furious, (2) the Seattle Public School system seems to be admitting it’s terrible at preparing children for the real world, and (3) Section 230 remains hugely important in protecting websites against these kinds of frivolous SLAPP suits.
Back in June we wrote about a blockbuster article in Buzzfeed by Emily Baker-White detailing how ByteDance engineers in China were still accessing data on US TikTok users. That was notable, given that ByteDance, with former President Trump holding a proverbial gun to its head, had signed a big deal with Oracle to wall off its US user data. It’s also still not entirely clear what Oracle is really doing with regards to TikTok, as each announcement seems less and less informative.
Either way, in October, we again wrote about another story by Baker-White, now at Forbes, talking about how ByteDance appeared to use TikTok data to try to spy on certain US citizens, though the details were vague. As we said at the time, this seemed like the sort of thing that should spur people to pass a comprehensive federal privacy law, not that that’s happened. Instead, we’ve just been getting more and more performative nonsense focused exclusively on TikTok, rather than on the underlying problem.
Now, Baker-White has the third piece in this trilogy that ties them all together. Apparently one of the US citizens ByteDance was trying to spy on… was Baker-White herself, and it was because of the original Buzzfeed article, as the company sought to track down how the initial info was leaked. It’s quite a story and you should read the whole thing, though here’s just a snippet.
According to materials reviewed by Forbes, ByteDance tracked multiple Forbes journalists as part of this covert surveillance campaign, which was designed to unearth the source of leaks inside the company following a drumbeat of stories exposing the company’s ongoing links to China. As a result of the investigation into the surveillance tactics, ByteDance fired Chris Lepitak, its chief internal auditor who led the team responsible for them. The China-based executive Song Ye, who Lepitak reported to and who reports directly to ByteDance CEO Rubo Liang, resigned.
“I was deeply disappointed when I was notified of the situation… and I’m sure you feel the same,” Liang wrote in an internal email shared with Forbes. “The public trust that we have spent huge efforts building is going to be significantly undermined by the misconduct of a few individuals. … I believe this situation will serve as a lesson to us all.”
That is to say, it’s unfortunate but true that tech companies have a bit of a history of attacking critical journalists, and of abusing their own access to data to do so. It’s very, very bad, and it should not be allowed, but (once again) it’s not unique to TikTok, nor will any solution focused solely on TikTok do anything to “solve” this issue.
It is certainly yet another frightening example, though, and it remains ridiculous that this is how any company responds to a little critical press coverage. Tech execs need to realize that the press covers them critically. It’s how things work.
Emily Baker-White has quite the story over at Forbes, revealing how ByteDance, the Chinese company that owns TikTok, apparently planned to have its “Internal Audit and Risk Control” department spy on the location of some American citizens:
The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show. It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.
[….]
But the material reviewed by Forbes indicates that ByteDance’s Internal Audit team was planning to use this location information to surveil individual American citizens, not to target ads or any of these other purposes. Forbes is not disclosing the nature and purpose of the planned surveillance referenced in the materials in order to protect sources. TikTok and ByteDance did not answer questions about whether Internal Audit has specifically targeted any members of the U.S. government, activists, public figures or journalists.
Given the near non-stop moral panics about TikTok from the past few years, I’m absolutely sure that this will be used (yet again) to argue that TikTok is somehow uniquely problematic, when the reality (yet again) is that what it’s doing is really no different than what a ton of American internet companies already do and have done in the past. Baker-White, who is one of the best reporters on this beat, makes that clear in her reporting:
ByteDance is not the first tech giant to have considered using an app to monitor specific U.S. users. In 2017, the New York Times reported that Uber had identified various local politicians and regulators and served them a separate, misleading version of the Uber app to avoid regulatory penalties. At the time, Uber acknowledged that it had run the program, called “greyball,” but said it was used to deny ride requests to “opponents who collude with officials on secret ‘stings’ meant to entrap drivers,” among other groups.
[….]
Both Uber and Facebook also reportedly tracked the location of journalists reporting on their apps. A 2015 investigation by the Electronic Privacy Information Center found that Uber had monitored the location of journalists covering the company. Uber did not specifically respond to this claim. The 2021 book An Ugly Truth alleges that Facebook did the same thing, in an effort to identify the journalists’ sources. Facebook did not respond directly to the assertions in the book, but a spokesperson told the San Jose Mercury News in 2018 that, like other companies, Facebook “routinely use[s] business records in workplace investigations.”
So, rather than making this a big thing about “oh no TikTok/China bad,” this should be a recognition that Congress should stop bickering about stupid stuff and pushing silly performative legislation, and instead come up with an actual federal privacy law that gives the public a greater ability to protect their own privacy from all sorts of companies.
But, of course, that would take competence, and probably wouldn’t be useful for grandstanding or headlines… so it’ll never happen.
Of course, there are questions about what this means regarding TikTok’s widely discussed plans to separate US user data from ByteDance’s peeking eyes. I thought Oracle was supposed to protect us from all this, right? Right?
As you may recall, back during the Trump administration, after a bunch of kids on TikTok trolled Trump into believing one of his campaign rallies would be massively attended (which it was not), Trump decided to take out his anger on TikTok by issuing an almost certainly unconstitutional executive order demanding that TikTok’s owner, the Chinese firm ByteDance, sell TikTok to an American company. While a few potential buyers lined up to pick up the increasingly popular social media company on the cheap (due to the forced nature of the sale), White House insiders revealed that they would only approve the sale if it went to a friend of Donald Trump’s (this, of course, is corrupt nonsense, but hey, no one cares about that anymore). That left precious few options, as Trump wouldn’t approve the sale to the few companies that actually wanted to buy the whole thing outright: namely Microsoft and Walmart.
In the end, Trump wanted the company to go to his buddy Larry Ellison’s Oracle. Of course, there was a problem: Oracle had no use for TikTok as a subsidiary. Oracle does enterprise stuff, not social media. But, what Oracle does have is a cloud hosting offering that is way down the list behind industry leaders like Amazon, Microsoft, IBM, Google and others. So, Oracle and the Trump administration cooked up… a hosting deal for Oracle.
Basically, Oracle would get TikTok’s US hosting business with some vague promises of protecting data privacy, while Trump would get to help out a friend (Ellison) while pretending he’d actually accomplished something (even though it wasn’t at all what he initially demanded). Of course, this was all about posturing and headlines, so not much came of the deal for a while.
But, with new (somewhat questionable) claims about US TikTok data being accessible to ByteDance employees making news, the company apparently (two years later) has started to make good on the deal and in June announced that all of its US data was routed to Oracle.
It’s not exactly clear what this means in practice — and we’ll remind folks that there were reports last year claiming that Oracle had Chinese law enforcement customers, which raised at least some questions about its actual commitment to protecting data from the Chinese government. Also notable: Oracle has spent years gleefully trying to undermine basically all content moderation by funding groups to advocate against Section 230. Oh, and I guess we should mention, that for all the claims of TikTok being controlled by the Chinese government, remember that Oracle got its start… as a CIA project. There is something richly ironic in the idea that Oracle is somehow a trustworthy partner here.
Given all that — what exactly does it mean for Oracle to be “auditing” TikTok’s algorithms and content moderation? Given that the company doesn’t have the best track record on privacy and has worked to undermine content moderation for years now, the whole thing is… just kinda strange. Oracle’s explanation is not very clear at all:
The reviews give Oracle visibility into how TikTok’s algorithms surface content “to ensure that outcomes are in line with expectations and that the models have not been manipulated in any way,” the spokesperson said.
I mean, what does “manipulated” even mean in that sentence? Of course they’re manipulated. Someone wrote the algorithm. If they mean “not manipulated to promote Chinese propaganda” or “not manipulated to suppress anti-Chinese content” then… maybe say that. Because “manipulation” on its own doesn’t mean anything reasonable here.
There is nothing in Oracle’s history or experience that suggests the company has any useful insight into how TikTok handles recommendations or content moderation. There are plenty of reasons to think that Oracle might actually be problematic in this role.
The whole setup seems quite strange, and really feels like everyone just sort of making it up as they go along. TikTok needs some sort of US oversight to appease people who are freaked out that a Chinese-owned social media company is successful in the US, and Oracle was right there to say it would do it, in exchange for a lucrative hosting deal for its lagging cloud offering. This also feels vaguely similar to how the US has been accusing Chinese firms like Huawei and ZTE of using their tech to snoop on people… when that’s actually exactly what the US government has been doing via Cisco for years.
Also, what kind of precedent does this set? Will we be okay if other countries demand that their own favored companies have to audit US firms’ algorithms and content moderation practices? Because… that is going to create quite a mess.
As you’ll recall, last summer there was a whole performative nonsense thing with then President Trump declaring TikTok to be a national security threat (just shortly after some kids on TikTok made him look silly by reserving a million tickets to a Trump rally they never intended to attend). Trump and his cronies insisted that TikTok owner ByteDance had to sell the US operations of TikTok to an American firm. The whole rationale about this was the claim — unsupported by any direct evidence — that TikTok was a privacy risk, because it was owned by a firm based in Beijing, and that firm likely had connections to the Chinese government (as do basically all large Chinese firms). But how was that privacy risk any worse than pretty much any other company? No one ever seemed to be able to say.
Eventually, after Trump blocked both Microsoft and Walmart from doing the deal, he “approved” a non-sale, but “hosting” deal with Oracle, whose founder/chair, Larry Ellison, and CEO, Safra Catz, were both big Trump supporters. It quickly came out that TikTok’s investors deliberately went hunting for a company that they knew Trump liked, and that’s why they asked Oracle.
But, part of the announcement of the “deal” was that Oracle would make sure that US TikTok users had their data protected, and that Oracle would keep that data outside the hands of the Chinese government. That seemed somewhat rich, considering that Oracle’s initial rise to being a tech giant was built almost entirely on its close connections to the US government, and specifically the intelligence agencies. But it’s become even more rich now that the Intercept reports that Oracle actually has a lucrative business helping repressive law enforcement in China do surveillance work. The long story is absolutely full of totally shocking — but somehow not surprising — details. It starts off by noting that Oracle hosted a presentation on its own website, literally describing how it helped police in Liaoning province better sort through all of the surveillance data they collected:
Police in China’s Liaoning province were sitting on mounds of data collected through invasive means: financial records, travel information, vehicle registrations, social media, and surveillance camera footage. To make sense of it all, they needed sophisticated analytic software. Enter American business computing giant Oracle, whose products could find relevant data in the police department’s disparate feeds and merge it with information from ongoing investigations.
So explained a China-based Oracle engineer at a developer conference at the company’s California headquarters in 2018. Slides from the presentation, hosted on Oracle’s website, begin with a “case outline” listing four Oracle “product[s] used” by Liaoning police to “do criminal analysis and prediction.” One slide shows Oracle software enabling Liaoning police to create network graphs based on hotel registrations and track down anyone who might be linked to a given suspect. Another shows the software being used to build a police dashboard and create “security case heat map[s].” Apparent pictures of the software interface show a blurred face and various Chinese names. The concluding slide states that the software helped police, whose datasets had been “incomprehensible,” more easily “trace the key people/objects/events” and “identify potential suspect[s],” which in China often means dissidents.
And, yes, if you’re wondering, apparently Oracle is helping police in Xinjiang, where an ongoing genocide is being carried out against Uyghur Muslims.
In marketing materials, Oracle said that its software could help police leverage information from online comments, investigation records, hotel registrations, license plate information, DNA databases, and images for facial recognition. Oracle presentations even suggested that police could use its products to combine social media activity with dedicated Chinese government databases tracking drug users and people in the entertainment industry, a group that includes sex workers. Oracle employees also promoted company technology for China’s “Police Cloud,” a big data platform implemented as part of the emerging surveillance state.
Several Oracle materials imply that the company has gone substantially further than marketing to Chinese police, which operate as part of the country's Ministry of Public Security: One presentation detailing Oracle's database and data security products contains a slide titled "Oracle and the national defense industry." That title is followed by a list of multiple Chinese military entities, including the People's Liberation Army, China National Nuclear Corporation, and China Aerospace Science and Technology Corporation. Defense entities are also the apparent target for two additional Oracle Chinese-language presentations, the most recent of which is dated 2015, and for events called the "People's Armed Police Force-Oracle Cloud Computing Exchange Forum" and the "Oracle Xi'an Aviation and National Defense Industry Informatization Seminar" listed in Chinese on Oracle's site.
Yikes.
So the whole blasted pitch about taking control of TikTok was about keeping the data away from the Chinese, while at the very same time, the same company is helping Chinese law enforcement scan through tons of surveillance data to better suppress its people?
If you’re wondering how Oracle responded to the report: it had a spokesperson claim that the examples in these documents were “theoretical” pitch decks showing how the technology could be used, not how it was actually being used. But as the article notes, that appears not to be true. The presentation makes clear that it’s describing an actual case study, and even points out that the police supplied their own data, which they then analyzed with Oracle’s technology.
The article is quite long, but turns up a shocking number of smoking guns:
Some of the Chinese-language presentations on Oracle's site are labeled "CONFIDENTIAL," despite being publicly available. It is easy to see why someone might have wanted to keep them hidden. Taken together, they show an extreme willingness to aid in the construction of the surveillance state. One Chinese-language presentation, for example, promotes "Oracle's recommendation: a more complete platform to meet the needs of public security big data processing."
[….]
Another pitch depicts a broad array of sensitive citizen data being converted into ones and zeros, including DNA, mental illness records, and other medical information. Still other documents from China boast that Oracle technology can help police trawl internet activity to "analyze potential suspected criminal behavior among hundreds of millions of netizens," capture license plate data from "tens of thousands of cameras," and analyze call records to build out criminal networks, then link them to fingerprint and facial recognition images.
Of course, it’s not clear if the Oracle TikTok deal will ever happen. It seems that no one was much interested in it beyond the headlines, and the Biden administration doesn’t seem keen on enforcing the executive order that created the need for Oracle to step in. And, honestly, after reading this report, perhaps that’s much better for the privacy of Americans using TikTok.
On Monday, the FTC announced that it was issuing what’s known as 6(b) orders to nine social media and video streaming companies, demanding a fairly massive amount of information regarding their data collection and usage policies, as well as their advertising practices. To me, this is a huge missed opportunity. If the FTC is truly trying to gain a better understanding of data collection, privacy, and advertising practices, perhaps to better inform Congress on how to, say, pass truly comprehensive (and useful?!?) privacy legislation, then there are ways to do that. But this… is not that. This looks like a weird fishing expedition for a ton of unrelated information, from an odd selection of nine companies, many of which are in a very different business than the others. It leaves me quite perplexed.
First, let’s look at the odd selection of companies. The letters are going to:
Amazon (apparently including Twitch)
Bytedance (TikTok)
Discord
Facebook
Reddit
Snap
Twitter
WhatsApp (owned by Facebook)
YouTube
Okay, so they’ve definitely focused on many of the big players, but they’ve also left out a ton as well. Where’s LinkedIn? Or GitHub? Or WeChat? Or Pinterest? Or Quora? They list Facebook and WhatsApp… but not Instagram? Where’s Zoom? Now it’s true that sometimes the FTC will randomly sample a bunch of companies in a particular industry to get a look at certain practices — but for that to make sense, you want to sample from a set of similarly situated companies. This is… not that.
For the smaller companies on the list, such as Reddit and Discord, the FTC demanding they file a ton of paperwork in a very short time frame is going to mean a tremendous waste of time.
The second concern is the broad nature of the requests. The “sample order” is massive. There are 53 separate requests, many with multiple sub-parts. They’re not just asking for specific information, but rather going on what appears to be an incredibly broad fishing expedition for information about a wide variety of practices at all of these companies — including broad demands for future strategies and plans. For example, beyond just information on the number of users, it demands all documents relating to “business strategies or plans,” “research and development efforts,” “strategies or plans to reduce costs, improve products or services…” It also seems to be demanding all “presentations to management committees, executive committees, and boards of directors.”
That feels like a fishing expedition, rather than an attempt to actually understand data collection and usage practices.
There are categories of information included here that I think it would be useful for the FTC to understand. But there’s just so much information requested that it seems likely to bury the useful information.
The one FTC Commissioner who dissented from this effort, Noah Joshua Phillips, raises important questions in his dissent:
Effective 6(b) orders look carefully at business practices in which companies engage in a manner designed to elicit information, understand it, and then present it to the public in a way that is usable and can form a basis for sound public policy.

The first step is to select a group of recipients that will permit such examination, usually a group of firms engaged in conduct that can be compared. But the logic behind the choice of recipients here is not clear at all. The 6(b) orders target nine entities: Facebook, WhatsApp, Snap, Twitter, YouTube, ByteDance, Twitch, Reddit, and Discord. These are different companies, some of which have strikingly different business models. And the orders omit other companies engaged in business practices similar to recipients, for example, Apple, Gab, GroupMe, LinkedIn, Parler, Rumble, and Tumblr, not to mention other firms the data practices of which have drawn significant government concern, like WeChat. The only plausible benefit to drawing the lines the Commission has is targeting a number of high profile companies and, by limiting the number to nine, avoiding the review process required under the Paperwork Reduction Act, which is not triggered if fewer than ten entities are subject to requests.
Phillips calls out the same broad demands I raised above regarding business plans, R&D and presentations, noting:
Such a request would be suited to an antitrust investigation. But as part of an inquiry ostensibly aimed at consumer privacy practices, it amounts to invasive government overreach. And that is just one of the order's 50-plus specifications.
And, finally, he highlights how this effort is just demanding way too much information to be of use for a comprehensive policy recommendation:
The biggest problem is that today's 6(b) orders simply cover too many topics to make them likely to result in the production of comparable, usable information - yet another feature proper oversight and public comment could have flagged. Rather than a carefully calibrated set of specifications designed to elicit information that the agency could digest and analyze as a basis for informing itself, Congress, stakeholders, and the public, these 6(b) orders instead are sprawling and scattershot. Their over 50 specifications, most with numerous and detailed subparts, address topics including, but not limited to: advertising (reach, revenue, costs, and number and type); consumer data (collection, use, storage, disclosure, and deletion); as noted above, all strategic, financial, and research plans; algorithms and data analytics; user engagement and content moderation; demographic information; relationships with other services; and children and teens (policies, practices, and procedures).
Recipients of 6(b) orders typically negotiate to limit their productions, to tailor them in light of their specific business models and business practices. Perhaps the Commission will push back on attempts to do so, devoting additional lawyers to litigating the orders and having a federal judge oversee them, rather than OIRA. Or negotiation may reduce the burdens. But if that happens, each recipient will be responding to a different set of negotiated specifications. That certain of the companies in question have very different business models makes this even more likely. The end result of that is, say, the agency learning a lot about one recipient's advertising practices, but not as much about its algorithms. For another recipient, the agency might receive information about privacy practices but very little about its plans to expand. Each of the nine recipients will produce differing, if any, amounts of information to each of the 50-plus specifications.
I actually think it would be a good thing for the FTC to better understand how these companies work and their practices. I think it could be useful for the agency to gain such an understanding, and then make recommendations on a comprehensive federal privacy law. But I don’t see how this fishing expedition does any of that. Instead, it just asks for basically everything and the kitchen sink from a somewhat random selection of companies, some of which will have difficulty producing all of this information.
Yesterday we noted that TikTok had made a filing with the government asking what the fuck was going on with the ban on its application that was supposed to go into effect this week. While a court had issued an injunction saying the Commerce Department couldn’t put the ban into effect, the Trump administration had basically said nothing since then, and the ban was still set to go into effect yesterday.
Late yesterday, the Commerce Department put out a notice basically saying that it’s complying with the injunction issued by the court, and therefore not implementing the executive order and the ban:
However, on October 30, 2020, the District Court issued an Order granting the Plaintiffs’ renewed motion for a preliminary injunction. This Order enjoined the Department from enforcing the Identification and the prohibition on transactions identified in Paragraphs 1-6 above.

The Department is complying with the terms of this Order. Accordingly, this serves as NOTICE that the Secretary's prohibition of identified transactions pursuant to Executive Order 13942, related to TikTok, HAS BEEN ENJOINED, and WILL NOT GO INTO EFFECT, pending further legal developments.
Of course, the Commerce Department saying it won’t enforce the order doesn’t answer the larger question of whether or not the US government is still demanding that ByteDance sell off all of TikTok’s US assets — or even whether or not the grifty non-sale to Oracle will suffice.
Basically, highlighting how much of a joke this whole thing was, it seems that the supposed “national security” rationale behind all of this was complete garbage, and since Trump has his hands full trying to pretend he won the election he very clearly lost, everyone’s just going to let this slide until the Biden administration comes in and likely drops the executive order altogether. But kudos to Larry Ellison for getting a lucrative hosting deal.
We’ve repeatedly made it pretty clear that President Trump’s effort to ban TikTok is little more than a performative, xenophobic, idiotic mess. For one, the effort appears more focused on trying to get Trump-allied Oracle a new hosting deal than any serious concern about consumer privacy and security. Two, banning a teen dancing and lip syncing app does jack shit in terms of thwarting China or protecting U.S. consumer privacy, since the U.S. telecom, app, and adtech markets are largely an unaccountable privacy mess making it trivial to obtain this kind of data elsewhere.
Further highlighting the performative nature of the proposed ban, TikTok this week effectively stated that Trumpland appears to have forgotten about the proposed ban entirely. TikTok filed a petition this week in a US Court of Appeals calling for a review of actions by the Trump administration's Committee on Foreign Investment in the United States (CFIUS), pointing out that the deadline for ByteDance to sell off its US assets over national security concerns came and went this week with no action from Trumpland or word on any extension.
Apparently the whole TikTok thing fell off the radar as the administration focuses on pretending it didn’t lose the election. Whoops:
“For a year, TikTok has actively engaged with CFIUS in good faith to address its national security concerns, even as we disagree with its assessment,” TikTok says in a statement to The Verge. “In the nearly two months since the President gave his preliminary approval to our proposal to satisfy those concerns, we have offered detailed solutions to finalize that agreement - but have received no substantive feedback on our extensive data privacy and security framework.”
This is, of course, because the TikTok ban was largely performative bullshit and cronyism, designed to drum up some hysteria over China and provide Trump with leverage in his harmful trade war while driving some additional cash and influence to his U.S. allies.
If the Trump administration and GOP genuinely cared about U.S. consumer privacy and security, there’s a laundry list of more pressing issues they could do something about, including shoring up vulnerable U.S. telecom infrastructure, passing even a baseline U.S. privacy law for the internet era, better policing the widespread abuse of <a href="https://www.techdirt.com/articles/20190108/11090741358/another-day-another-massive-cellular-location-data-privacy-scandal-well-probably-do-nothing-about.shtml">user location data</a> (by corporations and the government alike), shoring up security in the internet of broken things sector, securing election integrity, and stopping, you know, trying to destroy encryption.
Neither Trump Inc. nor the Trump-allied GOP does any of this because both are more interested in bad faith posturing and bullshit than serious governance or public welfare. If the folks fanning their faces over TikTok truly cared about consumer privacy and security, we’d do something about the broader, unaccountable mess that is adtech, app, government, and telecom data privacy. Instead we get a giant pile of bullshit and a mountain of billable legal man hours over something that was never serious adult policy to begin with.