We noted that the tool was built on the back of services from a company named Onerep, which basically offers the same service. We also noted that the effort was likely a game of whac-a-mole given the sheer volume of data brokers and other companies trafficking in consumer data in a country too corrupt to pass even a baseline privacy law for the internet era.
More specifically, Krebs found that Onerep CEO and founder Dimitri Shelest had founded dozens of data-hoovering “people finder” type websites over the years, including Nuwber, a data broker with a checkered past that sells detailed consumer behavior, location, and other data gleaned from user devices.
Shelest was forced to issue an apology for not being more up front about his not insignificant role in an industry he professes to be protecting people from:
“I get it. My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.”
Mozilla issued its own statement clarifying that no user data was put at risk, but that “the outside financial interests and activities of Onerep’s CEO do not align with our values.”
We’ve noted repeatedly how the U.S.’ corrupt refusal to pass a privacy law or regulate data brokers isn’t much of a laughing matter. The largely unregulated industry is now routinely caught up in dangerous scandals involving over-collecting consumer data, then selling access to any nitwit with a nickel (like, say, right wing activists targeting abortion clinic visitors with misinformation).
Mozilla, which publishes numerous excellent reports on consumer privacy, likely provided Onerep with a reputation boost. But this latest mess once again highlights how modern America’s online privacy problems aren’t something that can be fixed with an app. The rot runs deep, and fixing it requires passing a privacy law — and giving regulators the staff and resources they’ll need to enforce it.
Unfortunately, when you have so many interconnected industries making a killing on the existing dysfunction (even, apparently, the ones claiming to help), meaningful reform is hard to come by.
Earlier this month the New York Times published a major story confirming that automakers collect driver behavior data, then sell it to a long list of companies. That includes insurance companies, which are now jacking up insurance rates if they see behavior in the dataset they don’t like.
The absolute bare minimum you could expect from the auto industry here is that they’re doing this in a way that’s clear to car owners. But of course they aren’t; they’re burying “consent” deep in the mire of some hundred-page end user agreement nobody reads, one usually tied not to the car purchase itself but to the apps consumers now use to manage roadside assistance and other programs.
“OnStar Smart Driver customer data is no longer being shared with LexisNexis or Verisk,” a G.M. spokeswoman, Malorie Lucich, said in an emailed statement. “Customer trust is a priority for us, and we are actively evaluating our privacy processes and policies.”
Of course, if “customer trust” were actually a priority, GM would have done the absolute bare minimum here and openly and clearly informed consumers this was happening. Instead, like most companies, they buried it fifty pages deep in the end user agreement for embedded support and monitoring services.
And they did that because they know there’s no meaningful penalty.
The U.S. still has no meaningful modern privacy law. And U.S. privacy regulators have been steadily defanged, defunded, understaffed, and boxed into a corner for the better part of a generation under the pretense that this would unlock vast and untold innovative synergies. Instead, as consumer groups and privacy activists long warned, it created an environment ripe for widespread abuse.
Florida resident Romeo Chicco, whose insurance rates skyrocketed after his Cadillac collected his driving data, has filed a complaint seeking class-action status against GM, OnStar, and LexisNexis. Federal regulators will also likely come knocking, even if a four-year investigation likely results in a fine that’s a tiny percentage of the money GM made from monetizing the data.
At that point automakers (which a recent Mozilla report stated have some of the worst privacy and security standards in all of tech) will have moved on to abusing your privacy in entirely new ways (or in the same way, simply with a few new creative wrinkles). Such is life in a country that’s too corrupt to pass a meaningful privacy law — or adequately support the agencies tasked with existing legal enforcement.
Florida Gov. Ron DeSantis, who failed miserably in his run for president, signed a very controversial bill into law that requires age verification for porn websites and bans social media for minors under the age of 14. The act, House Bill (HB) 3, is one of the most restrictive laws of its kind to be implemented in the United States. The law takes effect on January 1, 2025, but it will be ripe for a legal showdown brought by social media companies and adult industry firms.
Like age verification laws implemented elsewhere in the country, HB 3 is broad and offers very little clarity on how its provisions are to be enforced. The Florida bill, in my view, attempts to do too much by simply relying on the “protect the children” narrative. According to HB 3, minors who want to use social media must get permission from their parents through an age check. The bill also lumps the separate debate over age verification for porn into the same measure. The legislation’s sponsors and Gov. DeSantis falsely present House Bill 3 as a data privacy measure protecting minors and adults alike. But, as we’ve seen time and again, mandatory age verification requirements – no matter how advanced or secure age verification technology may be – are actually a violation of a user’s right to privacy and anonymity on the internet as a whole.
The American Civil Liberties Union (ACLU) of Florida issued a warning discussing these very shortcomings in the weeks before DeSantis signed HB 3. The warning fell on deaf ears at the Florida State House. As ACLU of Florida legislative director Kara Gross put it: “The age-verification requirements in HB 3 place barriers between users, whether they’re adults or minors, and their constitutional right to speak online. Age verification requirements blatantly chill the speech and threaten the privacy of adults by requiring them to surrender their anonymity to engage in constitutionally protected speech.” Gross isn’t wrong.
No matter how you handle age verification, you’re still verifying your age through the use of some sort of personal information. This ranges from government identification to artificial intelligence-assisted age estimation and (now, more than ever) biometrics. While the vendors of age verification software tout high-end security, they do so by significantly downplaying or overtly dismissing the most basic lesson in security studies: no system is impenetrable. And the assumption that requiring the broad use of age-gating software can suddenly serve as a silver bullet to protect minors from viewing age-restricted content on the internet is not only faulty reasoning but very dangerous.
Beyond that, I need to remind you that all of the current legal and policy instruments being used to require age verification are unconstitutional.
Michael McGrady covers the legal and tech side of the online porn business, among other topics.
All you need is Google. That’s how things have been going in the law enforcement world. If you don’t know who you’re looking for, just ask Google to do it for you. A variety of warrants that demand Google search its data stores for personal information (that might lead investigators to find potential suspects [who can then be properly targeted with more normal warrants]) have been standard operating procedure for years.
There’s no probable cause to believe Google has committed any crimes. Nor is there necessarily even any reason to believe Google is housing data pertaining to criminal activity. At best, these warrants — ones that seek anything from mass groupings of location data to information on people using certain search words when utilizing Google’s search engine — simply assume Google has collected so much data, it’s a logical place to start an investigation.
The most common form of these Google-centric warrants is the “geofence warrant,” a warrant that asks Google to provide certain information about anybody in a certain area at a certain time. These warrants make anyone in the area a criminal suspect and, if Google complies, citizens are at the mercy of investigators who have the power to decide who is or isn’t a criminal suspect, even when the geofenced areas include things like apartment complexes, churches, or heavily trafficked business areas.
The next most popular is the “keyword” warrant. Using even more specious reasoning, investigators approach courts with warrant affidavits attesting that Google houses information on Google searches that may be relevant to the investigation. Without a doubt, Google stores information about keyword searches. But just because it does store this info doesn’t mean the keywords provided by investigators have anything to do with the crimes being investigated.
This is the latest wrinkle in the investigatory world. As Thomas Brewster reports for Forbes, keyboard warriors working for federal agencies are now using warrants and court orders to demand Google turn over information on users who may have watched certain videos that have been viewed tens of thousands of times.
Federal investigators have ordered Google to provide information on all viewers of select YouTube videos, according to multiple court orders obtained by Forbes. Privacy experts from multiple civil rights groups told Forbes they think the orders are unconstitutional because they threaten to turn innocent YouTube viewers into criminal suspects.
In a just-unsealed case from Kentucky reviewed by Forbes, undercover cops sought to identify the individual behind the online moniker “elonmuskwhm,” who they suspect of selling bitcoin for cash, potentially running afoul of money laundering laws and rules around unlicensed money transmitting.
In conversations with the user in early January, undercover agents sent links of YouTube tutorials for mapping via drones and augmented reality software, then asked Google for information on who had viewed the videos, which collectively have been watched over 30,000 times.
The feds couldn’t figure out how to set up a honeypot, nor could they figure out how to monitor these links on their own. Following these failures, they then asked a judge for permission to hassle Google into turning over information on (potentially) 30,000 different YouTube viewers. I’m sure it’s more nuanced than that, but that’s what the plain text conveys.
The unsealed court order wasn’t just fishing for a list of vague identifiers that could be winnowed down to a list of suspects and a follow-up warrant demanding actual identifying information on these ~30,000 YouTube users. No, it appears the feds led with the big ask, demanding names, addresses, phone numbers, and user activity for every viewer of these videos between January 1-8, 2023. AND(!!) it asked Google to provide IP addresses for all viewers who were not logged into (or did not possess) Google accounts.
And if you think that fishing hole is pretty fucking big, just keep reading. Brewster has tracked down a few other similar demands for YouTube viewer data and 30,000 viewers is actually on the shallow end of this metaphor. An attempt to find someone who called in a bomb threat resulted in this spectacular abuse of process:
[Federal investigators] asked Google to provide a list of accounts that “viewed and/or interacted with” eight YouTube live streams and the associated identifying information during specific timeframes. That included a video posted by Boston and Maine Live, which has 130,000 subscribers.
This was supposedly justified by the fact that one camera installed by a local business provided a continuous live stream of the area where the supposed bomb had been placed. (It does not appear that any bomb was actually placed anywhere, but a bomb threat alone is often enough to attract the attention of federal officers.)
If 30,000 users being subjected to a single federal law enforcement search is unequivocally bad, the search of perhaps 130,000 users is an almost unimaginable abuse of government power.
We still don’t know how these inexplicably broad requests were handled by Google, nor whether they were instrumental in the prosecution of criminal activity. The DOJ refused to comment on the court orders or the cases. Google has yet to say whether or not it complied with these ridiculous court orders. The court system itself hasn’t been much help to the general public, even though it’s more than willing to assist another government branch by acquiescing to its requests for secrecy.
It’s not just the Fourth Amendment in play here. There’s also the First Amendment. Much like in cases involving mass keyword searches, citizens should feel free to consume any non-illegal content they want without fearing the government may demand their content provider turn over their identifying info.
This is a scary step forward by law enforcement. Hopefully, Google has been resisting these clearly unconstitutional demands for data. And even more hopefully, courts will start seeing enough of these broad warrants that they’ll start shutting down this new form of government overreach.
We’ve noted a few times now how the quest to ban TikTok is heavily peppered with bad faith actors who historically don’t care about consumer privacy or national security. We’ve also noted how it’s performative to hyperventilate about one single sometimes-dodgy app, but ignore the broader dysfunction and corruption (like our lack of a modern privacy law, or refusal to regulate data brokers) that paved the way.
The central argument of those advocating for a TikTok ban is that it poses such a dire, unique threat to U.S. consumer privacy and national security that a ban is warranted. While TikTok certainly has engaged in idiotic behavior (like when it spied on journalists), the case for why it’s so much worse than dozens of other domestic and international companies (like data brokers) still hasn’t been publicly made.
Some of the lawmakers who were briefed last week leaked word to Axios that they were “shocked” at TikTok’s “access to personal data.” But then again, these Senators aren’t the most objective or tech-savvy folks on Earth, and it sounds like a lot of what was “revealed” to them is fairly (and unfortunately) routine across most apps, services, and hardware.
Like here, where they express ambiguous concern about China’s ability to “harvest user data” and then “weaponize it” in the form of misinformation:
“One senator said national security officials described how China can harvest user data and weaponize it through propaganda and misinformation.”
Except this is already happening across a litany of apps and services. Senator Ron Wyden’s office just got done revealing how data brokers sold abortion clinic visitor location data to right wing activists, who then turned around and harassed vulnerable women with health care misinformation. Congress hasn’t made a peep, and the press coverage the story received was relatively minuscule.
Or here, where Senators leak word to Axios that the TikTok app can “determine what users are doing on other apps,” or abuse hardware permissions to monitor user behavior:
“Another lawmaker said they were told TikTok is able to spy on the microphone on users’ devices, track keystrokes and determine what the users are doing on other apps.”
From doorbell manufacturers to cable companies to your TV set, there’s no shortage of companies, apps, or hardware vendors (many of them Chinese) that abuse hardware permissions to engage in a litany of consumer surveillance, then monetize that info globally. Congress generally couldn’t care less about the lack of privacy or consumer security in the internet-of-broken-things space or anywhere else.
We’ve noted repeatedly how international data brokers hoover up vast swaths of consumer location, behavior, demographic, and other data, using them to build elaborate consumer profiles. Access is then sold to a parade of dodgy groups, individuals, and organizations (including Chinese intelligence and right wing activists) without an iota of congressional concern.
I know I’m being redundant here, but the reason this stuff happens (whether it’s TikTok or anybody else) is because Congress has proven too corrupt to pass a meaningful internet-era privacy law. They’ve proven too corrupt to regulate data brokers, despite the fact they engage in worse behavior — at an even greater scale — than what TikTok is being critiqued for. This corruption is the real national security threat.
I suspect that if lawmakers truly had seen some kind of smoking gun related to TikTok (one that goes well above and beyond the broader market dysfunction we now see every day), it would have been leaked to every right wing news outlet imaginable during their three-year cable TV TikTok hyperventilation campaign.
I still tend to think the quest to ban TikTok is an unserious slurry of xenophobia and anti-competitive corruption posing as good faith concerns about privacy and national security, two subjects Congress very clearly and demonstrably couldn’t care any less about.
With strong bipartisan support, the U.S. House voted 352 to 65 to pass HR 7521 last week, a bill that would ban TikTok nationwide if its Chinese owner doesn’t sell the popular video app. The TikTok bill’s future in the U.S. Senate isn’t yet clear, but President Joe Biden has said he would sign it into law if it reaches his desk.
The speed at which lawmakers have moved to advance a bill with such a significant impact on speech is alarming. It has given many of us — including, seemingly, lawmakers themselves — little time to consider the actual justifications for such a law. In isolation, parts of the argument might sound somewhat reasonable, but lawmakers still need to clear up their confused case for banning TikTok. Before throwing their support behind the TikTok bill, Americans should be able to understand it fully, something that they can start doing by considering these five questions.
1. Is the TikTok bill about privacy or content?
Something that has made HR 7521 hard to talk about is the inconsistent way its supporters have described the bill’s goals. Is this bill supposed to address data privacy and security concerns? Or is it about the content TikTok serves to its American users?
From what lawmakers have said, however, it seems clear that this bill is strongly motivated by content on TikTok that they don’t like. When describing the “clear threat” posed by foreign-owned apps, the House report on the bill cites the ability of adversary countries to “collect vast amounts of data on Americans, conduct espionage campaigns, and push misinformation, disinformation, and propaganda on the American public.”
This week, the bill’s Republican sponsor Rep. Mike Gallagher told PBS Newshour that the “broader” of the two concerns TikTok raises is “the potential for this platform to be used for the propaganda purposes of the Chinese Communist Party.” On that same program, Representative Raja Krishnamoorthi, a Democratic co-sponsor of the bill, similarly voiced content concerns, claiming that TikTok promotes “drug paraphernalia, oversexualization of teenagers” and “constant content about suicidal ideation.”
2. If the TikTok bill is about privacy, why aren’t lawmakers passing comprehensive privacy laws?
It is indeed alarming how much information TikTok and other social media platforms suck up from their users, information that is then collected not just by governments but also by private companies and data brokers. This is why the EFF strongly supports comprehensive data privacy legislation, a solution that directly addresses privacy concerns. This is also why it is hard to take lawmakers at their word about their privacy concerns with TikTok, given that Congress has consistently failed to enact comprehensive data privacy legislation and this bill would do little to stop the many other ways adversaries (foreign and domestic) collect, buy, and sell our data. Indeed, the TikTok bill has no specific privacy provisions in it at all.
It has been suggested that what makes TikTok different from other social media companies is how its data can be accessed by a foreign government. Here, too, TikTok is not special. China is not unique in requiring companies in the country to provide information to the government upon request. In the United States, Section 702 of the FISA Amendments Act, which is up for renewal, authorizes the mass collection of communication data. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches through Section 702. The U.S. government can also demand user information from online providers through National Security Letters, which can both require providers to turn over user information and gag them from speaking about it. While the U.S. cannot control what other countries do, if this is a problem lawmakers are sincerely concerned about, they could start by fighting it at home.
3. If the TikTok bill is about content, how will it avoid violating the First Amendment?
Whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. Indeed, one of the given reasons to force the sale is so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation.
The First Amendment to the U.S. Constitution rightly makes it very difficult for the government to force such a change legally. To restrict content, U.S. laws must be the least speech-restrictive way of addressing serious harms. The TikTok bill’s supporters have vaguely suggested that the platform poses national security risks. So far, however, there has been little public justification that the extreme measure of banning TikTok (rather than addressing specific harms) is properly tailored to prevent these risks. And it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. People in the U.S. deserve an explicit explanation of the immediate risks posed by TikTok — something the government will have to do in court if this bill becomes law and is challenged.
4. Is the TikTok bill a ban or something else?
Some have argued that the TikTok bill is not a ban because it would only ban TikTok if owner ByteDance does not sell the company. However, as we noted in the coalition letter we signed with the American Civil Liberties Union, the government generally cannot “accomplish indirectly what it is barred from doing directly, and a forced sale is the kind of speech punishment that receives exacting scrutiny from the courts.”
Furthermore, a forced sale based on objections to content acts as a backdoor attempt to control speech. Indeed, one of the very reasons Congress wants a new owner is because it doesn’t like China’s editorial control. And any new ownership will likely bring changes to TikTok. In the case of Twitter, it has been very clear how a change of ownership can affect the editorial policies of a social media company. Private businesses are free to decide what information users see and how they communicate on their platforms, but when the U.S. government wants to do so, it must contend with the First Amendment.
5. Does the U.S. support the free flow of information as a fundamental democratic principle?
Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic.
In 2021, the U.S. State Department formally condemned a ban on Twitter by the government of Nigeria. “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy,” a department spokesperson wrote. “Freedom of expression and access to information both online and offline are foundational to prosperous and secure democratic societies.”
Whether it’s in Nigeria, China, or the United States, we couldn’t agree more. Unfortunately, if the TikTok bill becomes law, the U.S. will lose much of its moral authority on this vital principle.
A lot of police work in the United States is just playing the odds. Roll the dice enough times, and you’re sure to come up a winner now and then. The odds really don’t matter because law enforcement agencies are playing with house money, so being wrong time and time again will never bankrupt them.
Most of this guesswork masquerading as investigative work begins with pretextual stops. Come up with a reason — any reason — to pull someone over and let the games begin. Privacy protections are weaker when there’s a car on a public road involved. Probable cause can be obtained by bringing a dog into the mix. Literally anything a driver does or doesn’t do when interacting with an officer can be considered suspicion reasonable enough to continue detaining them.
Even with the guidelines established by the Rodriguez decision (a stop is over once the objective has been completed [ticket, warning, etc.]), little has changed in the way this part of police business is handled. All these options work together to create more opportunities for warrantless searches. And even if all these fail, there’s still a chance a cop can talk someone into “consenting” to a search, something that can be accomplished by insinuating that refusing will result in an arrest or the loss of their car or both.
So, there are plenty of tools available for cops to use to separate people from their rights. And if “consent” is obtained (even if it’s implied or directly coerced), no harm, no foul… at least according to all cops and most courts.
But what are we gaining from this reliance on “consent”? It certainly can’t be an increased respect for Fourth Amendment rights. And it certainly isn’t any measurable gains on the crime-fighting front, as this op-ed for Scientific American points out.
Typically, if law enforcement wants to search you or your property in the U.S., they will need either a warrant or probable cause—at least some evidence of wrongdoing. Those things can be hard to come by, limiting the ability of the police to stop and search people at will. But there is a loophole. What if they simply ask for your permission? This is known as a “consent” search because its constitutionality derives from an individual agreeing to be searched rather than any evidence of criminal activity. This common type of police search is tailor-made to circumvent our Fourth Amendment rights to privacy and is so ineffective at locating criminals that its contributions to public safety—insofar as we can measure the concept—appear nonexistent.
The writers of this editorial aren’t just making claims without facts in evidence. Derek Epp is one of the authors of Suspect Citizens: What 20 Million Traffic Stops Teach Us About Policing and Race. Hannah Walker is the author of Mobilized by Injustice, a book that examines the civil rights movement’s intersection with the anti-police violence movement that became even more intertwined following recent high-profile killings of unarmed black men by police officers. Marcel Roman is a postdoctoral fellow at Harvard University whose work focuses on racial and ethnic politics. Megan Roman is a political science PhD student.
This isn’t a random group of keyboard warriors trying to convince you cops are bad. That’s not even the point they’re making. They’re simply pointing out the fact that police work that combines a quantity-over-quality approach with a casual disregard for constitutional rights doesn’t make the public safer. And, in an era where law enforcement officials constantly complain that they’re unable to hire or retain officers, it makes very little sense to prioritize outdated law enforcement tactics.
And outdated they are. Pressuring people to consent to searches is relatively new in terms of the overall history of law enforcement. But its origin dates back to the days when crime rates were at historical highs and everyone from mayors to police commissioners to congressional reps to sitting presidents thought the best solution would just be more of the same stuff that didn’t work before, only harder and faster.
Consent searches rose to prominence in the 1980s and 1990s during what has come to be known as America’s tough-on-crime period.
[…]
A popular police academy textbook in the 1990s devoted chapters to discretionary searches and the art of getting to yes, in which aspiring officers are told: “Gaining his [the driver’s] cooperation requires that you extend the play-dumb guise that you’ve used already.… You need to decide, finally, whether you want to search the suspect’s vehicle. If you do, you now need to position him emotionally to grant you his permission…”
This sort of thing has always worked out well for law enforcement. While many people have a passing familiarity with their rights, it’s not always obvious when you’re surrounded by cops who are all suggesting the best thing to do would be to waive those rights. Very few people — even if they fully understand their rights — are willing to terminate encounters with officers because, even in situations where drivers have all the rights, the cops still have all the power. Leaving before an officer says you can (even if they’re in the wrong) can lead to additional criminal charges (at best) or severe injury or death (at worst).
So, the odds will always favor law enforcement. But even with this advantage, millions of stops aren’t really putting a dent in crime.
Using police records of over 900,000 searches, we find that consent searches are about 30 percent less likely to locate contraband than searches based on probable cause.
Sure, if you stop enough people, you’re bound to stumble upon evidence of criminal activity. And those are the only searches cops want to talk about — the ones that lead to criminal charges (or a bunch of cash). The millions of stops where nothing is found are swept under the rug. Law enforcement officers don’t bother keeping a tally of wins and losses because they know how often they come out on the losing side. People who are hassled for minutes or hours before being cut loose rarely sue. We usually only hear about unlawfully extended searches and/or non-consensual searches framed as “consensual” when contraband is discovered and the evidence is being challenged in court.
This reality obscures the overall futility of police fishing expeditions and officers’ over-reliance on consensual searches. Almost everything related to this stems from criminal cases, which makes it appear as though cops have a preternatural ability to sniff out criminals who’ve done nothing more than, say, cross a fog line, before the traffic stop is initiated.
But that’s not what’s happening. Millions of traffic stops occur every year. Only a small percentage uncover criminal activity. This isn’t smart policing. It’s brute-forcing busts, banging on as many cars and drivers as possible in hopes of hitting the jackpot.
This success rate isn’t acceptable anywhere else. Doctors who misdiagnose or mistreat 80-90% of their patients would likely lose their licenses to practice medicine. Factory workers creating parts that met specifications less than 15% of the time would soon find themselves looking for other work. But it’s acceptable here for some reason, even if the only thing it’s guaranteed to do is further destroy the relationship between cops and communities. That it’s gone on so long without interruption — even during periods of alleged understaffing — clearly indicates agencies and officers prefer to do things that are easy and pointless, rather than the harder stuff that might actually make a difference.
Last week the New York Times published a story confirming what everybody assumed was already happening. Automakers collect reams of personal behavior, phone, and other data (without making it clear to consumers), then sell it to a long list of companies, including insurance companies, which are now jacking up insurance rates if they see behavior in the dataset they don’t like.
Unsurprisingly, one of the folks who was being tracked in this way has now filed suit (see: complaint) against both General Motors and LexisNexis, which the insurance industry uses to digest driver data and then create driver behavior reports used to adjust insurance rates. And again, it’s the failure to be transparent with consumers that got the companies into trouble:
“What no one can tell me is how I enrolled in it,” Mr. Chicco told The Times in an interview this month. “You can tell me how many times I hard-accelerated on Jan. 30 between 6 a.m. and 8 a.m., but you can’t tell me how I enrolled in this?”
A report last month by Mozilla highlighted how the auto industry was the absolute worst industry the organization tracks on privacy practices, routinely over-collecting and failing to adequately protect or encrypt broad swaths of data. Not just data from the vehicle, but troves of data collected from your phone every time you sync it with your car’s infotainment and navigation systems.
It’s another giant mess made possible, in part, by a corrupt Congress’ absolute refusal to pass a meaningful privacy law for the internet era, despite a steady parade of stories just like this one. Yes, automakers should be transparent, but consumers should also be empowered to opt out of punitive surveillance and data collection without losing features or seeing entirely new, annoying restrictions.
As you probably noticed, the House just passed the controversial ban on TikTok, with 352 Representatives in favor and 65 opposed. The bill is now likely to be slow-walked to the Senate, where its chance of passing is murky, but possible. Biden (whose own campaign has been using the purportedly “dangerous national security threat” to reach voters) has stated he’ll sign the bill should it survive the trip.
The ban (technically a forced divestment, followed by a ban after ByteDance inevitably refuses to sell) passed through the House with more than a little help from Democrats.
Not talked about much in press coverage is the fact that the majority of constituents don’t actually support a ban (you know, the whole representative democracy thing). Support for a ban has been dropping for months, even among Republicans, and especially among the younger voters Democrats have already been struggling to connect with in the wake of the bloody shitshow in Gaza.
As the underlying Pew data makes clear, a lot of Americans aren’t sure what to think about the hysteria surrounding TikTok. And they’re not sure what to think, in part, because the collapsing U.S. tech press has done a largely abysmal job covering the story, either by parroting bad faith politician claims about the proposal and app, or by omitting key context.
The press has also been generally terrible at explaining to the public that the ban doesn’t actually do what it claims to do.
Banning TikTok, but refusing to pass a useful privacy law or regulate the data broker industry is entirely decorative. The data broker industry routinely collects all manner of sensitive U.S. consumer location, demographic, and behavior data from a massive array of apps, telecom networks, services, vehicles, smart doorbells and devices (many of them *gasp* built in China), then sells access to detailed data profiles to any nitwit with two nickels to rub together, including Chinese, Russian, and Iranian intelligence.
Often without securing or encrypting the data. And routinely under the false pretense that this is all ok because the underlying data has been “anonymized” (a completely meaningless term). The harm of this regulation-optional surveillance free-for-all has been obvious for decades, but has been made even more obvious post-Roe. Congress has chosen, time and time again, to ignore all of this.
Banning TikTok, but doing absolutely nothing about the broader regulatory capture and corruption that fostered TikTok’s (and every other company’s) disdain for privacy or consumer rights, isn’t actually fixing the problem. In fact, as Mike has noted, the ban creates entirely new problems, from potential constitutional free speech violations, to its harmful impact on online academic research.
I’ve mentioned more than a few times that I think the ongoing quest to ban TikTok is mostly a flimsy attempt to transfer TikTok’s fat revenues to Microsoft, Google, Twitter, Oracle, or Facebook under the pretense of national security and privacy, two things our comically corrupt, do-nothing Congress has repeatedly demonstrated in vivid detail they don’t have any genuine interest in.
TikTok creators seem to understand this better than the gerontocracy or the U.S. tech press.
But if Congress were really serious about privacy, they’d pass a privacy law or regulate data brokers.
If Congress were serious about national security, they’d meaningfully fight corruption, and certainly wouldn’t support a multi-indictment facing authoritarian NYC real estate con man with a fourth-grade reading level for fucking President.
So when Congress pops up to claim it’s taking aim at a single popular app because it’s suddenly super concerned about consumer privacy, propaganda, and national security, skeptics are right to steeply arch an eyebrow. You realize we can see your voting histories and policy priorities, right?
Xenophobia, Protectionism and Information Warfare
The GOP motivation for a TikTok ban has long been obvious: they believe TikTok’s growing ad revenues technically belong, by divine right, to white-owned U.S. companies. But the GOP also sees TikTok as an existential threat to their ever-evolving online propaganda efforts, which have become a strategic cornerstone of an increasingly extremist, authoritarian party whose policies are broadly unpopular.
The GOP is fine with rampant privacy abuses and propaganda — provided they’re the ones violating privacy or slinging political propaganda. You’ll recall Trump’s big original fix for the “TikTok problem” (before a right wing investor in TikTok recently changed his mind, for now) was a cronyistic transfer of ownership of TikTok to his Republican friends at Walmart and Oracle.
Former Trump Treasury Secretary Steve Mnuchin and his Saudi-funded Liberty Strategic Capital are already hard at work putting investors together to buy the app. If the GOP (or a proxy) manages to buy TikTok, they’ll engage in every last abuse they’ve accused the Chinese government of. TikTok will be converted, like Twitter, into a right wing surveillance and propaganda echoplex, where race-baiting authoritarian propaganda is not only unmoderated, but encouraged.
All under the pretense of “protecting free speech,” “antitrust reform,” or whatever latest flimsy pretense authoritarians are currently using to convince a gullible and lazy U.S. press that they’re operating in good faith.
Why Democrats would support any of this remains an open question. The ban would likely aid GOP propaganda efforts, piss off young voters, and advertise the party (which had actually been faster to embrace TikTok than the GOP) as woefully out of touch. All while not actually protecting consumer privacy or national security in any meaningful way. And creating entirely new problems.
National security, consumer privacy, or good faith worries about propaganda don’t enter into it.
Some Democratic Reps, like Ro Khanna, Alexandria Ocasio-Cortez, and Sara Jacobs, seem to understand the trap, keeping the focus on a need for a federal privacy law that reins in the privacy and surveillance abuses of all companies that do business in the U.S., foreign or domestic. Some senators, like Ron Wyden, have worked hard to ensure equal attention is paid toward rampant data broker abuses.
But 155 House Democrats voted for the ban, either because they’re corrupt or because they have absolutely no idea how any of this actually works. Pissing off your constituents by ruining an app used by 150+ million (mostly young) Americans during an election season is certainly a choice, especially given negligible constituent support and growing evidence the ban likely creates more problems than it professes to solve.
While countless lawmakers looking to get on cable TV spent much of the last few years freaking out about TikTok privacy issues, none of those same folks seem bothered by the parade of nasty vulnerabilities in the nation’s telecom networks.
In a new letter sent to the White House, Senator Ron Wyden notes that the steady rush toward mindless deregulation of U.S. telecom has had a negative impact on overall security, particularly on U.S. wireless networks. That, in turn, has been exploited by a growing roster of hacking companies that sell largely untraceable phone hacking tools at ever increasing scale, notes Wyden:
“These phone company hacking services exploit flaws in two obscure technologies, known as Diameter and Signaling System 7 (SS7). These two technologies are used by wireless carriers around the world to deliver text messages between phone companies, and for roaming by their customers traveling abroad. For the last decade, cybersecurity researchers and investigative journalists have highlighted how wireless carriers’ failure to secure their networks against rogue SS7 and Diameter requests for customer data has been exploited by authoritarian governments to conduct surveillance.”
Wyden singles out the FCC’s consistent failure to set even minimum cybersecurity standards for wireless carriers like AT&T, T-Mobile, and Verizon. The FCC has, as we’ve often noted, steadily had its authority, staff, and resources chipped away by industry to the point where the agency is often purely decorative, and routinely unwilling to stand up to telecom giants on any issue of note.
At the same time, Wyden notes that CISA is “actively hiding information about it from the American people,” making it tougher to even understand the scale of the problem. His office recommended that the FCC, CISA and NSA all work in concert to shore up U.S. security standards, using the UK Telecommunications Security Code of Practice as a model.
We probably won’t do any of that, because you get more attention and cable TV news appearances as an FCC official if you purposelessly hyperventilate about stuff like TikTok. Not only are many FCC officials not doing their basic jobs, they’re actively undermining the few things the FCC does attempt to accomplish (like formally recognizing racism in broadband deployment), all to the benefit of industry.