Just 23% Of Americans Know The U.S. Has Failed To Pass An Internet-Era Privacy Law
from the not-the-sharpest-knife dept
We’ve noted repeatedly how the hyperventilation about TikTok privacy is largely just a distraction from the U.S.’ ongoing failure to pass even a basic privacy law or meaningfully regulate data brokers.
We haven’t done those things for two reasons. One, the dysfunctional status quo (where companies mindlessly over-collect data and fail to secure it, resulting in endless privacy scandals) is hugely profitable to everybody in the chain. Two, the government long ago realized it can abuse the barely regulated info-hoovering user tracking system we’ve built to avoid having to get warrants.
There’s simply no meaningful incentive for reform.
None of this is helped by the fact that an ad-based, wealth-obsessed tech press is financially incentivized to prioritize engagement clickbait (billionaire cage matches! Poorly made blockchain-based ape art will change the world!) over nuance and deeper analysis. It’s a media ecosystem owned by billionaires, with an ever-dwindling interest in meaningfully challenging money, power, or the status quo.
The result of our collective superficiality isn’t hard to find when looking at the tech knowledge of the broader public. A recent Pew survey of 5,101 U.S. adults found that 80 percent of Americans know that Elon Musk now owns Tesla and Twitter, but just 23 percent were aware that the United States lacks a meaningful privacy law addressing how companies can use the data they collect.

Another 52 percent of the public wasn’t sure whether we had a privacy law at all. At the same time, while 77 percent of the public knows that Facebook changed its name to Meta in 2021, fewer than half (48 percent) of those surveyed know what two-factor authentication is. And while 87 percent know that more complicated passwords are better, just 32 percent have a basic understanding of how “AI” (LLMs) function.
When the press covers consumer privacy, the fact that the U.S. government has proven too corrupt to pass even a basic internet-era privacy law rarely gets mentioned. That the government has been lobbied into apathy on this subject for 30 years by a broad coalition of industries (opposed to anything but the most toothless oversight) goes almost entirely unexamined in mainstream tech coverage.
While I’m sure a superficial, clickbait-obsessed tech press isn’t the only culprit here (our shaky education standards surely play a role), I can’t imagine it helps much. As a tech reporter, I’ve watched a long, long line of quality independent tech news outlets get dismantled in favor of superficial clickbait machines, terrified of offending anyone in power, whose output is now being clumsily supercharged by “AI”.
Tech journalism’s failure to accurately portray the sorry state of U.S. privacy was perfectly exemplified by coverage of the TikTok privacy scandals. Endless outlets parroted worries that a single app might share U.S. consumer data with the Chinese government; few if any could be bothered to note that the same Chinese government can buy endless reams of consumer data from barely regulated data brokers.
As a broadband and telecom beat reporter in particular, I’ve similarly seen that when press outlets cover substandard broadband, the real underlying problem (consolidated monopoly power lobbying a corrupt government into apathy) again rarely warrants a mention. It’s systemic, and until we dedicate some serious time toward creatively funding independent journalism, it’s simply not getting better.
Filed Under: corruption, federal privacy law, legislation, pew, privacy, survey, tech, two factor authentication
Comments on “Just 23% Of Americans Know The U.S. Has Failed To Pass An Internet-Era Privacy Law”
Good
Good. It’s like saying that 23% of people don’t realize that the US has failed to pass an alien-invasion law. When the government doesn’t pass a law that’s stupid, meaningless, useless, and counterproductive, people don’t have to think about it.
Re:
I see you have difficulty in distinguishing between things that laws can deal with and things that they can’t. Also, are you happy that anybody with money and a reason can put your life under a microscope, like before offering you insurance, a job, or somewhere to live?
Re:
Except there are laws against aliens entering the USA. It’s quite a big deal, politically, in the southern states, which is why Trump promised to build a wall to keep them out.
only because it hasn’t been broadcast and govt + security services wanna continue to be able to spy on/infiltrate personal/private info!!
Deepfake definition is incorrect
“A deepfake is a seemingly real … video or audio of something that did not occur.”
This statement can be true or false. You can absolutely make a deepfake of something that DID occur but wasn’t recorded.
Re:
Sure, and the statement about cookies is wrong. They’re not things with the ability to track you: being merely bits of data, cookies can’t “do” anything, but may be used by website operators (with help from your browser) to track you (rough sketch at the end of this comment). The Facebook statement is misleading: “Facebook, Inc.” changed its name, but the thing known simply as “Facebook” is a site that did not change its name. The privacy law statement is false: the USA has several national privacy laws, including VPPA and HIPAA, addressing how companies can use the data they collect (and those laws are useless for most cases, but nobody said anything about utility). The password and two-factor authentication statements refer to external data we’re not privy to, so I can’t evaluate those.
But none of that matters, right? I’m sure they got the data to support the conclusions they wanted to make. Remember that when you read about survey results. Usually, we don’t even get to see the exact wording people were responding to, just some over-simplified summary.
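To make the cookie point concrete, here’s a minimal sketch (a toy example of my own, not any real site’s code) of how a server hands a browser an inert identifier and then “tracks” the visitor simply by reading that identifier back on later requests:

    # Toy tracking server (illustrative sketch only, not any real site's code).
    # The cookie itself is just data; the tracking happens when the server
    # reads the identifier back and correlates requests from the same browser.
    from http import cookies
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import uuid

    class TrackingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Parse whatever Cookie header the browser sent back, if any.
            jar = cookies.SimpleCookie(self.headers.get("Cookie", ""))
            visitor_id = jar["visitor_id"].value if "visitor_id" in jar else None

            self.send_response(200)
            if visitor_id is None:
                # First visit: ask the browser to store an identifier and
                # echo it back on every later request to this site.
                visitor_id = uuid.uuid4().hex
                self.send_header("Set-Cookie",
                                 f"visitor_id={visitor_id}; Path=/; HttpOnly")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            # "Tracking" is simply the server keying its records on this value.
            self.wfile.write(f"hello again, visitor {visitor_id}\n".encode())

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), TrackingHandler).serve_forever()

The cookie never does anything on its own; the browser volunteers it with each request, and the tracking is just the server keeping records keyed on that value.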
Re:
Then it isn’t a deepfake, it’s a reconstruction.
Re: Re:
Wikipedia says “Deepfakes … are synthetic media that have been digitally manipulated to replace one person’s likeness convincingly with that of another.”
By that definition, what mick’s talking about could be a deepfake. Provided that one person’s likeness was replaced by another, as opposed to inserting their likeness where there was no person. (Wikipedia may be wrong here. It agrees with the Cambridge Dictionary but contradicts Merriam-Webster and Wiktionary. I’d never heard of “replace” or “convincingly” being pre-requisites, though “convincingly” would usually be a goal.)
Usually, a “re-creation” would involve an actor portraying a person, without “face-replacement” or such effects.
Re: Re: Re:
Wikipedia also says in the same paragraph: “While the act of creating fake content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content that can more easily deceive.”
That’s generally a re-enactment, but it’s kind of irrelevant for what we are discussing. What matters is the goal: the goal of deepfakes is to fool or deceive people, while a recreation or re-enactment is based on facts and can be made with different CGI technologies. For example, this article has 10 historical persons’ faces recreated using CGI; I doubt they can be called deepfakes.
Re: Re: Re:2
Despite the text you quoted from Wikipedia, its definition doesn’t require any intent to deceive; nor does any other I found. The quoted text merely says they could be useful for that purpose, which is true.
Well, the Cambridge definition only applies to video or sound recordings, which those still images aren’t. The Wiktionary definition would classify them as deepfakes if artificial intelligence were used (which the article doesn’t mention either way). Collins has a definition so vague it would cover even “Syncro-Vox”-style lip-replacement, if done digitally; that was used comedically by Conan O’Brien specifically because it was so cheesy and unconvincing.
Maybe you’re right that an intent to deceive should be part of the definition. But for now, it seems there’s very little about the term “deepfake” that people agree on. We might as well say, like pornography, “I’ll know it when I see it”.
Re: Re: Re:3
The whole point of producing a deepfake is to make it look like the real thing and pass it off as such even though it isn’t. Why do you think they called it a “deepfake” to begin with?
And the Wikipedia text quoted was written within a context. Ignore that context and you can draw whatever conclusion you want.
Re: Re: Re:4
Okay, but that has little to do with deception. The dinosaurs in Jurassic Park were all fake, meant to look like the real things although they weren’t. But it wasn’t some conspiracy to make the public think the dinosaurs had actually returned.
Re: Re: Re:5
It’s all about the context. Re-creating dinosaurs with CGI is in a sense a deception, but it’s done for entertainment purposes where it’s expected that the audience suspends their disbelief (which goes for most entertainment built on fiction). No one would ever believe a fake video of a dinosaur rampaging through Macy’s, for example, but they may well believe a fake video of someone soliciting sex from a prostitute.
Keep it going
The only people who NEED to know WHO and where you are, are bill collectors.
Swamp the World with our data, with our CC# and SS#, and watch what happens.
They can’t Prove you bought anything. You can contest everything.
So what can they do? Something they have been trying to do for a long time: Facial ID and Chips.
They are allowing it, They Want it to spread. They want a Perfect ID.
68% of people did not choose the correct answer of the 4 presented regarding LLMs.
Coincidentally LLMs do not threaten warehouse/dock workers, baristas, waitresses, or billionaires¹.
¹ – The sample size of billionaires surveyed is not sufficient for statistical extrapolation.
Electronic communications privacy law?
We actually do have a longstanding law (the Electronic Communications Privacy Act) limiting what internet service providers can do with user data. It doesn’t address every sort of company, and it mostly provides limits against disclosure of message content and disclosure to the government, but it is a national law, and it does address how companies can use people’s data. (Which is not to say that the law is perfectly fine or even sufficient.) So the correct answer to the question in the headline – whether the U.S. has a “national privacy law addressing how companies can use the data they collect” – would seem to be “Not sure.” And just over half of Americans surveyed got it right.
For those of you advocating for a federal privacy law: what do you envision such a law would look like? What sorts of practices should be outlawed or regulated? What sorts of data would take priority? Would you want something that focuses more on the purchase and use of data by third parties, as opposed to the mere collection?