It’s been a while since we’ve talked about Google’s Stadia product. What was originally billed as a forthcoming world class cloud video game streaming platform launched terribly, never gained much traction, and eventually was announced to be pivoting to serving as the backend platform for other companies that actually knew what the hell they were doing with game streaming. While most of Stadia and its team had been fully sunsetted, it was only a few weeks ago that Google finally gave up entirely and shut down its plans to be even a backend service for anyone else to use.
When Google killed the service, the narrative from the company was that Stadia’s technology would live on in Google Cloud, but, according to Stephen Totilo of Axios, even Stadia’s white-label game-streaming service is now dead.
When Stadia’s shutdown was formally announced, Stadia VP and General Manager Phil Harrison made a big deal of the continuation of Stadia’s technology, with even the title being called “A message about Stadia and our long term streaming strategy.” The post read: “The underlying technology platform that powers Stadia has been proven at scale and transcends gaming. We see clear opportunities to apply this technology across other parts of Google like YouTube, Google Play, and our Augmented Reality (AR) efforts—as well as make it available to our industry partners, which aligns with where we see the future of gaming headed.”
Those comments from Harrison were made, literally, a couple of months ago. Two months later, the latest report regarding Stadia is not only that any pivoted-to plans have been shut down and killed, but that Harrison is absconding now that he has no more platforms at Google to murder.
Google Stadia and all its associated projects are dead, and that means it’s finally time for the division’s leader, Phil Harrison, to move on. Business Insider reports Harrison has left Google. The report claims he left in January, but Harrison’s LinkedIn was only updated in the last few days to say he left Google in April. Harrison spent five years working on Stadia.
It’s impossible to know how useful executives are from outside a company, but Harrison joined Google with a bad reputation among gamers. In his previous major executive roles, he oversaw Sony’s PlayStation 3 launch and Microsoft’s launches of the Xbox One and the Kinect. Those happen to be the consensus worst console releases from each company, and presiding over the life and death of Stadia is not helping Harrison’s already-battered reputation.
For critics of the way Google rolls out and then supports, or not, its high profile projects, this is red meat. Delicious, juicy red meat. The tech industry is absolutely lousy with failure, of course. Ambitious projects and ideas are entertained all the time. Hell, that’s why we get so much actual cool stuff that works coming out of the industry.
But for a company with the resources of Google to fail this hard, this fast, and this completely in an endeavor that really kinda should be at least partially in its wheelhouse is not a good look.
When NSO Group began making the wrong kind of headlines all over the world, suddenly lots of governments began at least feigning an interest in caring about what third-party tools their intelligence and security agencies were using to conduct surveillance.
The government of India was one of many that opened investigations into NSO Group and its own agencies’ acquisition and use of its spyware. In India, though, the investigation didn’t originate with Prime Minister Narendra Modi or the legislative branch that has helped enable his worst abuses and excesses. Those entities probably couldn’t care less if government critics, investigative journalists, and opposition party figures were targeted by illegal surveillance. This investigation was opened by the country’s Supreme Court, which is one of the few government entities willing to stand up to Modi.
This investigation determined the Indian government had never deployed NSO Group’s most powerful phone exploit, Pegasus. The government, for its part, refuses to confirm or deny acquisition or use of Pegasus. But it is very much still interested in purchasing powerful spyware that will no doubt be abused by the Modi government. But this abusive government has some standards: it won’t be buying from NSO.
Several companies from countries including Australia, Italy, France, Cyprus and Belarus are likely to take part in India’s auction. Requests for proposals are likely to go out in the coming weeks, the FT said.
India, the FT reported, feels that Israeli company NSO and its Pegasus hacking software have become too high-profile and is looking for a more low-key alternative. “India’s move shows how demand for this sophisticated – and largely unregulated – [technology] remains strong despite growing evidence that governments worldwide have abused spyware by targeting dissidents and critics,” the FT said.
This isn’t the government being responsible. It’s just a government that generates enough negative press on its own trying to avoid generating even more. And while the report says plenty of companies not based in Israel and founded by former Israeli intelligence operatives will be submitting bids, chances are the Indian government will be going to Exploit Central to obtain its next set of phone hacking tools.
The top contenders to supply India with a Pegasus lookalike include Intellexa, an Israeli company which makes Predator surveillance spyware and which is being sued in a Greek court. Other companies likely to be in the running include Quadream and Cognyte, both Israeli surveillance technology companies. Quadream’s founders include two ex-NSO employees.
The problem with this is that NSO’s competitors are starting to generate negative press of their own. There’s the Greek government fiasco mentioned in the article. And NSO competitor Candiru was sanctioned by the US Commerce Department the same day NSO Group was, which makes buying from it no less headline-worthy than buying from the current surveillance Public Enemy No. 1.
Cognyte is no better than these other options, having been caught selling powerful tech to UN-blacklisted countries. Neither is Quadream, which — like all of its Israeli-originating competitors — is more than happy to sell spyware to governments with long histories of human rights abuse.
Of course, the Indian government is currently in the human rights abuse business, so it’s probably not going to reject offers just because they came from companies formed by Israeli intelligence operatives who are more than willing to set morals and ethics aside in order to rack up sales. But it’s not going to buy from NSO Group because it’s unwilling to take that particular PR hit. That’s where the Modi government draws the line.
Poor Matt Taibbi. He destroyed his credibility to take on the Twitter Files, and did so in part to raise the profile of his Substack site, Racket News. Indeed, Substack has become a home for nonsense peddlers of all kinds to create their own little bubbles of nonsense. In congressional testimony, Taibbi admitted that having Elon Musk hand pick him to deliver the “Twitter Files” has increased the number of paying subscribers to his Substack (though he defended it by claiming that the money has all gone towards journalism).
But… apparently Elon has decided that no one on Twitter is allowed to even like or reply to any tweet that links to a Substack site. Including to Taibbi’s. Oops.
Let’s back up, though. You may recall that back in December, as the number of people deserting Twitter became scary, Twitter instituted a new policy saying that you were not allowed to mention a somewhat arbitrary and random grab bag of other social media sites.
A day or so later, after many people yelled about it (and his Mom was the only one defending it), Elon rolled back that policy, admitting that it “was a mistake.”
Of course, since then, he’s systematically moved to make it more and more difficult to move to services like Mastodon, but at least people are still able to link to Mastodon and other social media.
But now, suddenly Substack is a problem? Twitter will still allow users to send a tweet with a link to a Substack page, but that tweet can no longer be liked, replied to, or retweeted. Basically, tweets with Substack links are dead in the water.
It seems that Substack’s “crime” is releasing a tool for more short form content that looks a bit like Twitter, called “Notes.”
And thus, the world’s pettiest man has decided to retaliate.
You could almost (but not really) understand banning links to Substack. But banning likes and replies? That’s just crazy. If you try to do any of those things with a tweet that links to Substack, you get an error message:
Amusingly, this is acting as a bit of a Streisand Effect for Notes. I had seen a headline fly by about it, but hadn’t looked at the details until now.
This move by Twitter impacts many people, amusingly including many in the Substack crowd who have been falsely going on and on about how Musk was a savior to their free speech. And now he’s blocking basically anyone promoting or interacting with their content.
And, among those impacted… Matt Taibbi, who threw all of his credibility eggs into the Musk basket. Just yesterday Taibbi literally refused to criticize Musk for anything during the Mehdi Hasan interview, saying he thought Musk was clearly good for free speech on Twitter. And today he’s saying that Twitter is now unusable:
Also, yesterday in the interview, I noted that it was funny that Taibbi claimed that the Biden campaign got special treatment from Twitter in that they could reach out to people there, but he couldn’t. So when someone asked him if he had reached out to Musk about the Substack blocks, Taibbi admitted that of course he had, though he hadn’t heard back yet:
Of course, maybe that explains why Taibbi refused to criticize Musk yesterday. Didn’t want to cut off that sweet, sweet, access.
Either way, considering just how frequently these capricious moves are being made by the “new” Twitter, it again raises questions about why people are still relying on it as a key source of information and as a way to distribute their own content.
That’s the case here. Randal Reid, a Georgia resident, was picked up by Georgia law enforcement relying on a tip passed on to them by Louisiana law enforcement. The origin of the so-called “tip” was the Jefferson Parish Sheriff’s Office, which used its facial recognition tech to turn Randal Reid into the prime suspect in a string of luxury purse thefts.
There were several problems with this assumption, starting with the tech and ending with the Baton Rouge PD, which decided to “adopt” this so-called tip and pass it on to neighboring agencies. The vendor of the tech employed by the Sheriff’s Office warns that search results should not be considered probable cause. The law enforcement agencies involved in this wrongful arrest either ignored that warning or were never apprised of this fact.
Either way, it ended in the arrest of Reid — someone who didn’t actually match the description and had never traveled to the Louisiana towns where the alleged fraud had taken place. Reid spent nearly a week in jail before the agency that started this whole debacle stepped in and “rescinded” its obviously bogus warrant.
The New York Times has done an in-depth investigation of AI’s failure to properly identify criminal suspects, focusing on Reid’s life-destroying encounter with facial recognition tech. This is how it started. And how it started is enough to end life as they know it for anyone falsely accused of a crime.
[Reid’s] parents made phone calls, hired lawyers and spent thousands of dollars to figure out why the police thought he was responsible for the crime, eventually discovering it was because Mr. Reid bore a resemblance to a suspect who had been recorded by a surveillance camera. The case eventually fell apart and the warrants were recalled, but only after Mr. Reid spent six days in jail and missed a week of work.
The arrest warrant makes no reference to the tech used to misidentify Reid. All it says is that a detective watched the surveillance video and the suspect caught on camera “appeared to match the description [of] the suspect” when paired with info from Louisiana’s Department of Motor Vehicles database. But that non-admission of facial recognition tech is likely just another layer of law enforcement deception.
Unfortunately for the law enforcement agencies, the information withheld from the judge approving the arrest warrant has leaked out as Reid continues to seek justice for his false arrest. And it began with a Jefferson Parish Sheriff’s officer making the mistake of referring to this as a “positive match,” suggesting tech capable of cross-referencing CCTV footage with law enforcement databases had been used.
The other law enforcement entities that participated in this false arrest made similarly revealing statements, albeit not to a judge who might have had questions about the tech used and/or the quality of the video image being used to run searches. But the Jefferson Parish Sheriff’s Office did most of the lying.
Andrew Bartholomew, the Jefferson Parish financial crimes detective who sought the warrant to arrest Mr. Reid, wrote in an affidavit only that he had been “advised by a credible source” that the “heavyset black male” was Mr. Reid.
Oh, really? Would Mr. Reid be able to cross-examine this “credible source?” Would he be able to ask a judge to determine whether this “source” was indeed “credible?” Of course not. The “credible source” was an algorithm — one that remains unproven for several reasons. But the most important reason is that the Sheriff’s Office is unwilling to inform the courts that it’s using this tech to seek warrants.
And it gets worse. Rather than utilizing a more respectable facial recognition tech purveyor, the Sheriff’s Office decided to go with the cheapest, easiest, shadiest option available.
The Sheriff’s Office has a contract with one facial recognition vendor: Clearview AI, which it pays $25,000 a year. According to documents obtained by The Times in a public records request, the department first signed a contract with Clearview in 2019.
Even Clearview is smart enough to tell law enforcement customers that matches alone shouldn’t be considered probable cause for searches or arrests.
The company’s chief executive, Hoan Ton-That, said an arrest should not be based on a facial recognition search alone.
But clicking “ok” on a dialog box before seeking a warrant isn’t a deterrent. (Yanking contracts when cops abuse Clearview’s search results would be, but Clearview definitely isn’t going to do that.) Cops aren’t going to police themselves, so outside parties working with cops need to do this job for them. Unfortunately, the desire for market share often overwhelms the desire to protect people from government abuse.
So even if Clearview says (as it does in this article) that it has “tremendous empathy” for those falsely accused as the result of its tech, it’s not going to stop selling access to its AI or its billions of facial images scraped from the internet.
The end result is the abomination seen here: cops arresting the wrong man for the crime of using a fake credit card to “purchase” a designer purse. But it’s even worse than that. The warrant affidavit suggests an innocuous search of the state DMV database to identify the suspect. That may have happened but it apparently wasn’t until after the Sheriff’s Office ran screenshots pulled from CCTV footage against Clearview’s multi-billion image database that it was able to finger the wrong person for the job.
There’s even more tech involved here, each bit of it designed to reduce the friction between the real world and things cops want to do. Clearview’s facial recognition appears to be implicated here — a mass surveillance tool that turns the open web into a law enforcement playground.
On top of that, there’s the e-warrant service the sheriff’s office uses, known as CloudGavel. For around $40,000 a year, the Sheriff’s Office can send warrant requests 24/7 to judges who only need to click a couple of buttons to set everything in motion. While it’s probably preferable to getting judges out of bed or interrupting them during numerous government holidays, it also makes it that much easier to apply a rubber stamp to get back to Inbox Zero.
This particular warrant was e-signed at 4:28 pm on July 18, 2022 — well within normal government business hours. But it’s a lot easier to get something signed when you don’t have to be confronted in person about the deficiencies of your warrant affidavit. And it’s a lot easier to rubber stamp something when it’s almost quittin’ time. I would normally say I’m not suggesting either of these things happened here, but in this case I’m going with the other: this looks like government interests aligning to give cops a questionable warrant and a judge a quicker start to their evening.
Law enforcement is still in the clear at the moment. But Reid’s life is a mess. A week in jail as the supposed thief, held for Louisiana law enforcement officers so desperate to catch this purse fraudster that they omitted information from their warrant request. A week with no job, no income, and the added insult/injury of having his car impounded, just because. No matter the outcome, the officers involved will likely keep their jobs. Even if their cars had been impounded, there are always plenty of transportation options at work. No one gets hurt but the little people. And all because a computer said “Maybe?” and investigators chose to read it as “Definitely.”
Scrivener is the go-to app for writers of all kinds, used every day by best-selling novelists, screenwriters, non-fiction writers, students, academics, lawyers, journalists, translators, and more. Scrivener won’t tell you how to write—it simply provides everything you need to start writing and keep writing. Scrivener makes it easy to structure ideas, write a first draft, and polish your finished work. If you’re a scriptwriter, journalist, or creative writer who wants to write your next great book, this is the best writing tool for you. It’s on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
So here’s the deal. If you think the Twitter Files are still something legit or telling or powerful, watch this 30 minute interview that Mehdi Hasan did with Matt Taibbi (at Taibbi’s own demand):
Hasan came prepared with facts. Lots of them. Many of which debunked the core foundation on which Taibbi and his many fans have built the narrative regarding the Twitter Files.
We’ve debunked many of Matt’s errors over the past few months, and a few of the errors we’ve called out (though not nearly all, as there are so, so many) show up in Hasan’s interview, while Taibbi shrugs, sighs, and makes it clear he’s totally out of his depth when confronted with facts.
Since the interview, Taibbi has been scrambling to claim that the errors Hasan called out are small side issues, but they’re not. They’re literally the core pieces on which he’s built the nonsense framing that Stanford, the University of Washington, some non-profits, the government, and social media have formed an “industrial censorship complex” to stifle the speech of Americans.
The errors that Hasan highlights matter a lot. A key one is Taibbi’s claim that the Election Integrity Partnership flagged 22 million tweets for Twitter to take down in partnership with the government. This is flat out wrong. The EIP, which was focused on studying election interference, flagged less than 3,000 tweets for Twitter to review (2,890 to be exact).
And they were quite clear in their report on how all this worked. EIP was an academic project to track election interference information and how it flowed across social media. The 22 million figure shows up in the report, but it was just a count of how many tweets they tracked in trying to follow how this information spread, not seeking to remove it. And the vast majority of those tweets weren’t even related to the ones they did explicitly create tickets on.
In total, our incident-related tweet data included 5,888,771 tweets and retweets from ticket status IDs directly, 1,094,115 tweets and retweets collected first from ticket URLs, and 14,914,478 from keyword searches, for a total of 21,897,364 tweets.
Tracking how information spreads is… um… not a problem now is it? Is Taibbi really claiming that academics shouldn’t track the flow of information?
Either way, Taibbi overstated the number of tweets that EIP reported by 21,894,474 tweets. In percentage terms, the actual number of reported tweets was 0.013% of the number Taibbi claimed.
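For anyone who wants to sanity-check those figures, here’s the back-of-the-envelope arithmetic in a few lines of Python. All the numbers come straight from the EIP report quoted above; nothing here is new data.

```python
# Figures from the EIP report: tweets merely *tracked* by the project,
# broken out by how they were collected (ticket IDs, ticket URLs, keyword searches).
tracked_components = [5_888_771, 1_094_115, 14_914_478]
tracked_total = sum(tracked_components)

# Tweets EIP actually reported to Twitter for review.
flagged = 2_890

print(tracked_total)                             # 21897364 -- the "22 million"
print(tracked_total - flagged)                   # 21894474 -- the overstatement
print(round(flagged / tracked_total * 100, 3))   # 0.013 -- percent actually flagged
```

In other words, the number Taibbi presented as takedown flags was roughly 7,500 times larger than the number of tweets EIP actually reported.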
Okay, you say, but STILL, if the government is flagging even 2,890 tweets, that’s still a problem! And it would be if it was the government flagging those tweets. But it’s not. As the report details, basically all of the tickets in the system were created by non-government entities, mainly from the EIP members themselves (Stanford, University of Washington, Graphika, and Digital Forensics Lab).
This is where the second big error that Taibbi makes knocks down a key pillar of his argument. Hasan notes that Taibbi falsely turned the non-profit Center for Internet Security (CIS) into the government agency the Cybersecurity and Infrastructure Security Agency (CISA). Taibbi did this by assuming that when someone at Twitter noted information came from CIS, they must have meant CISA, and therefore he appended the A in brackets as if he was correcting a typo:
Taibbi admits that this was a mistake and has now tweeted a correction (though this point was identified weeks ago, and he claims he only just learned about it). I’ve seen Taibbi and his defenders claim that this is no big deal, that he just “messed up an acronym.” But, uh, no. Having CISA report tweets to Twitter was a key linchpin in the argument that the government was sending tweets for Twitter to remove. But it wasn’t the government, it was an independent non-profit.
The thing is, this mistake also suggests that Taibbi never even bothered to read the EIP report on all of this, which lays out extremely clearly where the flagged tweets came from, noting that CIS (which was not an actual part of the EIP) sent in 16% of the total flagged tweets. It even pretty clearly describes what those tweets were:
Compared to the dataset as a whole, the CIS tickets were (1) more likely to raise reports about fake official election accounts (CIS raised half of the tickets on this topic), (2) more likely to create tickets about Washington, Connecticut, and Ohio, and (3) more likely to raise reports that were about how to vote and the ballot counting process—CIS raised 42% of the tickets that claimed there were issues about ballots being rejected. CIS also raised four of our nine tickets about phishing. The attacks CIS reported used a combination of mass texts, emails, and spoofed websites to try to obtain personal information about voters, including addresses and Social Security numbers. Three of the four impersonated election official accounts, including one fake Kentucky election website that promoted a narrative that votes had been lost by asking voters to share personal information and anecdotes about why their vote was not counted. Another ticket CIS reported included a phishing email impersonating the Election Assistance Commission (EAC) that was sent to Arizona voters with a link to a spoofed Arizona voting website. There, it asked voters for personal information including their name, birthdate, address, Social Security number, and driver’s license number.
In other words, CIS was raising pretty legitimate issues: people impersonating election officials, and phishing pages. This wasn’t about “misinformation.” These were seriously problematic tweets.
There is one part that perhaps deserves some more scrutiny regarding government organizations, as the report does say that a tiny percentage of reports came from the GEC, which is a part of the State Department, but the report suggests that this was probably less than 1% of the flags. 79% of the flags came from the four organizations in the partnership (not government). Another 16% came from CIS (contrary to Taibbi’s original claim, not government). That leaves 5%, which came from six different organizations, mostly non-profits. Though it does list the GEC as one of the six organizations. But the GEC is literally focused entirely on countering (not deleting) foreign state propaganda aimed at destabilizing the US. So, it’s not surprising that they might call out a few tweets to the EIP researchers.
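Tallying the report’s breakdown of who created the flags makes the point concrete. This is just the percentages described above laid out side by side, with the sub-1% GEC share folded into the “six other organizations” bucket:

```python
# Share of EIP tickets by source, per the report's breakdown described above.
flag_sources = {
    "EIP partner organizations (Stanford, UW, Graphika, DFRLab)": 79,
    "Center for Internet Security (a non-profit, not CISA)": 16,
    "six other organizations (including the GEC, at under 1%)": 5,
}

# The percentages account for all tickets...
assert sum(flag_sources.values()) == 100

# ...and the only government entity in the mix (the GEC) sits inside
# the smallest bucket, at less than 1% of all flags.
government_share_upper_bound = 1
print(government_share_upper_bound)
```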
Okay, okay, you say, but even so this is still problematic. It was still, as a Taibbi retweet suggests, these organizations who are somehow close to the government trying to silence narratives. And, again, that would be bad if true. But, that’s not what the information actually shows. First off, we already discussed how some of what they targeted was just out and out fraud.
But, more importantly, regarding the small number of tweets that EIP did report to Twitter… it never suggested what Twitter should do about them, and Twitter left the vast majority of them up. The entire purpose of the EIP program, as laid out in everything that the EIP team has made clear from before, during, and after the election, was just to be another set of eyes looking out for emerging trends and documenting how information flows. In the rare cases (again less than 1%) where things looked especially problematic (phishing attempts, impersonation) they might alert the company, but made no effort to tell Twitter how to handle them. And, as the report itself makes clear, Twitter left up the vast majority of them:
We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.
They don’t break it out by platform, but across all platforms no action was taken on 65% of the reported content. And considering that TikTok seemed quite aggressive in removing 64% of flagged content, that means that all of the other platforms, including Twitter, took action on way less than 35% of the flagged content. And then, even within the “took action” category, the main action taken was labeling.
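Just to tally the report’s own percentages (these are the cross-platform figures quoted above, nothing more):

```python
# Cross-platform outcomes for URLs the EIP reported, per the report's figures.
outcomes = {"no action": 65, "labeled": 21, "removed": 13, "soft blocked": 1}

# The "took action" share is the sum of everything except "no action".
acted_on = sum(v for k, v in outcomes.items() if k != "no action")
print(acted_on)                          # 35 -- percent of reported URLs acted on

# The single most common outcome, by a wide margin:
print(max(outcomes, key=outcomes.get))   # no action
```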
In other words, the top two main results of EIP flagging this content were:
Nothing
Adding more speech
The report also notes that the category of content that was most likely to get removed was the out and out fraud stuff: “phishing content and fake official accounts.” And given that TikTok appears to have accounted for a huge percentage of the “removals” this means that Twitter removed significantly less than 13% of the tweets that EIP flagged for them. So not only is it not 22 million tweets, it’s that EIP flagged less than 3,000 tweets, and Twitter ignored most of them and removed probably less than 10% of them.
When looked at in this context, basically the entire narrative that Taibbi is pushing melts away.
The EIP is not part of the “industrial censorship complex.” It’s a mostly academic group that was tracking how information flows across social media, which is a legitimate area of study. During the election they did exactly that. In the tiny percentage of cases where they saw stuff they thought was pretty worrisome, they’d simply alert the platforms with no push for the platforms to take any action, and (indeed) in most cases the platforms took no action whatsoever. In a few cases, they added more speech.
In a tiny, tiny percentage of the already tiny percentage, when the situation was most extreme (phishing, fake official accounts) then the platforms (entirely on their own) decided to pull down that content. For good reason.
That’s not “censorship.” There’s no “complex.” Taibbi’s entire narrative turns to dust.
There’s a lot more that Taibbi gets wrong in all of this, but the points that Hasan got him to admit he was wrong about are literally core pieces in the underlying foundation of his entire argument.
At one point in the interview, Hasan also does a nice job pointing out that the posts that the Biden campaign (note: not the government) flagged to Twitter were of Hunter Biden’s dick pics, not anything political (we’ve discussed this point before) and Taibbi stammers some more and claims that “the ordinary person can’t just call up Twitter and have something taken off Twitter. If you put something nasty about me on Twitter, I can’t just call up Twitter…”
Except… that’s wrong. In multiple ways. First off, it’s not just “something nasty.” It’s literally non-consensual nude photos. Second, actually, given Taibbi’s close relationship with Twitter these days, uh, yeah, he almost certainly could just call them up. But, most importantly, the claim about “the ordinary” person not being able to have non-consensual nude images taken off the site? That’s wrong.
You can. There’s a form for it right here. And I’ll admit that I’m not sure how well staffed Twitter’s trust & safety team is to handle those reports today, but it definitely used to have a team of people who would review those reports and take down non-consensual nude photos, just as they did with the Hunter Biden images.
As Hasan notes, Taibbi left out this crucial context to make his claims seem way more damning than they were. Taibbi’s response is… bizarre. Hasan asks him if he knew that the URLs were nudes of Hunter Biden and Taibbi admits that “of course” he did, but when Hasan asks him why he didn’t tell people that, Taibbi says “because I didn’t need to!”
Except, yeah, you kinda do. It’s vital context. Without it, the original Twitter Files thread implied that the Biden campaign (again, not the government) was trying to suppress political content or embarrassing content that would harm the campaign. The context that it’s Hunter’s dick pics is totally relevant and essential to understanding the story.
And this is exactly what the rest of Hasan’s interview (and what I’ve described above) lays out in great detail: Taibbi isn’t just sloppy with facts, which is problematic enough. He leaves out the very important context that highlights how the big conspiracy he’s reporting is… not big, not a conspiracy, and not even remotely problematic.
He presents it as a massive censorship operation, targeting 22 million tweets, with takedown demands from government players, seeking to silence the American public. When you look through the details, correcting Taibbi’s many errors, and putting it in context, you see that it was an academic operation to study information flows, who sent the more blatant issues they came across to Twitter with no suggestion that they do anything about them, and the vast majority of which Twitter ignored. In some minority of cases, Twitter applied its own speech to add more context to some of the tweets, and in a very small number of cases, where it found phishing attempts or people impersonating election officials (clear terms of service violations, and potentially actual crimes), it removed them.
There remains no there there. It’s less than a Potemkin village. There isn’t even a façade. This is the Emperor’s New Clothes for a modern era. Taibbi is pointing to a naked emperor and insisting that he’s clothed in all sorts of royal finery, whereas anyone who actually looks at the emperor sees he’s naked.
Despite several years of blistering hype about the rise of the “Metaverse” (read: Facebook’s clumsy attempt to dominate a market simply by rebranding video games, AR, and VR as…something else), new data from Piper Sandler indicates that there’s little real interest among younger Americans.
According to the firm’s latest survey of 5,600 teens (part of a much broader study of teens in general), just 27 percent own a VR device, compared to an 87 percent iPhone ownership rate among teens. Just four percent of U.S. teens actually use VR on a daily basis:
The survey results suggest that virtual reality hardware and software has yet to catch on with the public despite billions of dollars in investment in the technology from Big Tech companies and a number of low-cost headsets on the market. Teenagers are often seen as early adopters of new technology and their preferences can provide a preview of where the industry is going.
That’s a comically stark difference from several years of hype related to the technology, which always seemingly assumed mass adoption of a niche technology with numerous barriers to entry. Cost and comfort remain obstacles for many, though so do motion and “simulator sickness,” which continue to impact a large chunk of potential customers and were the subject of a Verge story this week.
I enjoy VR myself, but I can spend only about twenty minutes in a headset before I get cold sweats and end up disoriented and ill for hours, especially with the jankier titles that don’t implement meaningful countermeasures against motion sickness. None of the supposed tricks, from slow, expanded usage to train the brain to pointing a fan at myself while playing, has worked for me. Data suggests I’m not alone.
None of this is to say that VR and AR won’t increasingly be useful and exciting technologies as the underlying tech advances. And somebody (probably not Mark Zuckerberg) will eventually offer a low-cost, must-have VR gadget that delivers something truly revolutionary and doesn’t make you puke. But until then, we’re left with yet another cautionary tale about the perils of speculative hype.