Countless sectors are rushing to implement “AI” (undercooked large language models) without understanding how these systems work — or making sure they work at all. The result has been an ugly comedy of errors stretching from journalism to mental health care, driven by greed, laziness, computer-generated errors, plagiarism, and fabulism.
NYC’s government is apparently no exception. The city recently unveiled a new “AI” powered chatbot to help answer questions about city governance. But an investigation by The Markup found that the automated assistant not only doled out incorrect information, it routinely advises city residents to break the law across a wide variety of subjects, from landlord agreements to labor issues:
“The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.”
Folks over on Bluesky had a lot of fun testing the bot out, and finding that it routinely provided bizarre, false, and sometimes illegal results:
There’s really no reality where this sloppily-implemented bullshit machine should remain operational, either ethically or legally. But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”
But one administration official complained about the fact that journalists pointed out the whole error-prone mess in the first place, insisting they should have worked privately with the administration to fix the problems caused by the city:
If you can’t see the embed, that’s reporter Joshua Friedman reporting:
At NYC mayor Eric Adams’s press conference, top mayoral advisor Ingrid Lewis-Martin criticizes the media for publishing stories about the city’s new AI-powered chatbot that recommends illegal behavior. She says reporters could have approached the mayor’s office quietly and worked with them to fix it
That’s not how journalism works. That’s not how anything works. Everybody’s so bedazzled by new tech (or so keen on making money from the initial hype cycle) that they’re rushing toward the trough without thinking. As a result, undercooked and dangerous automation is being layered on top of systems that weren’t working very well in the first place (see: journalism, health care, government).
The city is rushing to implement “AI” elsewhere as well, such as a new weapon-scanning system that tests have found has an 85 percent false positive rate. And all of this is before you even touch on the fact that most early adopters of these systems see them as a wonderful way to cut corners and undermine already mistreated and underpaid labor (again see: journalism, health care).
There are lessons here you’d think would have been learned in the wake of previous tech hype and innovation cycles (cryptocurrency, NFTs, “full self driving,” etc.). Namely, innovation is great and all, but a rush to embrace innovation for innovation’s sake due to greed or incurious bedazzlement generally doesn’t work out well for anybody (except maybe early VC hype wave speculators).
We’ve noted for years how the “race to 5G” was largely just hype by telecoms and hardware vendors eager to sell more gear and justify high U.S. mobile data prices. While 5G does provide faster, more resilient, and lower latency networks, it’s more of a modest evolution than a revolution.
But that’s not what telecom giants like Verizon, T-Mobile, and AT&T promised. All three routinely promised that 5G would change the way we live and work, usher forth the smart cities of tomorrow, and even revolutionize the way we treat cancer. None of those things wound up being true.
In fact, when 5G did arrive in the U.S., speeds and performance wound up being significantly worse than in many overseas deployments, at prices far higher than in most developed countries.
As a result, many consumers wound up associating the standard not with progress, but with empty hype. And as The Verge notes in a new piece tracking 5G’s trajectory to date, investors are growing increasingly unhappy about the lack of returns on what was a major overall investment:
“And we don’t have to guess whether investors are asking questions about when they’ll see a return — they asked point blank in the company’s most recent earnings call. [Verizon] CEO Hans Vestberg fielded the question, balancing the phrases “having the right offers for our customers” and “generating the bottom line for ourselves,” while nodding to “price adjustments” that also “included new value” for customers. It was a show of verbal gymnastics that meant precisely nothing.”
There’s one area where 5G did wind up being a benefit: fixed wireless access (FWA). For folks stuck without any access, or stuck on a DSL line straight out of 2003, a home 5G FWA line is a notable improvement. It’s still not something that’s going to be as fast and reliable as fiber (or even cable), but 5G has proven to be useful when it comes to shoring up overall home broadband coverage gaps.
But unchecked and often pointless industry consolidation continues to reduce any incentive to seriously compete on price over the longer term. And as investor demand for recouped investment grows, the outcome will most assuredly be more nickel-and-diming of wireless customers. Customers on good FWA deals now will, as is wireless industry tradition, steadily see costs head skyward over time.
All told, U.S. 5G is a far cry from the “fourth industrial revolution” telecoms and gear vendors tried desperately to present it as, and all most consumers really wanted was a reliable, more affordable connection.
We’ve long noted how 5G wireless is more of an evolution than a revolution. Yes, it results in faster, better networks, but it’s not a technology that’s truly transformative.
Knowing this, the wireless industry spent years coming up with all kinds of outlandish claims about how 5G can cure cancer or solve climate change in a bid to drum up interest and sales. My favorite type of this marketing involves taking something that doesn’t actually need 5G to work, and pretending that only 5G innovation made it possible. Then watching as a lazy press just regurgitates the claims.
Like when T-Mobile got a bunch of credulous press coverage for a robot that could give remote tattoos over 5G (which could have been done over 4G, or Wi-Fi, or even DSL). Or when a Korean coffee brand got oodles of free press for a “5G powered robot barista” (which could have been done over Wi-Fi). Or when the industry claimed that 5G and AR would revolutionize fashion by letting folks watch fashion shows in AR or VR (which could have been done… you get the point).
Mindless 5G medical hype has been a particularly healthy niche. Like when Verizon hyped “5G-powered” medical gear that not only didn’t actually require 5G to work, but wasn’t likely to be used by actual medical professionals who generally prefer fiber, Ethernet, and gigabit Wi-Fi due to the less reliable nature of cellular.
There are just endless examples of this kind of marketing symbiosis between wireless carriers and a lazy, gullible tech press.
The latest and potentially greatest example of this art form involves the claim that 5G helped conduct a remote surgery on a banana between London and Los Angeles. A video purportedly showing the procedure has been making the rounds for a few years, often resulting in clickbait stories all over the internet about how this was only made possible by the low-latency, innovative potential of 5G!
More recently, The Verge’s Nilay Patel did some very basic due diligence and found that the entire thing was bullshit. So much bullshit, in fact, that 5G played absolutely no role in what was shown:
“This video does not in any way show a robotic surgery being done over 5G. The video was first posted to TikTok during the pandemic by Dr. Kais Rona, who is a bariatric and robotic surgeon at Smart Dimensions Weight Loss in Southern California, and he’s been actively telling people that it’s not 5G ever since.”
Usually, a company like Verizon or Huawei will conduct an elaborate marketing scheme involving doing medical procedures over 5G to pretend that it’s the 5G making it all possible. Press outlets, some of them reputable, will then regurgitate the claims without noting that 5G isn’t actually making this possible, or that the procedure just as easily could have been done over Wi-Fi, or preferably, fiber optics and Ethernet.
This kind of media gullibility is helpful to a wireless industry keen on obscuring pesky facts like Americans pay some of the highest prices in the world for 5G that’s a half-cooked mess when compared to overseas deployments. It’s hard to find many stories about how U.S. wireless is expensive and mediocre due to monopolization, but you’ll find no shortage of “news” reports lauding 5G’s overstated or outright fraudulent innovation potential.
In this case the 5G bullshit didn’t even need the industry’s involvement. All that was required was a single fake claim on a posted video for the hype to resonate across AI-generated clickbait mills for all of eternity. A pump primed years earlier thanks to uncritical telecom trade mags, and lazy, underpaid reporters who can’t be bothered to ask basic questions or pick up the phone.
We’ve noted for several years how the “race to 5G” was largely just hype by telecoms and hardware vendors eager to sell more gear and justify high U.S. mobile data prices. While 5G does provide faster, more resilient, and lower latency networks, it’s more of an evolution than a revolution.
But that’s not what telecom giants like Verizon, T-Mobile, and AT&T promised. All three routinely promised that 5G would change the way we live and work, usher forth the smart cities of tomorrow, and even revolutionize the way we treat cancer. None of those things wound up being true (I enjoyed talking to one medical professional who basically laughed in my face about the cancer claim).
When 5G did arrive, it didn’t even live up to its basic promise, really. U.S. implementations were decidedly slower, spottier, and more expensive than many overseas networks, thanks to the usual industry consolidation and U.S. regulatory fecklessness. The end result: wireless carriers associated a promising but not world-changing technological improvement with hype and bluster in the minds of consumers.
In a bit of a retrospective, Washington Post tech columnist Shira Ovide looks back at the 5G hype and hopes that maybe, just maybe, somebody in industry will “learn their lesson” from the experience:
We and companies that make technology must acknowledge that not every new technology changes our lives — at least not in a way that makes for a compelling science fiction movie…5G was an incremental technical improvement that companies tried to tell us was a revolutionary leap. It wasn’t.
The sentiment of the piece is absolutely correct. Industry claims should be grounded in reality to ensure consumers, markets, investors, and the public have a realistic, fact-based understanding of a technology’s potential.
But in case you hadn’t noticed with NFTs, crypto, AI, and every other technology hype cycle that rolls through, there’s no financial incentive for measured introspection of this type in the attention economy we’ve created. You don’t get the kind of headlines and attention companies and VCs crave by explaining what a technology actually does; you increasingly get it by being monumentally full of shit.
That’s particularly true with a technology like 5G, which wasn’t a revolution so much as an evolution of existing tech. That’s not to say 5G doesn’t bring value, but faster, lower-latency networks that are easier to maintain simply aren’t sexy, and to keep boosting marketing and investment returns in this increasingly unhinged attention economy, companies are routinely motivated to embrace the preposterous.
On April 13, a new YouTube video called The AI Dilemma was shared by Tristan Harris, the leading figure of The Social Dilemma. He encouraged his followers to “share it widely” in order to understand the likelihood of catastrophe. Unfortunately, like The Social Dilemma, The AI Dilemma is big on hype and deception, and not so big on accuracy or facts. Although it deals with a different tech (not social media algorithms but generative AI), the creators still use the same manipulation and scare tactics. There is an obvious resemblance between the moral panic techlash around social media and the one being generated around AI.
As the AI Dilemma’s shares and views are increasing, we need to address its deceptive content. First, it clearly pulls from the same moral panic hype playbook as the Social Dilemma did:
1. The Social Dilemma argued that social media have godlike power over people (controlling users like marionettes). The AI Dilemma argues that AI has godlike power over people.
2. The Social Dilemma anthropomorphized the evil algorithms. The AI Dilemma anthropomorphizes the evil AI. Both are monsters.
3. Causation is asserted as a fact: those technological “monsters” CAUSE all the harm. Despite other factors – confounding variables, a complicated society, messy humanity, inconclusive research into these phenomena – it’s all due to the evil algorithms/AI.
4. The monsters’ final goal may be… extinction. “Teach an AI to fish, and it’ll teach itself biology, chemistry, oceanography, evolutionary theory … and then fish all the fish to extinction.” (What?)
5. The Social Dilemma argued that algorithms hijack our brains, leaving us to do what they want without resistance. The algorithms were played by three dudes in a control room, and in some scenes, the “algorithms” were “mad.” In The AI Dilemma, this anthropomorphizing is taken to the next level:
Tristan Harris and Aza Raskin substituted the word AI for an entirely new term, “Gollem-class AIs.” They wrote “Generative Large Language Multi-Modal Model” in order to get to “GLLMM.” “Golem” in Jewish folklore is an anthropomorphic being created from inanimate matter. “Suddenly, this inanimate thing has certain emergent capabilities,” they explained. “So, we’re just calling them Gollem-class AIs.”
What are those Gollems doing? Apparently, “Armies of Gollem AIs pointed at our brains, strip-mining us of everything that isn’t protected by 19th-century law.”
If you weren’t already scared, this should have kept you awake at night, right?
We can summarize that the AI Dilemma is full of weird depictions of AI. According to experts, the risk of anthropomorphizing AI is that it inflates the machine’s capabilities and distorts the reality of what it can and can’t do — resulting in misguided fears. In the case of this lecture, that was the entire point.
6. The AI Dilemma creators thought they had “comic relief” at 36:45 when they showed a snippet from the “Little Shop of Horrors” (“Feed me!”). But it was actually at 51:45 when Tristan Harris stated, “I don’t want to be talking about the darkest horror shows of the world.”
A specific survey was mentioned 3 times throughout the AI Dilemma. It was about how “Half of” “over 700 top academics and researchers” “stated that there was a 10 percent or greater chance of human extinction from future AI systems” or “human inability to control future AI systems.”
It is a FALSE claim. My analysis of this (frequently quoted) survey’s anonymized dataset (Google Doc spreadsheets) revealed many questionable things that should call into question not just the study, but those promoting it:
1. The “Extinction from AI” Questions
The “Extinction from AI” question was: “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?”
The “Extinction from human failure to control AI” question was: “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”
There are plenty of vague phrases here, from the “disempowerment of the human species” (?!) to the apparent absence of a timeframe for this unclear futuristic scenario.
When the leading researcher of this survey, Katja Grace, was asked on a podcast: “So, given that there are these large framing differences and these large differences based on the continent of people’s undergraduate institutions, should we pay any attention to these results?” she said: “I guess things can be very noisy, and still some good evidence if you kind of average them all together or something.” Good evidence? Not really.
2. The Small Sample Size
AI Impacts contacted attendees of two ML conferences (NeurIPS & ICML), not a gathering of the broader AI community. Only 17% responded to the survey at all, and a much smaller percentage were asked to respond to the specific “Extinction from AI” questions.
– Only 149 answered the “Extinction from AI” question.
That’s 20% of the 738 respondents.
– Only 162 answered the “Extinction from human failure to control AI” question.
It’s quite a stretch to turn 81 people (some of whom are undergraduate and graduate students) into “half of all AI researchers” (a field that includes hundreds of thousands of researchers).
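For what it’s worth, the arithmetic behind the headline claim is easy to check yourself. A quick sketch, using only the figures quoted above from the survey’s dataset:

```python
# Figures quoted above from the AI Impacts survey discussion.
respondents = 738    # total survey respondents
extinction_q = 149   # answered the "Extinction from AI" question
control_q = 162      # answered the "failure to control AI" question

# Share of respondents who were even asked the extinction question:
share = extinction_q / respondents
print(f"{share:.0%}")   # ~20% of respondents

# "Half" of the larger subsample is the entire group behind the
# "half of all AI researchers" headline:
print(control_q // 2)   # 81 people
```

Twenty percent of an already small, self-selected sample is a long way from “all AI researchers.”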
Who’s responsible for this survey (and its misrepresentation in the media)? Effective Altruism organizations that focus on “AI existential risk.” (Look surprised).
Who else? The notorious FTX Future Fund. In June 2022, it pledged “Up to $250k to support rerunning the highly-cited survey from 2016.” AI Impacts initially thanked FTX (“We thank FTX Future Fund for funding this project”). Then, their “Contributions” section became quite telling: “We thank FTX Future Fund for encouraging this project, though they did not ultimately fund it as anticipated due to the Bankruptcy of FTX.” So, the infamous crypto executive Sam Bankman-Fried wanted to support this as well, but, you know, fraud and stuff.
What is the background of AI Impacts’ researchers? Katja Grace, who co-founded the AI Impacts project, is from MIRI and the Future of Humanity Institute and believes AI “seems decently likely to literally destroy humanity (!!).” The two others were Zach Stein-Perlman, who describes himself as an “Aspiring rationalist and effective altruist,” and Ben Weinstein-Raun, who also spent years at Yudkowsky’s MIRI. As a recap, the AI Impacts team conducting research on “AI Safety” is like anti-vax activist Robert F. Kennedy Jr. conducting research on “Vaccine Safety.” The same inherent bias.
In 2022, Tristan Harris told “60 Minutes”: “The more moral outrageous language you use, the more inflammatory language, contemptuous language, the more indignation you use, the more it will get shared.”
Finally, we can agree on something. Tristan Harris took aim at social media platforms for what he claimed was their outrageous behavior, but it is actually his own way of operating: load up on outrageous, inflammatory language. He uses it around the dangers of emerging technologies to create panic. He didn’t invent this trend, but he profits greatly from it.
Moving forward, neither AI Hype nor AI Criti-Hype should be amplified.
There’s no need to repeat Google’s disinformation about its AI program learning Bengali, a language it supposedly was never trained on – since it was proven that Bengali was one of the languages it was trained on. Similarly, there’s no need to repeat the disinformation that “half of all AI researchers believe” human extinction is coming. The New York Times should issue a correction to Yuval Harari, Tristan Harris, and Aza Raskin’s op-ed. Time Magazine should also issue a correction to Max Tegmark’s op-ed, which makes the same claim multiple times. That’s the ethical thing to do.
There are real issues we need to be worried about regarding the potential impact of generative AI. For example, my article on AI-generated art tools in November 2022 raised the alarm about deepfakes and how this technology can be easily weaponized (those paragraphs are even more relevant today). In addition to spreading falsehood, there are issues with bias, cybersecurity risks, and lack of transparency and accountability.
Those issues are unrelated to “human extinction” or “armies of Gollems” controlling our brains. The sensationalism of the AI Dilemma distracts us from the actual issues of today and tomorrow. We should stay away from imaginary threats and God-like/monstrous depictions. The solution to AI-lust (utopia) or AI-lash (Apocalypse) resides in… AI realism.
TIME’s cover story decided to go even further and argued: “If future AIs gain the ability to rapidly improve themselves without human guidance or intervention, they could potentially wipe out humanity.” In this scenario, the computer scientists’ job is “making sure the AIs don’t wipe us out!”
Hmmm. Okay.
There’s a strange synergy now between people who hype AI’s capabilities and those who, in turn, create false fears about those so-called capabilities.
The false fears part of this equation usually escalates to absurdity. Like headlines that begin with a “war” (a new culture clash and a total war between artists and machines), progress to a “deadly war” (“Will AI generators kill the artist?”), and end up in a total Doomsday scenario (“AI could kill Everyone”!).
I previously called this phenomenon – “Techlash Filter.” In a nutshell, while Instagram filters make us look younger and Lensa makes us hotter, Techlash filters make technology scarier.
It’s all overwhelming. But I’m here to tell you that none of this is new. By studying the media’s coverage of AI, we can see how it follows old patterns.
Since we are flooded with news about generative AI and its “magic powers,” I want to help you navigate the terrain. Looking at past media studies, I gathered the “Top 10 AI frames” (by Hannes Cools, Baldwin Van Gorp, and Michaël Opgenhaffen, 2022). They are organized from the most positive (pro-AI) to the most negative (anti-AI). Together, they encapsulate the media’s “know-how” for describing AI.
Following each title and short description, you’ll see how it is manifested in current media coverage of generative AI. My hope is that after reading this, you’ll be able to cut through the AI hype.
1. Gate to Heaven.
A win-win situation for humans, where machines do things without human interference. AI brings a futuristic utopian ideal. The sensationalism here exaggerates the potential benefits and positive consequences of AI.
The co-pilot theme. It focuses on AI assisting humans in performing tasks. It includes examples of tasks humans will not need to do in the future because AI will do the job for them. This will free humans up to do other, better, more interesting tasks.
Improvement process: how AI will herald new social developments. AI as a means of improving the quality of life or solving problems. Economic development includes investments, market benefits, and competitiveness at the local, national, or global level.
The capabilities of AI are dependent on human knowledge. It’s often linked to the responsibility of humans for how AI is shaped and developed. It focuses on policymaking, regulation, and issues like control, ownership, participation, responsiveness, and transparency.
A game among elites, a battle of personalities and groups, who’s ahead or behind / who’s winning or losing in the race to develop the latest AI technology.
AI poses an existential threat to humanity or what it means to be human. It includes the loss of human control (entire autonomy). It calls for action in the face of out-of-control consequences and possible catastrophes. The sensationalism here exaggerates the potential dangers and negative impacts of AI.
Interestingly, studies found that the frames most commonly used by the media when discussing AI are “a helping hand” and “social progress” or the alarming “Frankenstein’s monster/Pandora’s Box.” It’s unsurprising, as the media is drawn to extreme depictions.
If you think that the above examples represent the peak of the current panic, I’m sorry to say that we haven’t reached it yet. Along with the enthusiastic utopian promises, expect more dystopian descriptions of Skynet (Terminator), HAL 9000 (2001: A Space Odyssey), and Frankenstein’s monster.
We’ve noted for several years how the “race to 5G” was largely just hype by telecoms and hardware vendors eager to sell more gear and justify high U.S. mobile data prices. While 5G does provide faster, more resilient, and lower latency networks, it’s more of an evolution than a revolution.
But that’s not what telecom giants like Verizon, T-Mobile, and AT&T promised. All three routinely promised that 5G would change the way we live and work, usher forth the smart cities of tomorrow, and even revolutionize the way we treat cancer. None of those things wound up being true.
Two big claims by the wireless industry were that 5G was going to revolutionize self-driving vehicle automation and be a key player in the “metaverse” (Facebook’s idiotic term for all future interactive online technologies involving virtual spaces). But again, that didn’t happen either:
Specifically, metaverse proponent Meta (formerly Facebook) lost more than $700 billion in value during 2022, with shares tumbling further this week on news that CEO Mark Zuckerberg will continue investing in metaverse services into 2023. Separately, Tesla, Ford and General Motors have all notched notable setbacks in their pursuit of autonomous cars, a concept that has received an estimated $100 billion in research and development so far. One autonomous driving pioneer recently bemoaned the fact that the technology “has delivered so little.”
Of course, the Zuckerverse and full self driving falling on their faces weren’t 5G’s fault. But again, 5G was supposed to be a driving force for these evolutions, yet simply didn’t deliver on any of the promises we were subjected to over the last half a decade. It didn’t even fully deliver (yet) on its most basic of promises: affordable next-generation connectivity.
US 5G performance was significantly worse than most overseas deployments due to a dearth of middle-band spectrum. Less talked about (because it’s a preferred outcome for the industry and the policymakers who love them) is the fact U.S. wireless data prices continue to be some of the highest in the developed world, something that only tends to increase with market consolidation.
Getting excited about innovative new technologies is one thing, but the massive chasm that continues to grow between marketing hype and reality in America is something else altogether. Unrealistic claims may drive stock valuations and Elon Musk’s ego on Twitter, but it eventually puts a bad taste in the mouth of actual consumers, and in 5G’s case associated the wireless standard with hype and bluster.
Late last year, we noted how the FAA and the FCC (the agency that actually knows how spectrum works) had gotten into a bit of an ugly tussle over the FAA’s claim that 5G could harm air travel safety.
The FAA claimed that deploying 5G in the 3.7 to 3.98 GHz “C-Band” would cause interference with certain radio altimeters. But the FCC conducted its own study showing minimal issues, and pointed to the more than 40 countries that have deployed 5G in this bandwidth with no evidence of harm. Lifelong wireless spectrum policy experts like Harold Feld also blogged about how this was an overheated controversy, and any real harm could be mitigated.
It didn’t much matter. It didn’t take long before the news wires were filled with reports about how 5G was going to be a diabolical public safety menace when it came to air travel. In part, thanks to folks at the FAA, who leaked scary stories to outlets like the Wall Street Journal.
Researchers found that 5G transmissions stay safely within their assigned frequencies and mostly don’t point signals skyward where aircraft operate, according to the report released Tuesday, the first of several from the government on the new high-speed mobile phone service.
There is a “low level of unwanted 5G emissions” in frequencies used by so-called radar altimeters — which calculate a plane’s distance from the ground and are critical to landing in low visibility — the National Telecommunications and Information Administration said in the report.
The findings offer the strongest indication to date that the patches being applied to some aircraft models should work well to protect them.
The fact that this always was a minor, fixable problem probably won’t get anywhere near the coverage you saw last year when countless news outlets proclaimed that airliners could soon start falling from the sky thanks to 5G. This was also a weird instance where the FAA failed to cooperatively heed the insights of the FCC, the one regulator specifically tasked with understanding how wireless spectrum actually works.
It will never stop being bizarre to me that a social media app tried to claim ownership of VR, AR, and effectively every next-gen, Internet-related technology under the “Metaverse” brand… and the entirety of the tech press just simply… went along with it. As a result, we’ve spent the better part of the last few years mired in an endless ocean of unhinged hyperbole about “the Metaverse vision” and what it means.
While the press and investors have spent countless hours propping up Zuckerberg’s ego on this subject, the actual end product isn’t much to write home about. Employees have found Meta’s flagship VR social network, Horizon Worlds, to be a buggy mess they don’t enjoy using:
“Since launching late last year, we have seen that the core thesis of Horizon Worlds — a synchronous social network where creators can build engaging worlds — is strong,” [Meta’s VP of Metaverse, Vishal] Shah wrote in a memo last month. “But currently feedback from our creators, users, playtesters, and many of us on the team is that the aggregate weight of papercuts, stability issues, and bugs is making it too hard for our community to experience the magic of Horizon. Simply put, for an experience to become delightful and retentive, it must first be usable and well crafted.”
At the same time, Zuckerberg’s ego has resulted in all Metaverse marketing utilizing the image of a CEO whose outward-facing charm is muted at best. Despite having an unlimited marketing budget and access to the best marketing talent in the world, most Metaverse marketing looks like it was barfed out of a 2007-era Xbox promotional demo, with Zuckerberg’s pasty visage bizarrely the singular focus.
The new Meta Quest Pro VR headset, released this week, could possibly be a huge evolutionary leap, but again, you’d never really know it because Meta’s update this week featured a gobsmacking and bizarrely heavy dose of poorly rendered simulacrums of an already charisma-challenged CEO.
“Mr. Zuckerberg’s zeal for the metaverse has been met with skepticism by some Meta employees. This year, he urged teams to hold meetings inside Meta’s Horizon Workrooms app, which allows users to gather in virtual conference rooms. But many employees didn’t own V.R. headsets or hadn’t set them up yet, and had to scramble to buy and register devices before managers caught on, according to one person with knowledge of the events.
In a May poll of 1,000 Meta employees conducted by Blind, an anonymous professional social network, only 58 percent said they understood the company’s metaverse strategy.
The foundational idea that Zuckerberg can convince the entirety of Facebook’s aging populace to migrate to a sometimes vomit-inducing walled garden of sweaty plastic headsets never made coherent sense. But because Zuckerberg is so wealthy, absolute legions of yes men and women have lined up in service to his ego. So far that’s not working out great, with Meta stock seeing a 60 percent drop in the last year alone.
In the U.S. there’s long been a steadily growing chasm between marketing and reality, and the Metaverse personifies this dominant American cultural trait. Marketing could go a long way toward covering the warts of Horizon Worlds, but there’s absolutely nothing about the current marketing that screams cutting edge or futuristic, and Zuckerberg’s mandated presence is just… odd.
Such terrible marketing can’t obscure the fact that Meta can’t seem to innovate its way around competitors like TikTok. Nor has it proven (at any point, really) that it can be innovative enough to become the kind of next-generation AR/VR global town square it envisions.
Facebook has never really been known as an innovative company on the kind of scale we reserve for companies like Apple, but the Metaverse hype and investment train requires that everybody pretend otherwise in a strange, greedy, mass delusion. And with the FTC finally (for now) cracking down on the company’s longstanding catch and kill strategies, Meta can’t M&A its way to AR/VR dominance either.
Meta could still possibly succeed if it removed Zuckerberg’s ego (and possibly Zuckerberg himself) from the management equation, stopped using a man with the charisma of a damp walnut in absolutely all Metaverse marketing, and gained a little humility after the last few years of regulatory, political, and market headaches. But there’s scant evidence that any of that seems likely anytime soon.
Despite Elon Musk’s disdain for the press, his legend wouldn’t exist without the media’s need to hyperventilate over every last thing that comes out of the billionaire’s mouth. We’re at the point where the dumbest offhand comment by Musk becomes its own three-week news cycle (see the coverage spun out of Musk’s comments on a baseless story about somebody cheating at chess with anal beads).
Of course it’s even worse if Musk says something that actually sounds important. Like when Musk recently proclaimed he’d be offering Starlink satellite broadband service in Iran in a heroic bid to help protesting Iranians avoid government surveillance and censorship. It was literally a two-word tweet, but the claim, as usual, resulted in lots of ass kissing and a week-long news cycle about how Musk was heroically helping Iranians.
But the announcement was hollow. Not that you’d know this by perusing press stories. Only a few outlets, like Al Jazeera and The Intercept, could be bothered to dig behind the claims to discover the announcement didn’t actually accomplish much of anything real.
Iran quickly banned the Starlink website, and the only way actual Iranians would be able to use the service is if somebody smuggled Starlink dishes (aka “terminals”) into the country in the middle of a massive wave of violent unrest, something that’s likely impossible at any real scale. There’s also the issue of no ground stations tying connectivity together in Iran:
Musk’s plan is further complicated by Starlink’s reliance on ground stations: communications facilities that allow the SpaceX satellites to plug into earthbound internet infrastructure from orbit. While upgraded Starlink satellites may no longer need these ground stations in the near future, the network of today still largely requires them to service a country as vast as Iran, said Humphreys, the University of Texas professor. Again, Iran is unlikely to approve the construction within its borders of satellite installations owned by an American defense contractor.
So even if Musk wanted to offer struggling Iranians broadband access they’re extremely unlikely to be able to get dishes. And even if they could get dishes, they probably couldn’t use them because the necessary infrastructure wasn’t in place. Of course Musk knew this. But Musk also knows that any random bullshit that comes out of his mouth creates several weeks of free press because the ad-based U.S. press has steadily devolved into a billionaire-coddling bullshit clickbait and controversy machine.
The Intercept found it didn’t take much for large swaths of the Internet to believe that the billionaire had dramatically changed things in Iran with a tweet. Musk fandom is often a fan-fiction-based community, where truth is fairly negotiable.
That’s not to say that Starlink can’t help people in countries where emergency connectivity is needed, such as in Ukraine. Or rural Kentucky (assuming they can afford the $710 first month bill). But it is to say that turning your brain off every single time Elon Musk opens his mouth because you’ve convinced yourself he’s some kind of deity is violently annoying to people still living in reality.
And while Musk loves to whine and cry about the unfairness of the press, his legend literally wouldn’t exist without the endless supply of clickbait-seeking editors who are completely uninterested in the actual truth behind any and every claim the man makes, whether it’s the capabilities of “full self driving” or Starlink’s potential.