We’ve long noted how 5G wireless is more of an evolution than a revolution. Yes, it results in faster, better networks, but it’s not a technology that’s truly transformative.
Knowing this, the wireless industry spent years coming up with all kinds of outlandish claims about how 5G can cure cancer or solve climate change in a bid to drum up interest and sales. My favorite type of this marketing involves taking something that doesn’t actually need 5G to work and pretending that only 5G innovation made it possible, then watching as a lazy press just regurgitates the claims.
Like when T-Mobile got a bunch of credulous press coverage for a robot that could give remote tattoos over 5G (which could have been done over 4G, or Wi-Fi, or even DSL). Or when a Korean coffee brand got oodles of free press for a “5G powered robot barista” (which could have been done over Wi-Fi). Or when the industry claimed that 5G and AR would revolutionize fashion by letting folks watch fashion shows in AR or VR (which could have been done… you get the point).
Mindless 5G medical hype has been a particularly healthy niche. Like when Verizon hyped “5G-powered” medical gear that not only didn’t actually require 5G to work, but wasn’t likely to be used by actual medical professionals who generally prefer fiber, Ethernet, and gigabit Wi-Fi due to the less reliable nature of cellular.
There are just endless examples of this kind of marketing symbiosis between wireless carriers and a lazy, gullible tech press.
The latest and potentially greatest example of this art form involves the claim that 5G helped conduct a remote surgery on a banana between London and Los Angeles. A video purportedly showing the procedure has been making the rounds for a few years, often resulting in clickbait stories all over the internet about how this was only made possible by the low-latency, innovative potential of 5G!
More recently, The Verge’s Nilay Patel did some very basic due diligence and found that the entire thing was bullshit. So much bullshit, in fact, that 5G played absolutely no role in what was shown:
“This video does not in any way show a robotic surgery being done over 5G. The video was first posted to TikTok during the pandemic by Dr. Kais Rona, who is a bariatric and robotic surgeon at Smart Dimensions Weight Loss in Southern California, and he’s been actively telling people that it’s not 5G ever since.”
Usually, a company like Verizon or Huawei will conduct an elaborate marketing scheme involving doing medical procedures over 5G to pretend that it’s the 5G making it all possible. Press outlets, some of them reputable, will then regurgitate the claims without noting that 5G isn’t actually making this possible, or that the procedure just as easily could have been done over Wi-Fi, or preferably, fiber optics and Ethernet.
This kind of media gullibility is helpful to a wireless industry keen on obscuring pesky facts like Americans pay some of the highest prices in the world for 5G that’s a half-cooked mess when compared to overseas deployments. It’s hard to find many stories about how U.S. wireless is expensive and mediocre due to monopolization, but you’ll find no shortage of “news” reports lauding 5G’s overstated or outright fraudulent innovation potential.
In this case the 5G bullshit didn’t even need the industry’s involvement. All that was required was a single fake claim on a posted video for the hype to resonate across AI-generated clickbait mills for all of eternity. A pump primed years earlier thanks to uncritical telecom trade mags, and lazy, underpaid reporters who can’t be bothered to ask basic questions or pick up the phone.
We’ve noted for several years how the “race to 5G” was largely just hype by telecoms and hardware vendors eager to sell more gear and justify high U.S. mobile data prices. While 5G does provide faster, more resilient, and lower latency networks, it’s more of an evolution than a revolution.
But that’s not what telecom giants like Verizon, T-Mobile, and AT&T promised. All three routinely promised that 5G would change the way we live and work, usher forth the smart cities of tomorrow, and even revolutionize the way we treat cancer. None of those things wound up being true (I enjoyed talking to one medical professional who basically laughed in my face about the cancer claim).
When 5G did arrive, it didn’t even live up to its basic promise, really. U.S. implementations were decidedly slower, spottier, and more expensive than many overseas networks, thanks to the usual industry consolidation and U.S. regulatory fecklessness. The end result: wireless carriers associated a promising but not world-changing technological improvement with hype and bluster in the minds of consumers.
In a bit of a retrospective, Washington Post tech columnist Shira Ovide looks back at the 5G hype and hopes that maybe, just maybe, somebody in industry will “learn their lesson” from the experience:
We and companies that make technology must acknowledge that not every new technology changes our lives — at least not in a way that makes for a compelling science fiction movie…5G was an incremental technical improvement that companies tried to tell us was a revolutionary leap. It wasn’t.
The sentiment of the piece is absolutely correct. Industry claims should be grounded in reality to ensure consumers, markets, investors, and the public have a realistic, fact-based understanding of a technology’s potential.
But in case you hadn’t noticed with NFTs, crypto, AI, and every other technology hype cycle that rolls through, there’s no financial incentive for measured introspection of this type in the attention economy we’ve created. You don’t get the kind of headlines and attention companies and VCs crave by explaining what a technology actually does; you increasingly get it by being monumentally full of shit.
That’s particularly true with a technology like 5G, which wasn’t a revolution so much as an evolution of existing tech. Not to say 5G doesn’t bring value, but faster, lower-latency networks that are easier to maintain simply aren’t sexy, and to keep boosting marketing and investment returns in this increasingly unhinged attention economy, companies are routinely motivated to embrace the preposterous.
On April 13, a new YouTube video called “The AI Dilemma” was shared by Tristan Harris, the central figure of The Social Dilemma. He encouraged his followers to “share it widely” in order to understand the likelihood of catastrophe. Unfortunately, like the Social Dilemma, the AI Dilemma is big on hype and deception, and not so big on accuracy or facts. Although it deals with a different tech (not social media algorithms but generative AI), the creators use the same manipulation and scare tactics. There is an obvious resemblance between the moral panic techlash around social media and the one being generated around AI.
As the AI Dilemma’s shares and views increase, we need to address its deceptive content. First, it clearly pulls from the same moral panic hype playbook as the Social Dilemma did:
1. The Social Dilemma argued that social media have godlike power over people (controlling users like marionettes). The AI Dilemma argues that AI has godlike power over people.
2. The Social Dilemma anthropomorphized the evil algorithms. The AI Dilemma anthropomorphizes the evil AI. Both are monsters.
3. Causation is asserted as a fact: those technological “monsters” CAUSE all the harm. Despite other factors – confounding variables, complicated society, messy humanity, inconclusive research into those phenomena – it’s all due to the evil algorithms/AI.
4. The monsters’ final goal may be… extinction. “Teach an AI to fish, and it’ll teach itself biology, chemistry, oceanography, evolutionary theory … and then fish all the fish to extinction.” (What?)
5. The Social Dilemma argued that algorithms hijack our brains, leaving us to do what they want without resistance. The algorithms were played by 3 dudes in a control room, and in some scenes, the “algorithms” were “mad.” In the AI Dilemma, this anthropomorphizing is taken to the next level:
Tristan Harris and Aza Raskin replaced the word AI with an entirely new term, “Gollem-class AIs.” They wrote “Generative Large Language Multi-Modal Model” in order to get to “GLLMM.” “Golem” in Jewish folklore is an anthropomorphic being created from inanimate matter. “Suddenly, this inanimate thing has certain emergent capabilities,” they explained. “So, we’re just calling them Gollem-class AIs.”
What are those Gollems doing? Apparently, “Armies of Gollem AIs pointed at our brains, strip-mining us of everything that isn’t protected by 19th-century law.”
If you weren’t already scared, this should have kept you awake at night, right?
In short, the AI Dilemma is full of weird depictions of AI. According to experts, the risk of anthropomorphizing AI is that it inflates the machine’s capabilities and distorts the reality of what it can and can’t do, resulting in misguided fears. In the case of this lecture, that was the entire point.
6. The AI Dilemma’s creators thought they had “comic relief” at 36:45, when they showed a snippet from “Little Shop of Horrors” (“Feed me!”). But the real comic relief came at 51:45, when Tristan Harris stated, “I don’t want to be talking about the darkest horror shows of the world.”
A specific survey was mentioned 3 times throughout the AI Dilemma. It was about how “Half of” “over 700 top academics and researchers” “stated that there was a 10 percent or greater chance of human extinction from future AI systems” or “human inability to control future AI systems.”
It is a FALSE claim. My analysis of this (frequently quoted) survey’s anonymized dataset (Google Doc spreadsheets) revealed numerous problems that call into question not just the study, but those promoting it:
1. The “Extinction from AI” Questions
The “Extinction from AI” question was: “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?”
The “Extinction from human failure to control AI” question was: “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”
There are plenty of vague phrases here, from the “disempowerment of the human species” (?!) to the apparent absence of a timeframe for this unclear futuristic scenario.
When the leading researcher of this survey, Katja Grace, was asked on a podcast: “So, given that there are these large framing differences and these large differences based on the continent of people’s undergraduate institutions, should we pay any attention to these results?” she said: “I guess things can be very noisy, and still some good evidence if you kind of average them all together or something.” Good evidence? Not really.
2. The Small Sample Size
AI Impacts contacted attendees of two ML conferences (NeurIPS & ICML), not a gathering of the broader AI community. Only 17% responded to the survey at all, and a much smaller group was asked the specific “Extinction from AI” questions.
– Only 149 answered the “Extinction from AI” question. That’s 20% of the 738 respondents.
– Only 162 answered the “Extinction from human failure to control AI” question. That’s 22% of the 738 respondents.
It’s quite a stretch to turn 81 people (some of whom are undergraduate and graduate students) into “half of all AI researchers” (a field with hundreds of thousands of researchers).
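For anyone who wants to sanity-check that arithmetic, here’s a minimal sketch (the figures are the ones cited above; the “half” in the viral claim boils down to roughly half of the respondents to a single question):

```python
# Sanity-checking the survey math cited above.
total_respondents = 738   # roughly 17% of the conference attendees contacted

extinction_q = 149        # answered the "Extinction from AI" question
control_q = 162           # answered the "failure to control AI" question

print(f"Extinction question coverage: {extinction_q / total_respondents:.0%}")  # 20%
print(f"Control question coverage:    {control_q / total_respondents:.0%}")     # 22%

# "Half" of one question's respondents is the basis of the viral claim:
print(f"People behind 'half of all AI researchers': ~{control_q // 2}")         # 81
```

Twenty percent coverage on the key question, and roughly 81 actual humans behind the “half of all AI researchers” headline.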
Who’s responsible for this survey (and its misrepresentation in the media)? Effective Altruism organizations that focus on “AI existential risk.” (Look surprised).
Who else? The notorious FTX Future Fund. In June 2022, it pledged “Up to $250k to support rerunning the highly-cited survey from 2016.” AI Impacts initially thanked FTX (“We thank FTX Future Fund for funding this project”). Then, their “Contributions” section became quite telling: “We thank FTX Future Fund for encouraging this project, though they did not ultimately fund it as anticipated due to the Bankruptcy of FTX.” So, the infamous crypto executive Sam Bankman-Fried wanted to support this as well, but, you know, fraud and stuff.
What is the background of AI Impacts’ researchers? Katja Grace, who co-founded the AI Impacts project, is from MIRI and the Future of Humanity Institute and believes AI “seems decently likely to literally destroy humanity (!!).” The two others were Zach Stein-Perlman, who describes himself as an “Aspiring rationalist and effective altruist,” and Ben Weinstein-Raun, who also spent years at Yudkowsky’s MIRI. To recap: the AI Impacts team conducting research on “AI Safety” is like anti-vax activist Robert F. Kennedy Jr. conducting research on “Vaccine Safety.” The same inherent bias.
In 2022, Tristan Harris told “60 Minutes”: “The more moral outrageous language you use, the more inflammatory language, contemptuous language, the more indignation you use, the more it will get shared.”
Finally, we can agree on something. Tristan Harris took aim at social media platforms for what he claimed was their outrageous behavior, but it is actually his own way of operating: load up on outrageous, inflammatory language. He uses it around the dangers of emerging technologies to create panic. He didn’t invent this trend, but he profits greatly from it.
Moving forward, neither AI Hype nor AI Criti-Hype should be amplified.
There’s no need to repeat Google’s disinformation about its AI program learning Bengali, a language it supposedly was never trained on – since it was proven that Bengali was one of the languages it was trained on. Similarly, there’s no need to repeat the disinformation that “half of all AI researchers believe” human extinction is coming. The New York Times should issue a correction to Yuval Harari, Tristan Harris, and Aza Raskin’s op-ed. Time Magazine should also issue a correction to Max Tegmark’s op-ed, which makes the same claim multiple times. That’s the ethical thing to do.
There are real issues we need to be worried about regarding the potential impact of generative AI. For example, my article on AI-generated art tools in November 2022 raised the alarm about deepfakes and how this technology can be easily weaponized (those paragraphs are even more relevant today). In addition to spreading falsehoods, there are issues with bias, cybersecurity risks, and a lack of transparency and accountability.
Those issues are unrelated to “human extinction” or “armies of Gollems” controlling our brains. The sensationalism of the AI Dilemma distracts us from the actual issues of today and tomorrow. We should stay away from imaginary threats and God-like/monstrous depictions. The solution to AI-lust (utopia) or AI-lash (Apocalypse) resides in… AI realism.
TIME’s cover story decided to go even further and argued: “If future AIs gain the ability to rapidly improve themselves without human guidance or intervention, they could potentially wipe out humanity.” In this scenario, the computer scientists’ job is “making sure the AIs don’t wipe us out!”
Hmmm. Okay.
There’s a strange synergy now between people who hype AI’s capabilities and those who thereby create false fears (about those so-called capabilities).
The false fears part of this equation usually escalates to absurdity. Like headlines that begin with a “war” (a new culture clash and a total war between artists and machines), progress to a “deadly war” (“Will AI generators kill the artist?”), and end up in a total Doomsday scenario (“AI could kill Everyone”!).
I previously called this phenomenon the “Techlash Filter.” In a nutshell: while Instagram filters make us look younger and Lensa makes us hotter, Techlash filters make technology scarier.
It’s all overwhelming. But I’m here to tell you that none of this is new. By studying the media’s coverage of AI, we can see how it follows old patterns.
Since we are flooded with news about generative AI and its “magic powers,” I want to help you navigate the terrain. Looking at past media studies, I gathered the “Top 10 AI frames” (by Hannes Cools, Baldwin Van Gorp, and Michaël Opgenhaffen, 2022). They are organized from the most positive (pro-AI) to the most negative (anti-AI). Together, they encapsulate the media’s “know-how” for describing AI.
Following each title and short description, you’ll see how it is manifested in current media coverage of generative AI. My hope is that after reading this, you’ll be able to cut through the AI hype.
1. Gate to Heaven.
A win-win situation for humans, where machines do things without human interference. AI brings a futuristic utopian ideal. The sensationalism here exaggerates the potential benefits and positive consequences of AI.
The co-pilot theme. It focuses on AI assisting humans in performing tasks. It includes examples of tasks humans will not need to do in the future because AI will do the job for them. This will free humans up to do other, better, more interesting tasks.
Improvement process: how AI will herald new social developments. AI as a means of improving the quality of life or solving problems. Economic development includes investments, market benefits, and competitiveness at the local, national, or global level.
The capabilities of AI are dependent on human knowledge. It’s often linked to the responsibility of humans for how AI is shaped and developed. It focuses on policymaking, regulation, and issues like control, ownership, participation, responsiveness, and transparency.
A game among elites, a battle of personalities and groups, who’s ahead or behind / who’s winning or losing in the race to develop the latest AI technology.
AI poses an existential threat to humanity or what it means to be human. It includes the loss of human control (entire autonomy). It calls for action in the face of out-of-control consequences and possible catastrophes. The sensationalism here exaggerates the potential dangers and negative impacts of AI.
Interestingly, studies found that the frames most commonly used by the media when discussing AI are “a helping hand” and “social progress” or the alarming “Frankenstein’s monster/Pandora’s Box.” It’s unsurprising, as the media is drawn to extreme depictions.
If you think that the above examples represent the peak of the current panic, I’m sorry to say that we haven’t reached it yet. Along with the enthusiastic utopian promises, expect more dystopian descriptions of Skynet (Terminator), HAL 9000 (2001: A Space Odyssey), and Frankenstein’s monster.
As noted above, the “race to 5G” was largely hype by telecoms and hardware vendors eager to sell more gear and justify high U.S. mobile data prices, and the transformative promises Verizon, T-Mobile, and AT&T made never materialized.
Two big claims by the wireless industry were that 5G would revolutionize self-driving vehicle automation and be a key player in the “metaverse” (Facebook’s idiotic term for all future interactive online technologies that involve virtual spaces). But neither of those happened either:
Specifically, metaverse proponent Meta (formerly Facebook) lost more than $700 billion in value during 2022, with shares tumbling further this week on news that CEO Mark Zuckerberg will continue investing in metaverse services into 2023. Separately, Tesla, Ford and General Motors have all notched notable setbacks in their pursuit of autonomous cars, a concept that has received an estimated $100 billion in research and development so far. One autonomous driving pioneer recently bemoaned the fact that the technology “has delivered so little.”
Of course, the Zuckerverse and full self-driving falling on their faces weren’t 5G’s fault. But again, 5G was supposed to be a driving force behind these evolutions, yet it simply didn’t deliver on any of the promises we were subjected to over the last half a decade. It didn’t even fully deliver (yet) on its most basic promise: affordable next-generation connectivity.
U.S. 5G performance was significantly worse than most overseas deployments due to a dearth of mid-band spectrum. Less talked about (because it’s a preferred outcome for the industry and the policymakers who love them) is the fact that U.S. wireless data prices continue to be some of the highest in the developed world, something that only tends to increase with market consolidation.
Getting excited about innovative new technologies is one thing, but the massive chasm that continues to grow between marketing hype and reality in America is something else altogether. Unrealistic claims may drive stock valuations and Elon Musk’s ego on Twitter, but they eventually put a bad taste in the mouths of actual consumers, and in 5G’s case they associated the wireless standard with hype and bluster.
Late last year, we noted how the FAA and the FCC (the agency that actually knows how spectrum works) had gotten into a bit of an ugly tussle over the FAA’s claim that 5G could harm air travel safety.
The FAA claimed that deploying 5G in the 3.7 to 3.98 GHz “C-Band” would cause interference with certain radio altimeters. But the FCC conducted its own study showing minimal issues, and pointed to the more than 40 countries that have deployed 5G in this bandwidth with no evidence of harm. Lifelong wireless spectrum policy experts like Harold Feld also blogged about how this was an overheated controversy, and how any real harm could be mitigated.
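For a rough sense of why spectrum experts were unimpressed: C-Band 5G tops out at 3.98 GHz, while radio altimeters operate in the internationally allocated 4.2 to 4.4 GHz band (that altimeter range isn’t mentioned above, but it’s the standard reference point in this fight). A back-of-the-envelope look at the separation:

```python
# Back-of-the-envelope spectrum separation (frequencies in GHz).
c_band_top = 3.98        # upper edge of C-Band 5G, per the FCC auction
altimeter_bottom = 4.2   # lower edge of the radio altimeter band

guard_band_mhz = round((altimeter_bottom - c_band_top) * 1000)
print(f"Guard band between 5G and altimeters: {guard_band_mhz} MHz")  # 220 MHz
```

That 220 MHz buffer is a big part of why the FCC, and the 40+ countries already running 5G in this range, saw minimal risk.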
It didn’t much matter. It didn’t take long before the news wires were filled with reports about how 5G was going to be a diabolical public safety menace when it came to air travel, in part thanks to folks at the FAA, who leaked scary stories to outlets like the Wall Street Journal. Now a new government report confirms the panic was overblown:
Researchers found that 5G transmissions stay safely within their assigned frequencies and mostly don’t point signals skyward where aircraft operate, according to the report released Tuesday, the first of several from the government on the new high-speed mobile phone service.
There is a “low level of unwanted 5G emissions” in frequencies used by so-called radar altimeters — which calculate a plane’s distance from the ground and are critical to landing in low visibility — the National Telecommunications and Information Administration said in the report.
The findings offer the strongest indication to date that the patches being applied to some aircraft models should work well to protect them.
The fact that this always was a minor, fixable problem probably won’t get anywhere near the coverage you saw last year when countless news outlets proclaimed that airliners could soon start falling from the sky thanks to 5G. This was also a weird instance where the FAA failed to cooperatively heed the insights of the FCC, the one regulator specifically tasked with understanding how wireless spectrum actually works.
It will never stop being bizarre to me that a social media app tried to claim ownership of VR, AR, and effectively every next-gen, Internet-related technology under the “Metaverse” brand… and the entirety of the tech press just simply… went along with it. As a result, we’ve spent the better part of the last few years mired in an endless ocean of unhinged hyperbole about “the Metaverse vision” and what it means.
While the press and investors have spent countless hours propping up Zuckerberg’s ego on this subject, the actual end product isn’t much to write home about. Employees have found Meta’s flagship VR social network, Horizon Worlds, to be a buggy mess they don’t enjoy using:
“Since launching late last year, we have seen that the core thesis of Horizon Worlds — a synchronous social network where creators can build engaging worlds — is strong,” [Meta’s VP of Metaverse, Vishal] Shah wrote in a memo last month. “But currently feedback from our creators, users, playtesters, and many of us on the team is that the aggregate weight of papercuts, stability issues, and bugs is making it too hard for our community to experience the magic of Horizon. Simply put, for an experience to become delightful and retentive, it must first be usable and well crafted.”
At the same time, Zuckerberg’s ego has resulted in all Metaverse marketing utilizing the image of a CEO whose outward-facing charm is muted at best. Despite having an unlimited marketing budget and access to the best marketing talent in the world, most Metaverse marketing looks like it was barfed out of a 2007-era Xbox promotional demo, with Zuckerberg’s pasty visage bizarrely the singular focus.
The new Meta Quest Pro VR headset, released this week, could possibly be a huge evolutionary leap, but again, you’d never really know it because Meta’s update this week featured a gobsmacking and bizarrely heavy dose of poorly rendered simulacrums of an already charisma-challenged CEO.
“Mr. Zuckerberg’s zeal for the metaverse has been met with skepticism by some Meta employees. This year, he urged teams to hold meetings inside Meta’s Horizon Workrooms app, which allows users to gather in virtual conference rooms. But many employees didn’t own V.R. headsets or hadn’t set them up yet, and had to scramble to buy and register devices before managers caught on, according to one person with knowledge of the events.
In a May poll of 1,000 Meta employees conducted by Blind, an anonymous professional social network, only 58 percent said they understood the company’s metaverse strategy.”
The foundational idea that Zuckerberg can convince the entirety of Facebook’s aging populace to migrate to a sometimes vomit-inducing walled garden of sweaty plastic headsets never made coherent sense. But because Zuckerberg is so wealthy, absolute legions of yes men and women have lined up in service to his ego. So far that’s not working out great, with Meta stock seeing a 60 percent drop in the last year alone.
In the U.S. there’s long been a steadily growing chasm between marketing and reality, and the Metaverse personifies this dominant American cultural trait. Marketing could go a long way toward covering the warts of Horizon Worlds, but there’s absolutely nothing about the current marketing that screams cutting edge or futuristic, and Zuckerberg’s mandated presence is just… odd.
Such terrible marketing can’t obscure the fact that Meta can’t seem to innovate its way around competitors like TikTok. Nor has it proven (at any point, really) that it can be innovative enough to become the kind of next-generation AR/VR global town square it envisions itself becoming.
Facebook has never really been known as an innovative company on the kind of scale we reserve for companies like Apple, but the Metaverse hype and investment train requires that everybody pretend otherwise in a strange, greedy, mass delusion. And with the FTC finally (for now) cracking down on the company’s longstanding catch and kill strategies, Meta can’t M&A its way to AR/VR dominance either.
Meta could still possibly succeed if it removed Zuckerberg’s ego (and possibly Zuckerberg himself) from the management equation, stopped using a man with the charisma of a damp walnut in absolutely all Metaverse marketing, and gained a little humility after the last few years of regulatory, political, and market headaches. But there’s scant evidence that any of that seems likely anytime soon.
Despite Elon Musk’s disdain for the press, his legend wouldn’t exist without the media’s need to hyperventilate over every last thing that comes out of the billionaire’s mouth. We’re at the point where the dumbest offhand comment by Musk becomes its own three-week news cycle (see the entire news cycle based on Musk’s comments on a baseless story about somebody cheating at chess with anal beads).
Of course it’s even worse if Musk says something that actually sounds important. Like when Musk recently proclaimed he’d be offering Starlink satellite broadband service in Iran in a heroic bid to help protesting Iranians avoid government surveillance and censorship. It was literally a two-word tweet, but the claim, as usual, resulted in lots of ass kissing and a week-long news cycle about how Musk was heroically helping Iranians.
I believe that @elonmusk and @spacex Starlink should win the Nobel Prize for Peace.
But the announcement was hollow. Not that you’d know this by perusing press stories. Only a few outlets, like Al Jazeera and The Intercept, could be bothered to dig behind the claims to discover the announcement didn’t actually accomplish much of anything real.
Iran quickly banned the Starlink website, and the only way actual Iranians would be able to use the service is if somebody smuggled Starlink dishes (aka “terminals”) into the country in the middle of a massive wave of violent unrest, something that’s likely impossible at any real scale. There’s also the issue of no ground stations tying connectivity together in Iran:
Musk’s plan is further complicated by Starlink’s reliance on ground stations: communications facilities that allow the SpaceX satellites to plug into earthbound internet infrastructure from orbit. While upgraded Starlink satellites may no longer need these ground stations in the near future, the network of today still largely requires them to service a country as vast as Iran, said Humphreys, the University of Texas professor. Again, Iran is unlikely to approve the construction within its borders of satellite installations owned by an American defense contractor.
So even if Musk wanted to offer struggling Iranians broadband access, they’re extremely unlikely to be able to get dishes. And even if they could get dishes, they probably couldn’t use them because the necessary infrastructure isn’t in place. Of course Musk knew this. But Musk also knows that any random bullshit that comes out of his mouth creates several weeks of free press, because the ad-based U.S. press has steadily devolved into a billionaire-coddling clickbait and controversy machine.
The Intercept found it didn’t take much for large swaths of the Internet to believe that the billionaire had dramatically changed things in Iran with a tweet. Musk fandom is often a fan-fiction-based community, where truth is fairly negotiable.
That’s not to say that Starlink can’t help people in countries where emergency connectivity is needed, such as in Ukraine. Or rural Kentucky (assuming they can afford the $710 first month bill). But it is to say that turning your brain off every single time Elon Musk opens his mouth because you’ve convinced yourself he’s some kind of deity is violently annoying to people still living in reality.
And while Musk loves to whine and cry about the unfairness of the press, his legend literally wouldn’t exist without the endless supply of clickbait-seeking editors who are completely uninterested in the actual truth behind any and every claim the man makes, whether it’s the capabilities of “full self driving” or Starlink’s potential.
The Washington Post dropped what it pretended was a bit of a bombshell. In the story, Google software engineer Blake Lemoine implied that Google’s Language Model for Dialogue Applications (LaMDA) system, which pulls from Google’s vast data and word repositories to generate realistic, human-sounding chatbots, had become fully aware and sentient.
He followed that up with several blog posts alleging the same thing:
Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.
That was accompanied by a more skeptical piece over at the Economist where Google VP Blaise Aguera y Arcas still had this to say about the company’s LaMDA technology:
“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”
That set the stage for just an avalanche of aggregated news stories, blog posts, YouTube videos (many of them automated clickbait spam), and Twitter posts — all hyping the idea that HAL 9000 had been born in Mountain View, California, and that Lemoine was a heroic whistleblower saving a fledgling new lifeform from a merciless corporate overlord:
Google engineer thinks its LaMDA #AI has come to life – This is the most fascinating story with enormous implications, & @Google must fully restore Blake Lemoine’s employment & ability to publicly discuss his findings. @washingtonpost https://t.co/Lrkn8DkB1K
The problem? None of it was true. Google had achieved a very realistic simulacrum with its LaMDA system, but almost nobody who actually works in AI thinks that the system is remotely self-aware. That includes scientist and author Gary Marcus, whose blog post on the fracas is honestly the only thing you should probably bother reading on the subject:
Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.
Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.
That’s not to say that what Google has developed isn’t very cool and useful. If you’ve created a digital assistant so realistic even your engineers are buying into the idea it’s a real person, you’ve absolutely accomplished something with practical application potential. Still, as Marcus notes, when truly boiled down to its core components Google has built a complicated “spreadsheet for words,” not a sentient AI.
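To make the “spreadsheet for words” framing concrete, here’s a deliberately tiny sketch of the underlying idea (my toy illustration, not Google’s actual architecture; LaMDA is a vastly larger transformer-based system): predict the next word purely from patterns observed in training text, with no understanding attached.

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# "corpus," then sample statistically plausible continuations.
# Fluent-looking output; zero comprehension, zero sentience.
corpus = ("i feel happy . i feel sad . i feel like a person . "
          "i am a person . a person can feel").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

word, output = "i", ["i"]
for _ in range(10):
    nxt_counts = follows[word]
    if not nxt_counts:  # no observed continuation; stop
        break
    word = random.choices(list(nxt_counts), weights=nxt_counts.values())[0]
    output.append(word)

print(" ".join(output))  # e.g. "i feel like a person . i am a person"
```

Scale that same pattern-matching up by billions of parameters and trillions of words and you get output that feels eerily personlike, which is exactly the gullibility gap Marcus describes.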
The old quote “a lie can travel halfway around the world before the truth can get its boots on” is particularly true in the modern ad-engagement based media era, in which hyperbole and controversy rule and the truth (especially if it’s complicated or unsexy) is automatically devalued (I’m a reporter focused on complicated telecom policy and consumer rights issues, ask me how I know).
That again happened here, with Marcus’ debunking likely seeing a tiny fraction of the attention of stories hyping the illusion.
Criticism of the Post came fast and furious, with many noting that the paper lent credibility to a claim that just didn’t warrant it (which has been a positively brutal tendency of the political press over the last decade):
What if I went around UVa yelling that “Jefferson’s ghost haunts my office” and became so disruptive that I got suspended? Would reporters write credulous stories about Jefferson’s ghost and compel “experts” to deny the existence of said ghost?
This tends to happen a lot with AI, which as a technology is absolutely nowhere near sentience, but is routinely portrayed in the press as just a few clumsy steps from Skynet or HAL 9000 — simply because the truth doesn’t interest readers. “New technology is very scary” gets hits, so that was the angle pursued by the Post, which some media professors and critics thought was journalistic malpractice:
But the Post was just too eager for another #moralpanic story about strange things happening in these black boxes. They might as well be covering UFOs. Is this the reportorial diligence they bring to covering Donald Trump?
But what I don't understand, as a media analyst, is how the journalist from the Washington Post could look at that and go: "OMG yes, we need to write about chat AI being potentially sentient beings".
I mean, it's so far from reality there that it's insane to even publish it.
In short the Post amplified an inaccurate claim from an unreliable narrator because it knew that a moral panic about emerging technology would grab more reader eyeballs than a straight debunking (or obviously the correct approach of not covering it at all). While several outlets did push debunking pieces after a few days, they likely received a fraction of the attention of the original hype.
Which means you’ll almost certainly now be running into misinformed people at parties who think Google AI is sentient for years to come.
Fifth-generation wireless (5G) was supposed to change the world. According to carriers, not only was it supposed to bring about the “fourth industrial revolution,” it was supposed to revolutionize everything from smart cities to cancer treatment. Simultaneously, conspiracy theorists and internet imbeciles declared that 5G was responsible for everything from COVID-19 to your migraines.
Unfortunately for both sets of folks, data continues to indicate that 5G is nowhere near that interesting.
A number of recent studies have already shown that U.S. 5G is notably slower than most overseas deployments (thanks in part to the government’s failure to make more mid-band spectrum available for public use). Several other studies have shown that initial deployments are in many cases actually slower than existing 4G networks. That’s before you get to the fact that U.S. consumers already pay more for wireless than consumers in a long list of developed nations, thanks to sector consolidation.
While 5G is important, and will improve over time, it’s pretty clear that the technology is more of a modest evolution than a revolution, and 5G hype overkill (largely driven by a desperate desire to rekindle lagging smartphone sales) is a far cry from reality.
That’s not stopping us from already hyping 6G, though. Nokia CEO Pekka Lundmark says that 6G will hit the market sometime around 2030. And, as we saw with 5G, 6G is already being heralded as near-magical and transformative by the folks looking to sell phones and network hardware:
“Right now, we’re all building 5G networks, as we know, but by the time quantum computing is maturing for commercial applications, we’re going to be talking about 6G,” Lundmark said. “By then, [2030], definitely the smartphone as we know it today will not anymore be the most common interface.”
According to Lundmark, the “physical world and the digital world will grow together.” The eventual result could involve a user going into a VR world, flipping a switch or turning a dial, and changing something in the real, physical world.
An industrial metaverse “could include models similar to comprehensive, detailed digital twins of objects that exist in reality,” according to trade magazine Industry Week.
So again, as with 5G, a faster, more resilient wireless network isn’t good enough because it’s just not sexy enough. As a result, executives in the telecom space like to reach into a hat full of random buzzwords in a bid to make wireless network evolution sound almost like a miracle. In this case, that includes random references to quantum computing, an industrial metaverse, or a complete re-imagining of reality itself.
To be clear, AR, VR, and other technologies will evolve regardless of 5G and 6G, not because of them. Most of these technologies already work over gigabit Wi-Fi. And while faster, more resilient wireless connections will certainly be of benefit, they’re not driving the innovation in and of themselves. And the claim that the smartphone will be effectively dead by 2030 is just… silly.
Yeah, somebody will develop an amazing VR and AR experience (maybe it’s Apple, maybe it’s somebody nobody has heard of). Maybe they’ll even fix the simulation sickness problem and see widespread adoption. But 5G and 6G will supplement those efforts, not forge them, and a traditional smartphone isn’t going to simply cease to exist six or so years from now.
You would think that after 5G landed with a big thud in the United States, wireless carriers and telecom executives would be wary of associating the standard’s branding with empty hype and bluster. But given we’re not keen on learning much from experience or history, the cycle of unrealistic hype and unfulfilled promises appears set to repeat itself all over again.