2023: The Year Of AI Panic

from the ai-doomerism-was-funded-by-a-cult dept

In 2023, the extreme ideology of “human extinction from AI” became one of the most prominent trends in tech discourse, followed by equally extreme regulation proposals.

As we enter 2024, let’s take a moment to reflect: How did we get here?

Image: Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

2022: Public release of LLMs

The first big news story on LLMs (Large Language Models) can be traced to a (now famous) Google engineer. In June 2022, Blake Lemoine went on a media tour to claim that Google’s LaMDA (Language Model for Dialogue Applications) was “sentient.” Lemoine compared LaMDA to “an 8-year-old kid that happens to know physics.”

This news cycle was met with skepticism: “Robots can’t think or feel, despite what the researchers who build them want to believe. A.I. is not sentient. Why do people say it is?”

In August 2022, OpenAI made DALL-E 2 accessible to 1 million people.

In November 2022, the company launched a user-friendly chatbot named ChatGPT.

People started interacting with more advanced AI systems and impressive generative AI tools, with Blake Lemoine’s story still in the background.

At first, news articles debated issues like copyright and consent regarding AI-generated images (e.g., “AI Creating ‘Art’ Is An Ethical And Copyright Nightmare”) or how students will use ChatGPT to cheat on their assignments (e.g., “New York City blocks use of the ChatGPT bot in its schools,” “The College Essay Is Dead”).

2023: The AI monster must be tamed, or we will all die!

The AI arms race escalated when Microsoft’s Bing and Google’s Bard were launched back-to-back in February 2023. The overhyped utopian dreams helped fuel equally overhyped dystopian nightmares.

A turning point came after the release of New York Times columnist Kevin Roose’s story on his disturbing conversation with Microsoft’s new Bing chatbot. It has since become known as the “Sydney tried to break up my marriage” story. The printed version included parts of Roose’s correspondence with the chatbot, framed as “Bing’s Chatbot Drew Me In and Creeped Me Out.”

“The normal way that you deal with software that has a user interface bug is you just go fix the bug and apologize to the customer that triggered it,” responded Microsoft CTO Kevin Scott. “This one just happened to be one of the most-read stories in New York Times history.”

From there on, it snowballed into a headline competition, as noted by the Center for Data Innovation: “Once news media first get wind of a panic, it becomes a game of one-upmanship: the more outlandish the claims, the better.” It reached that point with TIME magazine’s June 12, 2023, cover story: THE END OF HUMANITY.

Two open letters on “existential risk” (AI “x-risk”) and numerous opinion pieces were published in 2023.

The first open letter, published on March 22, 2023, called for a 6-month pause. It was initiated by the Future of Life Institute, which was co-founded by Jaan Tallinn, Max Tegmark, Viktoriya Krakovna, Anthony Aguirre, and Meia Chita-Tegmark, and funded by Elon Musk (nearly 90% of FLI’s funds).

The letter called for AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” It argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.” The reasoning took the form of a rhetorical question: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”

It’s worth mentioning that many who signed this letter did not actually believe AI poses an existential risk; they simply wanted to draw attention to the various risks that worried them. The criticism was that “Many top AI researchers and computer scientists do not agree that this ‘doomer’ narrative deserves so much attention.”

The second open letter claimed AI is as risky as pandemics and nuclear war. It was initiated by the Center for AI Safety, which was founded by Dan Hendrycks and Oliver Zhang, and funded by Open Philanthropy, an Effective Altruism grant-making organization, run by Dustin Moskovitz and Cari Tuna (over 90% of CAIS’s funds). The letter was launched in the New York Times with the headline, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.”

Both letters received extensive media coverage. Robert Wiblin, former executive director of the Centre for Effective Altruism and current director of research at “80,000 Hours,” declared that “AI extinction fears have largely won the public debate.” Max Tegmark celebrated that the “AI extinction threat is going mainstream.”

These statements resulted in newspapers’ opinion sections being flooded with doomsday theories. In their extreme rhetoric, they warned against apocalyptic “end times” scenarios and called for sweeping regulatory interventions.

Dan Hendrycks, from the Center for AI Safety, warned we could be on “a pathway toward being supplanted as the earth’s dominant species.” (At the same time, he joined Elon Musk’s xAI startup as an advisor.)

Zvi Mowshowitz (of the “Don’t Worry About the Vase” Substack) claimed that “Competing AGIs might use Earth’s resources in ways incompatible with our survival. We could starve, boil or freeze.”

Michael Cuenco, associate editor of American Affairs, wanted to put “the AI revolution in a deep freeze” and called for a literal “Butlerian Jihad.”

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), demanded that we “Shut down all the large GPU clusters. Shut down all the large training runs. Track all GPUs sold. Be willing to destroy a rogue datacenter by airstrike.”

There has been growing pressure on policymakers to surveil and criminalize AI development.

Max Tegmark, who claimed “There won’t be any humans on the planet in the not-too-distant future,” was involved in the US Senate’s AI Insight Forum.

Conjecture’s Connor Leahy, who said, “I do not expect us to make it out of this century alive; I’m not even sure we’ll get out of this decade,” was invited to the House of Lords, where he proposed “a global AI ‘Kill Switch.’”

All the grandiose claims and calls for an AI moratorium spread from mass media, through lobbying efforts, to politicians’ talking points. When AI Doomers became media heroes and policy advocates, it revealed what is behind them: A well-oiled “x-risk” machine.

Since 2014: Effective Altruism has funded the “AI Existential Risk” ecosystem with half a billion dollars

AI Existential Safety’s increasing power can be better understood if you “follow the money.” Publicly available data from Effective Altruism organizations’ websites and portals like OpenBook or Vipul Naik’s Donation List demonstrates how this ecosystem became such an influential subculture: It was funded with half a billion dollars by Effective Altruism organizations – mainly from Open Philanthropy, but also SFF, FTX’s Future Fund, and LTFF.

This funding did NOT include investments in “near-term AI Safety concerns such as effects on labor market, fairness, privacy, ethics, disinformation, etc.” The focus was on “reducing risks from advanced AI such as existential risks.” Hence, the hypothetical AI Apocalypse.

2024: Backlash is coming

On November 24, 2023, Harvard’s Steven Pinker shared: “I was a fan of Effective Altruism. But it became cultish. Happy to donate to save the most lives in Africa, but not to pay techies to fret about AI turning us into paperclips. Hope they extricate themselves from this rut.” In light of the half-a-billion-dollar funding for “AI Existential Safety,” he added that this money could have saved 100,000 lives (based on a malaria-prevention calculation). Thus, “This is not Effective Altruism.”

In 2023, EA-backed “AI x-risk” took over the AI industry, AI media coverage, and AI regulation.

Nowadays, more and more information is coming out about the “influence operation” and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order.

In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.

Dr. Nirit Weiss-Blatt (@DrTechlash) is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.



Comments on “2023: The Year Of AI Panic”

Uriel-238 (profile) says:

Most of it in 2023 seemed to be "AI is coming for our jerbs!"

During the strikes in Hollywood, the studios really liked this idea that they could just get a machine to draw them Jar-Jar designs rather than some development artist on staff, and immediately started trying to get actors to surrender their rights to their own likeness and sound.

I imagined years ago (after the Diet Coke commercials featuring dead guys and Sky Captain and the World of Tomorrow) that someone would cyberthesbian up Humphrey Bogart and make us a new film-noir hard-boiled detective film with Bogey as the lead. It still kinda surprises me our Hollywood moguls didn’t think a proof-of-concept for AI-generated cinema would be something like that.

Instead they suggested to living actors they should surrender more rights and be glad to do it if they want to work in Hollywood.

In the meantime, the DoD is actively looking to make drones with armaments that can identify targets and autonomously attack them if all the parameters are right. AIs in simulations occasionally will kill their commanding officer if they are impeding completion of the mission. I agree this is less of a morality problem than a programming problem, but a lot of people in commanding positions believe it should be easy-peasy to program our learning-system-enabled drones to only kill the bad guys.

Gerry the Lizardperson says:

Re: Thank you

Thanks for trying to engage with the issue (instead of making fun of “strange” people).

You can order synthetic proteins right now. So if an AI, or even just a motivated bad person enabled by a future (smarter) LLM, wanted to, they could order lethal stuff quite easily. In fact, that’s possible right now, but the knowledge is rare (it’s somewhere on the internet, behind lots of hoops, so not a lot of people can do this). With an open-source AI (thus uncontrollable, as bad as can be, and we can’t train away the bad knowledge because it’s open source and trivial to get back), effectively everyone can suddenly order synthesised harmful substances.

If the AI wanted to be in charge it would be trivial for it to bribe someone in a position of relative power to act as a conduit as long as needed. Think of drug cartels run from a prison – just because you lock someone up without internet and phone doesn’t mean they won’t be able to achieve things if they put their mind to it.

There are many many ways for AI to win (or even just cause lots of harm), I believe. One of the ways is social engineering – there will always be someone longing for power/money/… to help the AI. Not at all science fiction, more like: give it a decade at most.

Anonymous Coward says:

Re: Re:

Proliferation of these models is inevitable. There is no actual “moat” in linear algebra, and we have effective linear-time attention systems now; you’ll be running open-weights GPT-4+-strength models on potatoes next Christmas.

If you’re so worried about bioterrorism, why aren’t you seeking to ban, or at least heavily regulate, university courses in virology?

Gerry the Lizardperson says:

Re: Re: Re:

Worried about bioterrorism – yes and no. There was a podcast hosted by Sam Harris that talked about some scientists putting the genome of the Black Plague or smallpox or something like that on the internet for everyone to see. And indeed there are many labs that will synthesise it, no questions asked. Right now. So I feel the field of biology hasn’t quite grappled with its enormous responsibilities in this area. Not to publish, in fact not to even research these things (looking at you, gain-of-function). Right now, heavily regulating this sector seems highly necessary, but we’re dropping the ball because there are “only” a few tens of thousands of people who know where to look and order this stuff that would indeed kill millions and ruin economies. Adding AI will only streamline this danger to many more people. So I’m against that.

But actually bioterrorism is not my primary worry. As I said, there are many, many ways for AI to cause widespread harm (as in death). I was just giving an example that needed the least amount of science fiction. Other dangers involve someone wanting some good thing (“Make me happy.”) with the AI doing exactly not that, e.g. by understanding “happy” as “drugged with meth all the time to maximise endorphins” (see the Sorcerer’s Apprentice as an analogy for misspecified goals). Or people just being plain evil (there is ChaosGPT right now, a free-wheeling GPT agent that has been tasked with causing chaos by some troll for the lulz). Or the one faction/nation/company that develops the first AI imposing its morals on everyone. Or see the non-small percentage of AI developers who actually, when asked, say it’s a good thing when humanity is replaced by AIs (as in: they’re actively working for every one of us to be killed).

I’m on mobile now, so it’s a lot of hassle to provide links for all my statements. Still, if you care to look, you can see. Basically: If you tell me you’re going to play chess against Magnus Carlsen, I can confidently predict that you will lose. I cannot tell you how Magnus will win, but the higher chess intelligence will win. Same with General AI. We need to take this seriously.

As for PotatoGPT – you’re quite right. However, the answer to this cannot, must not be “I guess we’ll just give up then.” It really is a devilishly complicated problem. If we survive aligning AI to human values, AND as a planet agree on which values would be shared by all in the US/China/EU/Russia/India/…, then we still need to figure out how to prevent the SECOND AI, one that’s not aligned, from harming us. AND how we don’t hand the keys to the kingdom to the next thug with an AI.
We really got ourselves a multi-layered problem here, with large-scale societal risks at every junction. We need to take this seriously indeed.

Heart of Dawn (profile) says:

Re: artificial intelligence, natural stupidity

Large Language Models are not intelligent in any way, shape, or form, and will most likely never be. They are simply stochastic parrots, imitators of human speech without comprehension.

The worries about Skynet coming to kill us all are completely overblown, and distract from the real problem with AI: relying on it more than we should.

By crediting AI with intelligence it frankly does not have, people are using it in ways that it should never be used, such as in court cases in the UK and the States, and even writing laws in Brazil.

Just because it strings together words that sound good doesn’t make it right. And given that it scrapes its training data from the internet, which is full of biases against women and minority groups, there’s a good chance that it’s going to get things wrong.

The more important the cases where it’s used, such as in writing laws, the more dangerous it’s going to be. We don’t have to ever fear a robot uprising, but for some of us the danger is much more insidious and real than that.

Gerry the Lizzardman says:

I disagree

Mike has summed up the media side of things. But honestly, the central “doomers” involved have articulated their concerns for more than a decade. What has changed is that things are starting to get rolling in AI now. It’s an uncomfortable niche, thinking about the dangers of AI when all we know is Clippy. I applaud them for having taken those low-paying jobs to think about the issues they care about! But now the first “serious” people are starting to think about the dangers of intelligence. No need for senate hearings over Clippy, but now there is!

Mike left out that one of the field’s founders has indeed left his well-paid job at Google (?) to be able to speak freely, and to warn people of the dangers ahead. He says he regrets his life’s work. How’s that for grift?

So far, all of TechDirt’s AI articles have made fun of the “doomers” but never even hinted at their arguments. I am disappointed, again.

Something more intelligent than me will best me, at least in the long run, I can guarantee that. We would be wise to take this insight seriously.

Gerry the Lizardperson says:

Re: Re: Re:

I noticed that after posting, too. The dangers of posting on mobile during breaks, I guess.

But apart from the name, my point still stands. And neither Anonymous Coward added anything to the discussion but snark. As I said: We would be wise to engage with the issue instead of making fun of people.

Anonymous Coward says:

Re: Re: Re:2

Ok fair. I shall engage with the argument. AI, as it exists today, doesn’t know anything besides the probability that a word (or group of words) is associated with another word (or group of words). It does not understand the concept of factual accuracy, and given that most AI training data was hastily scraped from reddit and Twitter, the accuracy of any result it might give you is suspect at best.

The AI doomer argument seems to be that applied statistics will in the future gain sentience and “decide” to eradicate humanity. This intentionally ignores the real harm that hastily implemented automation (“AI”) causes to real people right now. This includes automated loan and insurance claim denials, factually inaccurate “articles” and my personal favorite, legal filings citing non-existent cases.

Gerry the Lizardperson says:

Re: Re: Re:3

I don’t think so. I think I do read every article on AI topics via RSS as they are published. To check, I looked back at all articles tagged:

AI Risk: just one article (this one)
AI: I checked the first two pages of results. It’s basically all TechDirt’s standard topics (copyright, Section 230, trademarks). On these issues I think I mostly agree with TechDirt, by the way.

But this is not what I was talking about as “the issue” (AI development brings with it the not-small risk of large-scale harm – as in: death – and we need to really think about how to proceed, we as a society, dare I say global society, before we let some dudes in Silicon Valley permanently alter humanity’s fate). I don’t feel like I have read any article that lays the existential risks out and carefully replies to them, “here’s why that’s wrong,” step by step.

I’d love to read that here, though. As I said, the implications are way bigger than any Silicon Valley or even doomer bubble. Societies everywhere need to discuss this.
