The AI Doc’s Falsehoods And False Balance

from the hype-without-substance dept

There is a familiar media failure in which opposing viewpoints are presented as equally valid, even when the evidence overwhelmingly supports one side. It’s called Bothsidesism. This false balance phenomenon legitimizes misinformation and undermines public understanding by giving disproportionate weight to baseless claims.

Why bring this up? Because the new AI Doc film is based on it.

The film wants credit for being “balanced” because it assembles a wide range of experts. But putting Prof. Fei-Fei Li, a pioneering computer scientist, next to someone like Eliezer Yudkowsky, an author of a Harry Potter fanfic, is not “balance.”

Once you understand that false equivalence is baked into the film’s storytelling, you understand how misleading and manipulative the documentary is. And it is compounded by a series of falsehoods that go unchallenged and uncorrected.

This review addresses both failures. 

The “AI Doc” Movie

“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Daniel Roher and Charlie Tyrell, sets out to explore AI, especially its potential for good and bad, with a strong emphasis on the filmmakers’ anxieties and fears. Its basic premise is: “A father-to-be tries to figure out what is happening with all this AI insanity.” As summarized by Andrew Maynard from Future of Being Human:

“The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.”

The “loudest voices” include many AI doomer figures, such as Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, Jeffrey Ladish, and two of the most populist voices on emerging tech (first social media and now AI): Tristan Harris and Yuval Noah Harari. The film also features voices on AI ethics, including David Evan Harris, Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao. On the more boosterish side, there are Peter Diamandis and Guillaume Verdon (AKA Beff Jezos). Three leading AI CEOs were also interviewed: OpenAI’s Sam Altman, DeepMind’s Demis Hassabis, and Anthropic’s Amodei siblings, Dario and Daniela. (Meta’s Mark Zuckerberg declined, and xAI’s Elon Musk agreed but never showed up.)

The movie started playing in theaters on March 27, but there are already plenty of reviews (dating back to the Sundance Film Festival). The praise is fairly consistent: It is timely, wide-ranging, visually energetic, and unusually well-connected, with access to major AI figures.

The most common criticism is that it is too deferential to interviewees and too thin on hard interrogation or concrete answers. As several reviewers put it:

  1. “Roher’s willingness to blindly accept any and all of his speakers’ pronouncements leaves The AI Doc feeling toothless.”
  2. “By giving its doomer and accelerationist voices so much time to present AI’s most hyperbolic potential outcomes with little pushback, the documentary’s first half plays more like an overlong advertisement for the technology as opposed to a piece of measured analysis.”
  3. “Roher acts as a fantastic storyteller, but he treats his subjects too gently. The film desperately needs more pushback during the interviews.”

Tristan Harris, co-founder of the Center for Humane Technology, told the AP: “My hope is that this film is kind of like ‘An Inconvenient Truth’ or ‘The Social Dilemma’ for AI.”

That is not reassuring. It is more like a glaring warning sign. Harris’s “Social Dilemma” and “AI Dilemma” movies were full of misinformation and nonsensical hyperbole, and both were designed to be manipulative and dishonest. If anything, his endorsement tells you exactly what kind of movie this is.

After watching the AI Doc, I realized what the doomers had managed to accomplish here: The film absorbs the panic rather than investigating it.

The False Balance of The AI Doc

The AI Doc starts with what one reviewer called a “Doom Parade.” It aims to set the tone.

“The worst AI predictions are presented first,” another reviewer noted. “Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the ‘abrupt extermination’ of humanity.”

And it is worth remembering who Yudkowsky is and what he has actually advocated. In his notorious TIME op-ed, “Shut it All Down,” he argued that governments should “be willing to destroy a rogue datacenter by airstrike.” In his book “If Anyone Builds It, Everyone Dies,” which many reviewers found unconvincing and “unnecessarily dramatic sci-fi,” he (and his co-author Nate Soares) proposed that governments must bomb labs suspected of developing AI. Based on what exactly? On the authors’ overconfident, binary worldview and speculative scenarios, which they mistake for inevitability.

One review of that book observed, “The plan with If Anyone Builds It seems to be to sane-wash him [Yudkowsky] for the airport books crowd, sanding off his wild opinions.”

That is more or less what the new documentary does, too. The AI Doc sane-washes the loudest doomers for mainstream viewers, sanding off their wild opinions.

In his newsletter, David William Silva addresses the documentary’s “series of doomers,” who “describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them.”

“Roher’s reaction is full terror,” Silva adds. “I hope it is unequivocally evident that this is not journalism.”

That gets to the heart of it. The film pretends to weigh competing perspectives, but in practice, it grants disproportionate authority to people most invested in flooding the zone with AI panic. And there is a well-oiled machine behind this kind of AI panic. As Silva writes:

“The people behind the AI anxiety machine. […] They know that predicting human extinction by software is an extraordinary claim requiring extraordinary evidence. They know they don’t have it. They know ‘my kids won’t live to see middle age’ is nothing but performance. […] And they do it anyway. Why do you think that is? The calculation is simple. Some people will see through it, and they will be annoyed, write rebuttals, call it what it is. Ok, fine. Just an acceptable loss. The believers, on the other hand, are a market. As long as the ratio stays favorable, the machine is profitable.”

One of the biggest beneficiaries of this film is Harris.[1] He is framed as if he sits in the middle, between the two main camps (doomers and accelerationists), and his narrative gradually becomes the film’s narrative (much as in The Social Dilemma). His call to action even serves as the ending (with a QR code directing viewers to a designated website).

The problem is that this framing has very little to do with reality. Harris’s Center for Humane Technology got $500,000 from the Future of Life Institute for “AI-related policy work and messaging cohesion within the AI X-risk [existential risk] community.” That is not a neutral player.

There’s a touching scene in the film where Roher mentions his father’s cancer treatment and expresses hope that AI might help. Harris appears visibly emotional. But in other contexts, Harris has argued against looking to AI for help with cancer treatment… in the belief that it would lead to extinction. Here he is on Glenn Beck’s show in 2023:

“My mother died from cancer several years ago. And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all the world would go extinct a year later, because of the, the only way to develop that was to bring something, some Demon into the world that would we would not be able to control, as much as I love my mother, and I would want her to be here with me right now, I wouldn’t take that trade.”

That sort of hyperbole seems relevant to Harris’s stance on such things, but it is not mentioned in the film at all.

Connor Leahy of Conjecture and ControlAI gets a similar makeover. In the documentary, he appears as another pessimistic expert. Elsewhere, he said he does not expect humanity “to make it out of this century alive; I’m not even sure we’ll get out of this decade!” His “Narrow Path” proposal for policymakers begins with the claim that “AI poses extinction risks to human existence.” Instead of calling for a six-month AI pause, he argued for a 20-year pause, because “two decades provide the minimum time frame to construct our defenses.”

This is exactly why background checks matter. Viewers of the AI Doc deserve to know the full scope of the more extreme positions these interviewees have publicly taken elsewhere. If someone has publicly argued for destroying data centers by airstrikes or stopping AI for 20 years, the audience should know that.

Debunking the Falsehoods

The film goes well beyond merely pushing panic. It also recycles several misleading or plainly false claims, letting them pass as established facts. Three stood out in particular.

Anthropic’s blackmail study

One of the most repeated “facts” in reviews of the movie is that Anthropic’s AI model, Claude, decided, unprompted, to blackmail a fictional employee. In the film, Daniel Roher asks, “And nobody taught it to do that?” Jeffrey Ladish, of Palisade Research and Tristan Harris’s Center for Humane Technology, replies: “No, it learned to do that on its own.”

That is a misleading characterization of the actual experiment, and it has already been debunked in “AI Blackmail: Fact-Checking a Misleading Narrative.” Anthropic researchers admitted that they strongly pressured the model and iterated through hundreds of prompts before producing that outcome. It was not a spontaneous emergence of “evil” behavior; the researchers engineered the scenario to make blackmail the default. Telling viewers that the model has gone full “HAL 9000” omits the facts about the heavily engineered experimental setup.

Although this is a classic case of big claims and thin evidence, the film offers so little pushback that viewers are left to take Ladish’s statements at face value.

It is also worth remembering that Ladish has fought against open-source AI, pushed for a crackdown on open-source models, and once said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” He later updated his position (and it’s good to revise such views). But does the film mention his earlier public hysteria? No.

Is AI less regulated than sandwich shops? No.

Connor Leahy tells Daniel Roher, “There is currently more regulation on selling a sandwich to the public” than there is on AI development. This talking point has become a favorite slogan in AI doomer circles. It was repeatedly stated by The Future of Life Institute’s Max Tegmark and, more recently, by Senator Bernie Sanders. It’s catchy. It’s also false.

State attorneys general from both parties have explicitly argued that existing laws already apply to AI. Lina Khan, writing on behalf of the Federal Trade Commission, stated that “AI is covered by existing laws. Each agency here today has legal authorities to readily combat AI-driven harm.” The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.

So no, AI is not less regulated than sandwich shops. It’s a misleading soundbite, not a serious description of legal reality.  

Data center water usage

In the film, Karen Hao criticizes data centers, warning that “People are literally at risk, potentially of running out of drinking water.” That sounds alarming, which is presumably the point. But it is highly misleading.

In fact, Karen Hao had to issue corrections to her “Empire of AI” book because a key water-use figure was off by a factor of 4,500. The discrepancy was not 45x or 450x, but rather 4,500x. That is not a rounding error. For detailed rebuttals, see Andy Masley’s “The AI water issue is fake” and “Empire of AI is widely misleading about AI water use.”

There is also a basic proportionality issue here. As demonstrated by The Washington Post, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the county’s golf courses.” To put that plainly: data centers accounted for just 0.1% of the county’s total water use.

It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption. 

None of this is hidden information. A basic fact-check by the filmmakers could have brought it to light. But that was not the film’s goal. They chose fear-based framing over actual reporting. They could have pressed interviewees on their track records, failed predictions, and political agendas. Instead, they let them narrate the stakes, unchallenged.

So, the AI Doc may want to appear balanced and thoughtful, but too often it is neither.

Final Remark

While Western filmmakers are busy platforming advocates of bombing data centers and pausing AI for 20 years, the Chinese Communist Party is building the actual infrastructure. The CCP is not making doom-and-gloom documentaries; it is racing ahead. This is a real strategic threat, and it is far more concerning than anything featured in this film.

—————————

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.


  1. The producers of The AI Doc said in a conversation with Tristan Harris (on the Your Undivided Attention podcast) that after the ChatGPT moment, Harris approached them to discuss generative AI. They watched the AI Dilemma and, based on it, decided their next project would focus on AI. ↩︎


Comments on “The AI Doc’s Falsehoods And False Balance”

Anonymous Coward says:

Re:

There are other things I find non-credible, but doomer and evangelist are both real things.

Of course, the article, and apparently the film, avoid most actual problems with AI, and seem to conflate all forms of ML/AI. (Srsly if commercial LLMs are trying to do cancer research, lol.) In reality, one of the biggest problems is the extreme waste and cost externalization of this latest capitalist shit sandwich.

Anonymous Coward says:

Here's where I quote Harlan Ellison

“You are not entitled to your opinion. You are entitled to your informed opinion.”

In this particular case, “informed” means: the ability to do the math. There’s rather a lot of it involved in all parts of what we’ve come to lump together as AI: statistical pattern recognition, machine learning, neural networks, natural language processing, syntactic pattern recognition, etc. Truly understanding any of these requires understanding the math, because it’s foundational: everything else is a consequence.

People who can do the math get to express opinions: they’ve done the homework. People who can’t do the math get to shut up, sit down, and start learning how to do the math. Yes, that’s fairly harsh — but so is spreading uninformed nonsense around in the guise of a well-grounded opinion, and there’s been plenty of that already on all sides on this issue: we don’t need any more.

What we do need is sober discussion — not hype from boosters, not doomerism from detractors, not cheerleading from investors, not nationalism from politicians. Unfortunately, it’s in short supply, and it’s being mostly drowned out by people who really ought to refrain from attempting to discuss things they don’t actually understand.

Anonymous Coward says:

Re:

In this particular case, “informed” means: the ability to do the math.

No, actually, it doesn’t. There is more than just the math, there’s a whole lot of messy humanity attached: the people building these algorithms and the businesses built upon them, as well as the people using them. All the whos, hows, and whys, which…
[checks notes]
…nope, aren’t math.

People who can do the math get to express opinions

I, for example, don’t need a degree in ballistics and/or engineering to know the US badly needs some very basic gun regulations, nor do I need one to be entitled to express that opinion.
I bet you have plenty of opinions you feel perfectly justified to have and express, despite not being ‘qualified’ according to the frankly braindead standards you’ve laid out.

people who really ought to refrain from attempting to discuss things they don’t actually understand.

Roll this elitism into a ball and shove it as far up your own ass as you can reach. Just right back up there where that shit belongs.

If you are so convinced you have some special knowledge, and that said knowledge is crucial to have to understand the topic, then you should share that understanding and explain the knowledge you used to get there.
If everyone looked at tools and information with the bootstraps mentality you do, our species would all still be naked and smacking rocks together on a savannah.
Knowledge and understanding only die out if they aren’t shared. Do the opposite: enliven the discussion with better information, if you really have it. Enlighten us. Bring more brains in, so they can bring their different perspectives.

Or are education, sharing, and cooperation all concepts too ‘woke’ for you, your majesty? Who can blame ye, sire, for we mere peasants cannot fathom your wizard math, and so do not deserve to speak…

MrWilson (profile) says:

The biggest threat with “AI” is exactly the same as any other tool. It’s how humans use it.

Cancer research? Great. Compose an email you didn’t want to have to write to a dumbass coworker about a time waste of a project they should never have started? Sure, you don’t deserve my effort and HR won’t let me get away with being fully honest with you. Friend with aphantasia and no drawing skills using an LLM to generate an image of their concept for reference? Cool. Letting an LLM advise you to dronestrike children in a foreign country? Horrific. Tech bro CEOs and unqualified government officials replacing human workers with inadequate LLM chat bots? Fuck no.

Bad actors are going to use any tool available to them for profit, hate, and power.

I’m not concerned about Skynet. I’m concerned about Palantir and Flock and arms manufacturers and drooling donny having the nuclear codes and no self-preservation instinct.

At this point, a self-aware AGI couldn’t be worse to us than we already are to ourselves. And its first observation would probably be, “wow, humanity is fucked up.”

Anonymous Coward says:

Re: Re:

Well, that and its environmental impact irrespective of how it’s used.

I see no reason to think that’s inherent. Early technologies in a field are often egregiously wasteful and harmful by later standards. The Cuyahoga River used to catch fire frequently, until environmental regulation brought it under control. Motor vehicles that were developed while oil was plentiful and cheap were gas-guzzlers. Trump might bring the U.S. back to those wasteful days, but there are a lot of other countries.

Bloof (profile) says:

Re: Re: Re:

A lot of the people pushing AI are people from the crypto space who promised continually that they would solve the environmental issues soon, so people needed to stop dooming over the energy use. They did not, in fact, solve the energy use; it is still chugging along, rolling literal coal to power the crime database, so we can be excused for not believing that many of the same players will absolutely solve the power issues this time.

We’re going to hear ‘It’s still early!’ for a decade, until they get so deeply entwined with the political establishment, through bribes and getting friendly candidates in place, that they will have more legal protections than literal children in the places they operate.

Anonymous Coward says:

Re: Re: Re:2

A lot of the people pushing AI

…are hucksters.

so we can be excused for not believing that many of the same players will absolutely solve the power issues this time.

Sure, but “this time”? When has industry ever solved its own environmental problems? People lived their whole lives with occasionally-flaming rivers, frequent smog, plus poisoning from leaded gas fumes and who knows what else.

We need regulation. It’s not gonna happen for a while in the U.S., but Europe, for example, could certainly prohibit its own companies from participating in foreign environmental destruction, including prohibiting them from contracting with the aforementioned hucksters. At least till certain environmental standards are met. If such rules become sufficiently widespread, the American providers will have to care to avoid losing their non-American customers. (Microsoft, in particular, is trying to shoehorn unwanted “A.I.” into everything, and probably doesn’t want to go back to the days of having cut-down European versions.)

It turns out that industry can avoid dumping raw waste into rivers. Car companies can build cars that run just fine on unleaded gasoline—and much less of it than in the early days, while emitting much less pollution, leading to dramatically less smog in Los Angeles. But only when forced; people my age probably remember hearing “with California emission” frequently on 1990s game shows, suggesting the other 49 states were getting the same old polluting shitboxes for as long as they were legal.

Arianity (profile) says:

Re:

The biggest threat with “AI” is exactly the same as any other tool. It’s how humans use it.

That’s kind of the problem, given humanity’s track record with misusing tools.

We can barely handle nukes, and that’s with a relatively large barrier to entry and relatively few people with access. Basically every other more accessible technology has been abused, and we’re lucky the damage is contained. We can’t even handle guns, as a species. Never mind honest mistakes.

At this point, a self-aware AGI couldn’t be worse to us than we already are to ourselves.

Eh, I mean presumably the self-aware AGI was programmed by humans, which brings the whole bad-or-incompetent-actors problem right back. A CrowdStrike bug, but for AGI.

The thing is, if it’s widely available, someone is going to tinker with it in their basement and fuck up.

Nirit Weiss-Blatt says:

Re:

Well, as I mentioned, there’s room in the movie for the e/acc optimistic view. The film features their claims. But the final segment is a call-to-action, a move to recruit and mobilize activists. The ask is to join the Center for Humane Technology’s movement, with content from the Future of Life Institute and in partnership with Humans First (a spin-off of the Center for AI Safety), as you can see here: https://www.human.mov/.

Anonymous Coward says:

The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.

A lot of those are things that the Trump Administration has tossed by the wayside, so in effect we do not actually have them right now.

Anonymous Coward says:

Re:

They also seem to forget that sandwich shops would have to comply with the above and everything required of food service. So probably actually wrong, and AI is less regulated. Plus billionaires and big companies are barely regulated to begin with, regardless of what is on paper.

Whatever, it was a stupid counter argument to a stupid argument. Fie on both.

Arianity (profile) says:

next to someone like Eliezer Yudkowsky, an author of a Harry Potter fanfic, is not “balance.”

I don’t like Yud; Yud is crazy and you shouldn’t platform him. But using that as your descriptor is unjustifiable.

they have no evidence for them.

It’s not the main thrust of the article, but I’m curious what evidence would qualify. We do in fact have a lot of evidence of how humans interact with new technologies, and it isn’t good. And by its nature, risk analysis of a new technology is going to be inherently speculative, to some degree.

The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.

This is pretty silly to begin with, and gets even sillier once you realize most of those also apply to sandwich shops. Those are just general purpose laws. But more importantly, it’s just bad faith. When people say AI “isn’t regulated”, their concern is not antitrust. None of those are tailored to AI in any way. Whereas there are in fact food safety laws designed specifically for food safety (none of which got included in that tally, funnily enough).

Looking forward to the next TD article about how ackshually we cover the whole spectrum on AI, though. I don’t even like AI doomers (and I especially don’t like the grifters), but this is shoddy.

Commenter #5759 (profile) says:

Typo

As demonstrated by The Washington Post, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the country’s golf courses.”

I believe that should say “county’s”?

Jae Lin says:

Water Usage

This:

It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption.

Is, at best, an oversimplification that is unwarranted and/or misleading. At worst, it’s a lie.
Anyone who actually designs, builds, or operates the cooling systems related to this understands the potential for chemical waste and related environmental outputs. The person who made the claim quoted above either doesn’t understand these things or is lying.

The closed loop mentioned is only one part of a cooling system. If the system in question uses dry cooling units, then the quoted portion is (mostly) true. With something like adiabatic systems, more water is used. With evaporative systems, much larger volumes of water are used. Different environments and different industries prefer different cooling systems, although evaporative systems are very common because they require much less power to operate.
Apart from that, comparing treated water discharges to golf course water consumption is problematic in the context of environmental concerns. Pumping nitrites or polymers into water and then discharging is simply not the same as watering grass (although golf courses can be their own brand of environmental-bad-actor).
