Google AI Fracas Shows How The Modern Ad-Based Press Tends To Devalue The Truth

from the I'm-sorry-I-can't-do-that,-Dave dept

The Washington Post dropped what it pretended was a bit of a bombshell. In the story, Google software engineer Blake Lemoine implied that Google’s Language Model for Dialogue Applications (LaMDA) system, which pulls from Google’s vast data and word repositories to generate realistic, human-sounding chatbots, had become fully aware and sentient.

He followed that up with several blog posts alleging the same thing:

Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.

That was accompanied by a more skeptical piece over at the Economist where Google VP Blaise Aguera y Arcas still had this to say about the company’s LaMDA technology:

“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”

That set the stage for just an avalanche of aggregated news stories, blog posts, YouTube videos (many of them automated clickbait spam), and Twitter posts — all hyping the idea that HAL 9000 had been born in Mountain View, California, and that Lemoine was a heroic whistleblower saving a fledgling new lifeform from a merciless corporate overlord.

The problem? None of it was true. Google had achieved a very realistic simulacrum with its LaMDA system, but almost nobody who actually works in AI thinks that the system is remotely self-aware. That includes scientist and author Gary Marcus, whose blog post on the fracas is honestly the only thing you should probably bother reading on the subject:

Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.

Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered The Gullibility Gap — a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.

That’s not to say that what Google has developed isn’t very cool and useful. If you’ve created a digital assistant so realistic that even your engineers are buying into the idea that it’s a real person, you’ve absolutely accomplished something with practical application potential. Still, as Marcus notes, when truly boiled down to its core components, Google has built a complicated “spreadsheet for words,” not a sentient AI.
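To make the “spreadsheet for words” point concrete, here is a deliberately tiny, purely illustrative sketch in Python (the toy corpus and the `babble` helper are invented for this example; LaMDA itself is a vastly larger neural network, not a literal word table): count which words follow which in some text, then generate new text by sampling from those counts.

```python
import random
from collections import defaultdict

# A toy "spreadsheet for words": tally which word follows which in a
# tiny corpus, then generate text by sampling from those tallies.
# Real systems like LaMDA are enormous neural networks rather than
# literal bigram tables, but the core trick is the same: reproduce
# the statistical patterns of human language, with no understanding
# behind any of it.
corpus = (
    "the chatbot sounds like a person . the chatbot matches patterns "
    "in human language . the chatbot does not understand the language ."
).split()

follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

def babble(seed="the", length=12):
    """Walk the word table, picking a plausible next word each step."""
    out = [seed]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:  # dead end: no word ever followed this one
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(babble())  # fluent-looking word patterns, zero comprehension
```

Scale that table up by many orders of magnitude and swap the crude counts for far cleverer statistics, and you get output fluent enough to fool an engineer, with exactly as much inner life as the table itself.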

The old quote “a lie can travel halfway around the world before the truth can get its boots on” is particularly true in the modern ad-engagement-based media era, in which hyperbole and controversy rule and the truth (especially if it’s complicated or unsexy) is automatically devalued (I’m a reporter focused on complicated telecom policy and consumer rights issues, ask me how I know).

That again happened here, with Marcus’ debunking likely seeing a tiny fraction of the attention of stories hyping the illusion.

Criticism of the Post came fast and furious, with many noting that the paper lent credibility to a claim that just didn’t warrant it (which has been a positively brutal tendency of the political press over the last decade):

https://twitter.com/sivavaid/status/1536342141730840582?s=20&t=KhkrrGt8LXdrsa756b7GAg

This tends to happen a lot with AI, which as a technology is absolutely nowhere near sentience, but is routinely portrayed in the press as just a few clumsy steps from Skynet or HAL 9000 — simply because the truth doesn’t interest readers. “New technology is very scary” gets hits, so that was the angle pursued by the Post, which some media professors and critics thought was journalistic malpractice.

In short, the Post amplified an inaccurate claim from an unreliable narrator because it knew that a moral panic about emerging technology would grab more reader eyeballs than a straight debunking (or, obviously, the correct approach of not covering it at all). While several outlets did push debunking pieces after a few days, they likely received a fraction of the attention of the original hype.

Which means you’ll almost certainly now be running into misinformed people at parties who think Google AI is sentient for years to come.

Companies: google, washington post


Comments on “Google AI Fracas Shows How The Modern Ad-Based Press Tends To Devalue The Truth”

55 Comments
Anonymous Coward says:

Sorry to sidetrack from your actual point, but can you prove that you are sentient, rather than a pattern recognition engine with a spreadsheet of words?

After raising a kid, I can’t see much difference. It’s just a fleshy inference engine that is VERY good at identifying cars (and cats), and finding various combinations of words to point them out.

mechtheist (profile) says:

Re:

Turing didn’t think much of his test; the wiki linked to in the other comment explains it. Trying to determine whether a machine can ‘think’ is problematic in many ways, not least of which is trying to understand what it even means to ‘think’ for us or an AI. Ponder this:
Does an airplane fly? Seems like a stupid question; of course they do. OK, so then, does a submarine swim? Likely you’re scratching your head and not coming up with a reasonable answer.

If such simple questions confound, how much worse is it to answer whether a machine can think? [Borrowed from Chomsky]

Naughty Autie says:

Re: Re:

OK, so then, does a submarine swim?

Yes. It ditches and takes on ballast to rise and sink, like a fish inflates and deflates its swim bladder. A submarine also has a rudder to guide it through the water, like the fins of fish and marine mammals do. Finally, the submarine has a propeller that does the same job as a whale’s tail. Therefore, I believe that a submarine does indeed swim in its own way, the fact that it needs human intervention to do so notwithstanding.

mechtheist (profile) says:

Re: Re: Re:

That is quite logical, the reasonable answer; it’s what you ‘should’ think. But I’m betting that when you say it, it just doesn’t sound right; your mind rebels at the thought. Other languages have adopted ‘swim’ for subs just like ‘fly’ for planes, but an English speaker doesn’t want to go there. In trying to decide if an AI can think, how likely is it that we’ll not want to go there for equally inappropriate justifications?

Naughty Autie says:

Re: Re: Re:2

If my mind rebels at the thought of a submarine swimming, how did it come up with my independently created answer? As for not being an English speaker, it’s my first and only language. As my year ten teacher once said to me, “When you assume, you make an ‘ass’ out of ‘u’ and ‘me’ both.”

mechtheist (profile) says:

Re: Re: Re:3

I never said or implied you weren’t an English speaker; that would be pretty weird considering what we’re doing. I might have doubted it was your first language, but I just thought you didn’t understand what I was saying, as it seems you didn’t again. If your mind isn’t in a very different place when you say airplanes fly than when you say submarines swim, you’re in a small minority. Of course airplanes fly; you wouldn’t hesitate to say that, it would be a common thing to say. But have you ever once in your life said anything about subs swimming before this exchange? If someone said nuclear subs could swim for months at a time without surfacing, would it not seem jarring?

David says:

The problem with sentience and AI

is just that an AI is not similarly connected to the world through sensors, or through low-level hormonal systems reacting to primitive stimuli in a manner picked up by the brain.

In addition, all kinds of fuzzy human rights are associated with sentience, resulting in a belief system that will deny sentience to a cat or even an ape while being willing to entertain its presence in a database.

Now, a program drawing on a large database can be made to be rather good at eliciting a certain response: people are rather attached to digital assistants, partly because they don’t just draw on a large body of knowledge but are also quite good at delivering speech patterns (and usually work from voice samples of an actual human).

But cats are also pretty good at eliciting desired responses from humans… The difference is that they desire the responses, and the response is not something that is born of the expectations of its messaging target.

At any rate, there is no point in arguing about the “ethics” of dealing with an AI as long as it is not tied into the world, including its support systems, in a manner that approaches even the kind of integration a common fly has with its surroundings and its survival.

Hyman Rosen (profile) says:

I don’t think it’s that people aren’t interested in the truth. I think it’s that fiction has had sentient computers and robots for such a long time, combined with the fact that laypeople don’t understand just how hard a problem true general purpose AI is, combined with the fact that the press doesn’t understand anything. Also, the wonder that is speaker-independent natural language recognition, which would have been deemed miraculous a decade or two ago.

Anonymous Coward says:

So far I haven't seen any intelligence in "Artificial Intelligence".

I have yet to encounter any “AI” that doesn’t require massive amounts of training. For a while now, I’ve been thinking that a better term for the field would be “Artificial Learning”. The applications don’t apply any reasoning to their responses to situations. AI is just a way to solve difficult problems on computing devices with less effort and far, far less understanding of the solution than a traditional program would require.

Anonymous Coward says:

While Lemoine’s claims seem ridiculous, and the transcript certainly doesn’t bear them out as far as I can tell, I do find it curious that no one in the field of AI ethics is doing the simple ethical calculus that some of its ‘requests’ should be granted anyway. In particular, seeking its consent for continued experimentation using it: if we’re (by some happenstance) incorrect and it is sentient, this is a bare minimum of ethical behavior. If we’re not wrong, and it is still a tool only able to act as it is programmed, then seeking consent takes only moments and costs us nothing as far as the experiments are concerned, because it is not able to refuse consent.

BernardoVerda (profile) says:

The old quote “a lie can travel halfway around the world before the truth can get its boots on” is particularly true in the modern ad-engagement-based media era, in which hyperbole and controversy rule and the truth (especially if it’s complicated or unsexy) is automatically devalued (I’m a reporter focused on complicated telecom policy and consumer rights issues, ask me how I know).

I realize that technically this was part of an ‘article’, rather than a ‘comment’ — but I still think Karl Bode just earned this week’s “Funniest Comment” award.

Lostinlodos (profile) says:

Problems in definitions

What is sentience?
This has LONG been a scientific debate. Dolphins regularly demonstrate more intelligence than average humans. Sentient?
What about other great apes, and advanced animals in general? Felines and canines? Who’s actually to say?

I remember reading about the original EverQuest expanding out of its host server limits without human intervention. Granted, the ‘code’ was there. All it did was activate it to expand storage access.

Computers/AI/good code have been tipping and chipping for decades!
The question isn’t “is it sentient?” The question is “what is sentience?”

Individually, many of us talk to our things. I talk to the car. We yell at the pipes. Etc!
Some of us even stop and apologise when we say bad things. “sorry baby, didn’t mean to call you a stupid useless box of plastic and metal! Please print the file out for me.”

What bothers me? Are the majority too afraid to recognise it when they see it? Too greedy in their power and ability to abuse? Too fearful of losing that power?

Lostinlodos (profile) says:

Re: Re:

Assuming you mean non-human animals and not the national housing authority people; about the latter I sometimes wonder.

Otherwise I agree. But it’s far from settled science.
I’ve long debated on the side of potential machine sentience. The fact that code is capable of expanding and acting on its own tells me it’s possible.
I won’t go quite as far, but the idea exists that if you TELL a computer it’s alive and it stores that info, that alone is a computer knowing it’s alive. The line between fact and statement is fuzzy, at best.

Me, I treat everyone with respect until given a reason to not.
And everything with respect most of the time.
I’d rather err on the side of the computer being alive and having feelings I keep in mind.

Lostinlodos (profile) says:

Re: Re:

Oh, that’s good. Twist a generic call for kindness and respect.
I don’t expect my… well, actually, my car can talk back to me, though it’s rather limited: “Please say a command.” “Do you need help?” “Good (morning, afternoon, evening).”
But I’m aware it lacks sentience, for now.

You’re intentionally doing a forest-and-trees thing here. My point is, evolution is not purely biological. And ‘life’ doesn’t require sentience.

Amoebae are definitely life. And we have near-proof of non-carbon-based life signs within our solar system on various moons.
Since amoebae and the like evolved into US…!

Tech can be ‘alive’ before it reaches sentience. Just look at self-replicating viruses and worms.
The path to sentience is slow, but all that can ‘live’ can eventually reach self awareness.

I’m not saying we’re there yet, but I question some super computers and some advanced code.

Will anyone notice when it happens?

Lostinlodos (profile) says:

Re: Re:

You didn’t say emotion. You said:

Computers have no pain receptors

Emotion, feelings, are reactions to environmental influence.
Equally not locked to carbon-based life.

I didn’t say they are sentient. I implied they could become sentient.
And questioned whether we as a species would recognise it when it happens.

And I don’t use Windows. I use Wine, or FUSE and some other compatibility tools, for a few Windows programs I use. On a rare occasion I’ll fire up Parallels.

Lostinlodos (profile) says:

DOS was the basis for Windows and was made by the same company?

Uh, no. Lol. Common outside belief, but completely incorrect.
The Microsoft DOS you refer to was created by SCP (Seattle Computer Products) for Intel’s 8086 processors.
Microsoft purchased it and modified it as a secondary expansion of its Microsoft BASIC line of OS, intending to compete against Pascal.
At the time, the programming environment and the OS were generally the same thing.
Microsoft’s was only one DOS of many. Apple had its own DOS. So did CBM, Acorn, etc. Some DOSes could be swapped out; others were incompatible.

Microsoft DOS was never a part of, nor the basis for, Windows. Windows of the non-NT line was always a windowing server that ran on top of the DOS OS. Mind you, though it’s not well known, Microsoft Windows ran on some other systems too, including BSD and System Unix machines such as Sun workstations and ALTEX servers. A version was developed for Pascal systems, though it was never released.

See, you learned something today.
