The Unperson Of 2023

from the sentences-do-not-imply-sentience dept

2023 is over. Taylor Swift was Time’s Person of the Year, beating out candidates like Jerome Powell, who may have stuck the economic soft landing but can’t hit the high notes. Only a fool would challenge the decision, but I would like to nominate 2023’s Unperson of the Year: ChatGPT, the neural-network-based large language model that launched only 13 months ago and took the world by storm. My claim is not that we need to pay more attention to it; between jeremiads about risks ranging from plagiarism and mass unemployment to the annihilation of the human species, we haven’t been able to shut up about it. Instead, we need to pay a different kind of attention. Something important just happened, and I am not sure we noticed.

For the first time in history, humans had a world-altering fact forced on them: sentences do not imply sentience. The crown jewel of our species, the quality that supposedly entitled us to our special moral status, is the ability to manipulate, fluently, highly complex and abstract language about a near-infinite number of subjects. That ability is, undeniably, no longer confined to humans. ChatGPT did what parrots with large vocabularies and chimps that have learned ASL could not: it produced language that might pass as human-created. It did so not in a philosophy hypothetical or a computer lab, but in real time, for hundreds of millions of people.

Most of us believe that being human confers a special moral status, even if we disagree about how much that status allows us to prefer our interests over those of other living things. But why? Some believe it is because of a divine command that gave us the world and its inhabitants in sole and complete dominion. But those seeking a secular justification had to root it in our capabilities: something we have or do that makes us unique. There have been many candidates, from tool use, to a conception of past and future, to notions of morality and beauty; all have been challenged by studies of non-human animals suggesting they are far more capable than we once imagined. But the ability we principally focus on is language, or at least the kind of consciousness that language seems to show. More than 2,300 years ago, Aristotle laid out the basic argument. Language, he claimed, allows reasoning about expediency: how best to achieve our goals. But it also enables reasoning about justice: which goals are right and just. This is why he believed that only the human species has morality. From that capacity come the particular types of morally freighted association so important to Greek philosophers, the family and the polis, but also the state’s version of morality: the law. The human being is a moral, social being. And language is the root of it all.

More than two millennia later, when Alan Turing tried to answer the question “can machines think?”, he turned to the same capability for his answer. The “imitation game,” popularly known as the Turing Test, proposed that the best assessment of whether a machine could think would be its ability to converse fluently enough to fool a human. That was more than seventy years ago, and we now know he was wrong. Large language models like ChatGPT can do exactly that, yet they are “predict the next word” machines — masters of syntax, not semantics. They are not conscious. (This does not mean machine consciousness is impossible.) Our deeply ingrained reflex to impute consciousness to language-users will tell us otherwise. Blake Lemoine, a Google engineer, was the first to succumb: he became convinced Google’s chatbot was conscious. (Google fired him.) Most saw it as just a funny story, but it was a harbinger. Chatbots feature most prominently in our lives as writing aids, research assistants and cheating tools. But their real significance has eluded us: they pose a fundamental challenge to the unquestioned species-exceptionalism on which our vision of the world depends.
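To make “predict the next word” concrete, here is a toy sketch in Python (my own illustration, not how ChatGPT is actually built; real LLMs use transformer neural networks over subword tokens, but the principle of statistical continuation is the same): choose each next word purely from word-pair frequencies in the training text, with no grasp of meaning.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    next_words = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        next_words[current][following] += 1
    return next_words

def generate(next_words, start, length=5):
    """Emit the most frequent continuation at each step: pure syntax, no semantics."""
    out = [start]
    for _ in range(length):
        followers = next_words.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))  # fluent-looking word salad, zero understanding
```

Scaled up from word pairs to billions of parameters trained on much of the web, the same move of continuing text plausibly becomes eerily fluent; nothing in the process requires semantics.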

How do we respond? Four approaches present themselves: refinement, denial, humility, and reflection.

Refinement: First, we could refine our vision of consciousness, insisting on semantic, not just syntactic, comprehension. Humans are still special, but the grounds for their specialness have changed. Maybe an “embodied intelligence,” for example, an AI incarnated as a robot that learns meaning by interaction with the world, would pass a Turing-plus test. Maybe an image-generator that “grew up” experiencing the world in multiple ways, not merely scanning pictures of it, would cross our threshold for art. The cognitive scientists George Lakoff and Mark Johnson make a decent case that human consciousness depends on exactly such an “embodied mind,” and some computer scientists are pursuing its machine analogues, trying to develop robots that learn from interaction with the world as children do. Not convinced? Maybe there is some other characteristic that we have and machines have not yet achieved. There is a nagging worry, though. Are we just nervously redrawing the boundaries of our species-island again and again as the encroaching tide creeps higher? We have done that before; consider, for example, the assimilation of evolution into religious ideas about humanity.

Denial: There is an easier way to retain the special status of the human species, of course. The second approach is simple definitional denial that anything non-biological could ever be conscious, coupled with a claim that only our species has that capacity in full measure. The philosopher John Searle produced a sophisticated version of this argument with his Chinese Room thought experiment, intended as a response to Turing’s imitation game. Imagine a person who does not read Chinese. They are inside a sealed room, and they receive slips of paper on which messages in Chinese have been written. They have also been given an elaborate rule-set which tells them to respond to messages that contain certain Chinese characters with notes of their own, carrying just the right set of ideograms to give the illusion of communication. The person receiving these notes would imagine a Chinese-speaking consciousness inside the room, but no such consciousness exists. So far, Searle’s argument is perfectly reasonable. In fact, he described with remarkable prescience why ChatGPT’s “predict the next word” neural networks do not amount to an understanding of the meaning in the apparently cogent messages they generate. (The same argument works for AI image generators.) His mistake comes when he tries to argue that all machine intelligences must be of this kind, doomed always to syntactic imitation, never semantic comprehension.
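Searle’s room can be caricatured in a few lines of code (a deliberately crude sketch with a made-up rule book, not a model of any real system): a lookup table maps incoming messages to canned replies, producing apparently fluent conversation while understanding nothing.

```python
# The Chinese Room as pure symbol manipulation: a hypothetical rule book
# mapping incoming messages to replies. Nothing here "understands" Chinese.
RULE_BOOK = {
    "你好": "你好！",                    # "hello" -> "hello!"
    "你会说中文吗": "会，说得很流利。",    # "do you speak Chinese?" -> "yes, fluently."
}

def room(message: str) -> str:
    """Follow the rules mechanically; deflect anything not in the book."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "please say that again."

print(room("你好"))  # looks like conversation; it is a table lookup
```

From outside the room, the replies are indistinguishable from those of a speaker; inside, there is only rule-following.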

How could simple 1-or-0 binary circuits ever yield consciousness? This may sound familiar; it resembles a favorite argument of those who denied human evolution, a version of the fallacy of composition: single-celled organisms are not conscious, therefore no conscious being could evolve from such simple beginnings. Of course, that argument was wrong. Might this one be also? Not all AIs are going to have a chatbot’s architecture. Searle’s response is disappointing; he simply makes the oracular pronouncement that “[c]onsciousness is a biological phenomenon like photosynthesis, digestion or mitosis.” As an explanation of why we should believe that consciousness is irreducibly biological, this assumes its conclusion. Nice work if you can get it, but it is no substitute for actual argument. If you admit that our consciousness has a material basis, arising from physical processes in the brain, then it takes chutzpah to believe that only our biological brains could ever produce such processes. Denial doesn’t seem like a great option.

Humility: Third, we could embrace humility. Maybe much of our own quotidian consciousness is more like a chatbot’s imitation than we like to think, a mindless invocation of repetitive patterns without intentionality. In a moment of devastating bathos, Stephen Wolfram said that we had discovered language to be “computationally shallower” than we had thought. One imagines a New Yorker cartoon of two robots gathered around humanity’s grave: “We found them to be computationally shallow.” What an epitaph!

Reflection: The fourth and final option might be the hardest but also the most promising. We could use the happenings of last year as a spur to reflection – combining some of the insights of refinement and of humility. Machine learning could teach us more about ourselves — the mirror looking back at us. This could prompt anxious reappraisal of our species-exceptionalism. It might make us focus more on new scientific ideas about consciousness, like Global Neuronal Workspace Theory. It might make us worry whether we are treating the great apes and the cetaceans correctly. It might lead us to assess what rights, if any, we should confer on artificially created beings—even if those rights were granted as a matter of convenience, not moral kinship. (Looking at you, corporations.) Better to think about that now than when Hal is knocking on our doors.

There is a fifth approach, of course. We could just ignore it all—a truly human capability. We could use our chatbots to write scripts about crabs fighting hot dogs on the moon, or just to cheat on exams, and forget the rest. We could, in other words, just shake it off. With no disrespect to Ms. Swift, that would be a shame. The Unperson of 2023 has lessons to teach us.

James Boyle is the William Neal Reynolds Professor of Law at Duke Law School. His new book, The Line: AI and the Future of Personhood, will be published under a Creative Commons License by MIT Press in 2024.  Preprints of the introduction and first two chapters can be found here.


Comments on “The Unperson Of 2023”

17 Comments
Anonymous Coward says:

If you look at the conversations Lemoine posted with the internal AI at Google, he was probably right. He wasn’t saying ChatGPT was sentient; he was talking about the internal version of an AI that Google still hasn’t released (it’s not Bard). The system he was working with had historical memory, as opposed to simply being a fresh copy of the most recent model. They fired him for violating his NDA and contradicting official company positions. That doesn’t mean he was wrong.

Anonymous Coward says:

We do what we want to do because we want to do it. We don’t need any more justification than that. We don’t need proof of human uniqueness or superiority. We have evolved systems on an axis of selfishness and altruism because both are useful for survival and so have survived. AI doesn’t change any of that, regardless of what it can do.

Drew Wilson (user link) says:

Why do people care about the Person of the Year?

I have to ask: is there a reason why Time Magazine is given as much credibility as it has?

There are so many articles out there implying that the Time Magazine Person of the Year is anything other than a complete waste of time. I think of it like any other dime-a-dozen top-10 list, cheaply thrown together after whoever wrote it spent an hour punching keywords into Google.

What’s more, I can’t think of anything that Time Magazine has really contributed in the world of journalism in the last decade. Yet, for reasons that I don’t get, we put Time Magazine on a pedestal as if they are some sort of icon in culture and journalism.


Anonymous Coward says:

The Unperson of the Year?

Hot take: the Unperson(s) of the Year is probably going to be the chucklenuts dragged over the Israel-Gaza border on Oct 7.

Imagine being a hostage whose life value is so low, the hostage takers have no incentive to keep you alive, the rescuers manage to blow you up in fear of an ambush, the country mounting the rescue gets more flak for attempting the rescue to start with and there’s so much turmoil around you even existing the world would be far better off if everyone just shrugged, said “Meh” and left you to your own devices.

If that doesn’t make someone an “unperson” I don’t know what does…


Anonymous Coward says:

Re: Re:

I wouldn’t go so far to claim that, but that’s the geopolitical reality on the ground. Even in their own home, close to nobody save for the immediately affected families thinks that the conflict is remotely worth dragging out. Meanwhile Hamas has absolutely no incentive to keep their hostages alive or healthy. The US’s reputation has also taken an almighty beating by refusing to compromise on its ally. Whatever move the US or Israel makes? They lose.

Netanyahu’s got better odds of growing a pair of tits than he’s got dismantling Hamas at this point.


Anonymous Coward says:

Re: Re: Re:2

You’re not wrong, there, and that’s precisely what Hamas has effectively weaponized for their own benefit.

Imagine being able to waltz over a border, unalive a thousand people, kidnap a couple hundred, more or less decline to keep them alive, and leave either the elements or the rescue team to do the dirty work for you. Imagine doing all that and still coming out of it looking like the heroes or freedom fighters in the equation. Why would Hamas agree to anything that would deviate from the current status quo?
