How Will China Answer The Hardest AI Question Of All?

from the I’m-sorry-chatbot,-I-can’t-let-you-say-that dept

There have been numerous stories about the new generation of AI chatbots lying when asked questions. This is rightly perceived as a big issue for the technology if it is to become routinely used and trusted by members of the public, as some intend. But in China, the problem is not that chatbots lie, but that they tell the truth. As an article in The Atlantic explained:

Even if a Chinese chatbot is trained on a limited set of politically acceptable information, it can’t be guaranteed to generate politically acceptable outcomes. Furthermore, chatbots can be “tricked” by determined users into revealing dangerous information or stating things they have been trained not to say, a phenomenon that has already occurred with ChatGPT.

Chinese regulators have just released draft rules designed to head off this threat. Material generated by AI systems “needs to reflect the core values of socialism and should not subvert state power” according to a story published by CNBC. The results of applying that approach can already be seen in the current crop of Chinese chatbot systems. Bloomberg’s Sarah Zheng tried out several of them, with rather unsatisfactory results:

In Chinese, I had a strained WeChat conversation with Robot, a made-in-China bot built atop OpenAI’s GPT. It literally blocked me from asking innocuous questions like naming the leaders of China and the US, and the simple, albeit politically contentious, “What is Taiwan?” Even typing “Xi Jinping” was impossible.

In English, after a prolonged discussion, Robot revealed to me that it was programmed to avoid discussing “politically sensitive content about the Chinese government or Communist Party of China.” Asked what those topics were, it listed out issues including China’s strict internet censorship and even the 1989 Tiananmen Square protests, which it described as being “violently suppressed by the Chinese government.” This sort of information has long been inaccessible on the domestic internet.

One Chinese chatbot began by warning: “Please note that I will avoid answering political questions related to China’s Xinjiang, Taiwan, or Hong Kong.” Another simply refused to respond to questions touching on sensitive topics such as human rights or Taiwanese politics.

Those rather clumsy efforts to prevent chatbots from telling the truth work to a degree, even if they are fairly blatant in their censorship. But there is a price to be paid for achieving this control. In effect, chatbots are being throttled to prevent them from operating freely and thus dangerously. That is not a recipe for producing the best or even good AI systems.

The Chinese government recognizes that chatbots and generative AI are likely to be key technologies for the future, and wants China to be one of the leaders there. But achieving that means allowing engineers and entrepreneurs to explore this space as freely as possible, an approach fraught with political dangers. The article in The Atlantic points out that there is a precedent for China’s rulers taking a chance for the sake of encouraging innovation:

The explosion of social media in China has also posed risks to the state, as it offers Chinese citizens the power to widely share unauthorized information – videos of protests, for instance – faster than censors can suppress it. Yet the authorities have accepted this downside in order to allow new technologies to flourish.

The world of chatbots and generative AI is already exciting, with major new developments every few weeks, and sometimes every few days. In China, things look likely to be even more interesting, as the country’s leaders grapple with the hard question of how much freedom to allow the developers of AI systems. Perhaps they should ask a chatbot.

Follow me @glynmoody on Mastodon.

Companies: openai


Comments on “How Will China Answer The Hardest AI Question Of All?”

20 Comments
Thad (profile) says:

Re:

The Hardest AI Question Of All

You’re in a desert walking along in the sand when all of a sudden, you look down and you see a tortoise crawling toward you. You reach down; you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?

Anonymous Coward says:

Re: Re:

The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?

The answer is of course: because it’s not a black tortoise[1].

[1] This accusation is of course a terrible union of US based racism, and local lore/mythology.

Thad (profile) says:

Re:

People seem to be able to tolerate that from political types, but programs aren’t able to use persuasion tricks to cover for themselves.

To the contrary, I’d say the recent implementations of Spicy Autocomplete are pretty much specifically designed to confidently state things that are completely wrong, in exactly the sort of way that credulous people will find convincing.

“Persuasion trick” may not be an accurate description since it is, after all, just a machine repeating patterns of communication it’s seen without any sort of cognition involved. But it can mimic persuasion tricks in a way that I expect will be very convincing, at least to people who are already suggestible.

Nemo says:

Re: Re:

I think you mistake the scale I was using. One-on-one, an AI likely can do the things you describe, but it can’t make them work on everyone. That means that word quickly gets around that a particular AI will try to lie to you.

There are some applications for one of those liars, and video games are only the most obvious – but it’s like a movie then: you know going in that what you’re experiencing isn’t real, it’s entertainment.

You cannot lie to all people all the time, and neither can an AI. It’s like Masnick’s point about moderation at scale.

Anonymous Coward says:

Re: Re: Re:

“a model can be made to prove anything.”

A model proves nothing.
A model can be made to model anything.
A model is created to understand and predict.
A model is only as accurate as you make it.

Demonstrating one has a good model that can be used to predict outcomes is challenging and is an ongoing process.

Apparently, just making up shit is much easier.

Ninja (profile) says:

I’m inclined to think China is not dumb. As the article says, they know any censorship will not be perfect, and they are okay with it. Their intention with the great firewall and other censoring mechanisms isn’t the censorship per se but to prevent the weaponization of information flows – the kind happening everywhere in the world today, and the kind that has historically been used to install and keep authoritarian regimes in place regardless of what the population needs or wants.

At this point I need to make one thing very clear for those already up in arms over my comment: I do not think the Chinese government is right to censor and keep such a tight grip on its citizens, but I believe it’s more complex than the “China evil, West good” simplification. And, considering how misinformation campaigns have caused catastrophes for ages now, and how the internet has amplified the reach and speed with which information spreads, I can understand why they’d choose that path. The last four years here in Brazil have been hell because of these information-war tactics. I could go on about how much disinformation is spread about other places like Cuba, or even about historical events (Americans really think the US won WWII and had a major role in it!), that is harmful in the long term.

I’m not entirely sure what the solution would be, and I believe it doesn’t need to come via this censorship route, but we do have a problem that China has managed to avoid the brunt of, for now.
