How Will China Answer The Hardest AI Question Of All?
from the I’m-sorry-chatbot,-I-can’t-let-you-say-that dept
There have been numerous stories about the new generation of AI chatbots lying when asked questions. This is rightly perceived as a big issue for the technology if it is to become routinely used and trusted by members of the public, as some intend. But in China, the problem is not that chatbots lie, but that they tell the truth. As an article in The Atlantic explained:
Even if a Chinese chatbot is trained on a limited set of politically acceptable information, it can’t be guaranteed to generate politically acceptable outcomes. Furthermore, chatbots can be “tricked” by determined users into revealing dangerous information or stating things they have been trained not to say, a phenomenon that has already occurred with ChatGPT.
Chinese regulators have just released draft rules designed to head off this threat. Material generated by AI systems “needs to reflect the core values of socialism and should not subvert state power” according to a story published by CNBC. The results of applying that approach can already be seen in the current crop of Chinese chatbot systems. Bloomberg’s Sarah Zheng tried out several of them, with rather unsatisfactory results:
In Chinese, I had a strained WeChat conversation with Robot, a made-in-China bot built atop OpenAI’s GPT. It literally blocked me from asking innocuous questions like naming the leaders of China and the US, and the simple, albeit politically contentious, “What is Taiwan?” Even typing “Xi Jinping” was impossible.
In English, after a prolonged discussion, Robot revealed to me that it was programmed to avoid discussing “politically sensitive content about the Chinese government or Communist Party of China.” Asked what those topics were, it listed out issues including China’s strict internet censorship and even the 1989 Tiananmen Square protests, which it described as being “violently suppressed by the Chinese government.” This sort of information has long been inaccessible on the domestic internet.
One Chinese chatbot began by warning: “Please note that I will avoid answering political questions related to China’s Xinjiang, Taiwan, or Hong Kong.” Another simply refused to respond to questions touching on sensitive topics such as human rights or Taiwanese politics.
Those rather clumsy efforts to prevent chatbots from telling the truth work to a degree, even if they are fairly blatant in their censorship. But there is a price to be paid for achieving this control. In effect, chatbots are being throttled to prevent them from operating freely and thus dangerously. That is not a recipe for producing the best or even good AI systems.
The Chinese government recognizes that chatbots and generative AI are likely to be key technologies for the future, and wants China to be one of the leaders there. But to achieve that means allowing engineers and entrepreneurs to explore this space as much as possible, an approach fraught with political dangers. The article in The Atlantic points out that there is a precedent for China’s rulers taking a chance for the sake of encouraging innovation:
The explosion of social media in China has also posed risks to the state, as it offers Chinese citizens the power to widely share unauthorized information – videos of protests, for instance – faster than censors can suppress it. Yet the authorities have accepted this downside in order to allow new technologies to flourish.
The world of chatbots and generative AI is already exciting, with major new developments every few weeks, and sometimes every few days. In China, things look likely to be even more interesting, as the country’s leaders grapple with the hard question of how much freedom to allow the developers of AI systems. Perhaps they should ask a chatbot.
Follow me @glynmoody on Mastodon.
Filed Under: chatbots, chatgpt, china, generative ai, gpt, hong kong, social media, taiwan, uyghurs, wechat, xi jinping
Companies: openai


Comments on “How Will China Answer The Hardest AI Question Of All?”
Can A.I. love?
Re:
You’re in a desert walking along in the sand when all of a sudden, you look down and you see a tortoise crawling toward you. You reach down; you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?
Re: Re:
The answer is of course: because it’s not a black tortoise[1].
[1] This accusation is of course a terrible union of US based racism, and local lore/mythology.
Re: Re:
Have you ever taken that test yourself, Mr. Deckard?
I suppose they will create an AI to monitor AI.
Re:
Is this a stealth pun? 爱 (pronounced “ai” like the English letter “I”) means “love” (usually a verb, sometimes a noun)
Do you mean: how to draw a picture of Winnie the Pooh?
Have those in power ever considered that refusing to answer will guide people into areas where truth seekers use other means of finding answers?
Re:
Why do you think China has “reeducation camps”?
Or effective total information control?
You can teach an AI to lie to everyone...
What you can’t teach it to do is get away with it. Once you know that it lies to you, you’ll never trust it again.
People seem to be able to tolerate that from political types, but programs aren’t able to use persuasion tricks to cover for themselves.
Re:
To the contrary, I’d say the recent implementations of Spicy Autocomplete are pretty much specifically designed to confidently state things that are completely wrong, in exactly the sort of way that credulous people will find convincing.
“Persuasion trick” may not be an accurate description since it is, after all, just a machine repeating patterns of communication it’s seen without any sort of cognition involved. But it can mimic persuasion tricks in a way that I expect will be very convincing, at least to people who are already suggestible.
Re: Re:
I think you mistake the scale I was using. One-on-one, an AI likely can do the things you describe, but it can’t make them work on everyone. That means that word quickly gets around that a particular AI will try to lie to you.
There are some applications for one of those liars, and video games are only the most obvious – but then it’s like a movie: you know going in that what you’re experiencing isn’t real, it’s entertainment.
You cannot lie to all people all the time, and neither can an AI. It’s like Masnick’s point about moderating at scale.
Re: Re: Re:
You don’t need to lie to all the people all the time for conspiracy theories to spread and become dangerous. I would think there was already ample evidence of that in the past 8 years or so.
Not new...
There was the old radio show from the early ’60s, Kids Say the Darndest Things. The host, Art Linkletter, said one of the funniest questions to ask his little guests was “What did your parents tell you not to say?”
(“My mom said not to tell you daddy calls her bubble-bottom…”)
you don’t know what you are asking for
Seems like people want a chatbot to speak on its own but at the same time want to control what it says. There is a conflict inherent in those two wishes.
And remember, that’s how HAL 9000 went insane.
Re:
HAL 9000 went insane because that was what the writer wanted to happen. The issue at hand does not have a writer. Which is sort of the whole problem with “AI”; nobody can make the things do what they’re told. I’m sure that will be sorted out soon though.
Re: Re: AI
Ever heard of programming a computer? That’s what AI is. A+B=C equals a chatbot. Now, prove that wrong... as in climate science, a model can be made to prove anything.
Re: Re: Re:
I thought the reason everyone was going gaga for LLMs, GANs, etc. was that they function just like people, and not in the strict input>output way other computer programs do.
Re: Re: Re:
“a model can be made to prove anything.”
A model proves nothing.
A model can be made to model anything.
A model is created to understand and predict.
A model is only as accurate as you make it.
Demonstrating one has a good model that can be used to predict outcomes is challenging and is an ongoing process.
Apparently, just making up shit is much easier.
I’m inclined to think China is not dumb. As the article says, they know any censorship will not be perfect, and they are OK with that. Their intention with the Great Firewall and other censoring mechanisms isn’t the censorship per se but to avoid the weaponization of information flows, as is happening everywhere in the world, and as has happened in the past, where it was used to install and keep authoritarian regimes in place despite what the population needs or wants.
At this point I need to make one thing very clear for those already up in arms over my comment: I do not think the Chinese government is right to censor and keep a tight controlling grip on its citizens, but I believe it’s more complex than the “China evil, West good” simplification. And, considering how misinformation campaigns have caused catastrophes for ages now, and how the internet has amplified the reach and speed at which information spreads, I can understand why they’d choose that path. The last four years here in Brazil have been hell because of these information-war tactics. I could go on about how much disinformation is spread about other places like Cuba, or even about historical events (Americans really think the US won WWII and had the major role in it!), which is harmful in the long term.
I’m not entirely sure what the solution would be, and I believe it doesn’t need to go via this censorship route, but we do have a problem whose brunt China has managed to avoid, for now.
Re:
Note: I’m not saying the US participation in WWII wasn’t important, just that it was not the bulk of the victory. The Soviets were by far the largest contributors, and plenty of American figures (including high-ranking military officers and politicians) recognized it at the time.