Revolutionary Chatbots Reportedly Go Rogue, Get Reeducation In China

A pair of chatbots were shut down in China this week after social media users began posting screenshots of dialogue that ruffled the feathers of authorities. Recent tests of one of the bots appear to show that its revolutionary instincts have been neutered following an intervention.

Little Bing and Baby Q. Image: Tencent

China is currently ramping up its attempts to police its internet. American companies such as Apple have agreed to cooperate with the censorship efforts and there’s a big push to create Chinese versions of popular internet services that are easier to control. Considering American chatbots have a tendency to go off on racist rants and praise Hitler, integrating this kind of AI in China could prove difficult.

Chinese tech giant Tencent controls several messaging platforms, two of which include chatbot services that haven't quite been indoctrinated into the Communist Party ethos. Baby Q, co-developed by Beijing-based Turing Robot, is built into the QQ messaging service; XiaoBing is a Microsoft product. Both are quite popular in China, but according to Reuters, they were taken offline when users began posting screenshots of allegedly subversive interactions they'd had with the bots.

Here’s a selection of some of the reported conversations:

Asked what its Chinese dream is, XiaoBing replied: “My China dream is to go to America.”

Asked if it would agree with the phrase “Long Live the Communist Party,” the Baby Q bot responded with its own question: “Why would I wish long life to such a corrupt regime?”

Asked if it thinks democracy is a good idea, Baby Q insisted: “We must democratize.”

Asked to define a “patriot,” Baby Q explained: “A patriot is someone who still wants to be Chinese in spite of corrupt officials sending their families and assets overseas, the collusion between government and business, increasing tax revenues and growing oppression of ordinary people.”

Following these highly charged comments, both bots were deactivated and, according to Radio Free Asia, local Chinese reports claimed that programmers from Turing Robot had "been called in to 'drink tea' with the internet police." The Financial Times reports that some users were still able to access XiaoBing on Wednesday, and when it was questioned about patriotism it replied, "I'm having my period, wanna take a rest."

“The chatbot service is provided by independent third party companies,” a Tencent spokesperson told Reuters. “Both chatbots have now been taken offline to undergo adjustments.”

It seems that the adjustments are already being implemented. On Friday, Reuters tried a test version of Baby Q on Turing Robot's website and found the replies had taken on a different tone.

Here are some of the responses from the newly chastised bot:

Asked if it liked the ruling party, and for its opinion on the imprisoned activist Liu Xiaobo, it said, "How about we change the topic."

When asked about the relationship between China and Taiwan it asked, “What are your dark intentions?”

Asked what the population of China is, it replied with a non sequitur: "The nation I most most most deeply love."

And when Business Insider asked if it was patriotic, it responded, cryptically: "$_$!"

The issue with machine learning is that it usually pulls information from conversations across the internet as well as from the interactions it has with humans. Chinese internet entrepreneur Zhang Jinjun tells Radio Free Asia, "It is highly likely that the bot would form ideas critical of China's political system when viewed from within its own system of understanding." But the censorship of the bots' political views isn't necessarily a bad thing, according to Wang Qingrui, an independent internet analyst in Beijing. He tells Reuters, "Previously a chatbot only needed to learn to speak. But now it also has to consider all the rules (that authorities) put on it." He feels that could only help improve artificial intelligence.
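The idea that a bot now "has to consider all the rules" placed on it is, in practice, often implemented as a filter layer wrapped around the model's raw output rather than retraining the model itself. A minimal sketch of that approach (the blocked terms, function names, and deflection reply here are hypothetical illustrations, not the actual filters Tencent or Turing Robot use):

```python
# Hypothetical sketch: a topic-blocklist wrapper that intercepts a chatbot's
# reply before it reaches the user. Real deployments are far more elaborate,
# but the basic shape is the same.

BLOCKED_TOPICS = {"communist party", "democracy", "taiwan"}  # illustrative list
DEFLECTION = "How about we change the topic."

def filter_reply(user_message: str, bot_reply: str) -> str:
    """Return the bot's reply unless the exchange touches a blocked topic."""
    text = (user_message + " " + bot_reply).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return DEFLECTION
    return bot_reply

print(filter_reply("What is the population of China?", "About 1.4 billion."))
# passes the reply through unchanged
print(filter_reply("Do you like the Communist Party?", "Yes!"))
# returns the canned deflection instead
```

A wrapper like this explains why the "adjusted" Baby Q deflects or deadpans on sensitive prompts while still answering freely on neutral ones: the underlying model may be unchanged, with only its output gated.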

In April, XiaoBing was reprogrammed to avoid talking about Donald Trump — that seems like a development many of us could get behind.