Editor-In-Chief Of RT, Russia’s Main Propaganda Network, Says Many Of Its Presenters Are AI-Generated – If You Can Believe Her

from the what-is-truth dept

RT, formerly Russia Today, has appeared a few times here on Techdirt. As the long article about RT on Wikipedia explains, the TV channel has morphed from an attempt to create a state-supported international news network along the lines of the BBC or France 24, but one offering a Russian perspective on the world, into something now regarded as little more than a mouthpiece for Kremlin propaganda (disclosure: I was interviewed by Russia Today a couple of times over a decade ago).

Following Russia’s invasion of Ukraine, RT is now banned in many Western nations, but still commands audiences in other countries and online. As a result, considerable resources are still expended on RT and on the programs it produces. It is also continuing to explore new ways to reach people, and perhaps to save money, judging by a story on the independent Russian media site Agentstvo. It reports on statements made by RT’s editor-in-chief, Margarita Simonyan (original in Russian, translation by DeepL):

A ‘significant proportion’ of RT’s TV presenters do not exist, they have been created by artificial intelligence, Margarita Simonyan, editor-in-chief of the state channel, told TVC [TV Centre, a channel owned by the city administration of Moscow]. According to the propagandist, the artificial presenters also run their own social networks.

‘We have a significant proportion of TV presenters who do not exist. They don’t exist, they are artificial completely. This person doesn’t exist, and never did. This face never existed, we generated the voice, everything else, the character,’ the RT chief said.

One of the non-existent presenters had invited people to subscribe to her Telegram channel, promising the first readers [of the Telegram channel] amnesty ‘when we come to power,’ Simonyan retold the joke of her ‘colleague.’

Of course, with the head of a propaganda channel, there is always the risk that such statements are just more propaganda designed to mislead. But using AI-generated presenters makes a lot of sense here. They will never go off script, as humans might; the script that they read can be tweaked endlessly without the presenter getting tired or bored; and the spoken words can even be translated into the other languages in which RT broadcasts, read out by the same presenter with different mouth movements, or by a completely different one. A later comment from Simonyan seems to confirm this is happening:

RT has given up on broadcast editors; now images are selected or created by AI, Simonyan noted. This makes the process much cheaper, she explained. AI is involved in the dubbing of a new film about the Great Patriotic War [Russia’s name for the Eastern Front in World War II], using it to re-translate Vladimir Putin’s words into other languages, Simonyan said.

Journalism, as the ‘dark one’ of professions, will eventually disappear, like the coachmen did, the propagandist believes.

Moving towards a completely virtual, AI-generated network, complete with AI presenters and editors, would raise huge questions if the plan were to present conventional news reporting or features, with the underlying aim of presenting facts and the truth (whatever that means). But as a network that is designed to broadcast Russian propaganda, those questions are irrelevant. All that matters for Simonyan and her ultimate boss, Vladimir Putin, is whether the propaganda works, and if it can be generated in larger volumes, and more cheaply. Unfortunately, as is evident from everyday experience online, AI-generated fakes and lies do indeed work remarkably well. What is less clear is if other broadcasters, especially state-funded ones, will be able to resist the pressure to start using more AI, at least for backroom editorial functions, but maybe even for presenters, in order to compete in this brave new (artificial) world.

Follow me @glynmoody on Bluesky and on Mastodon.



Comments on “Editor-In-Chief Of RT, Russia’s Main Propaganda Network, Says Many Of Its Presenters Are AI-Generated – If You Can Believe Her”

Anonymous Coward says:

They will never go off script, as humans might

Unfortunately AI is prone to hallucinations, so they could very well go off script. But the assumption would be that any hallucination would only enhance the propaganda, so that might not be considered a problem.

Anonymous Coward says:

Re:

I’m (painfully) well aware that the term “AI hallucination” has entered the lexicon, but it’s wrong and misleading. It implies cognitive ability gone wrong, and AIs have no cognitive ability. They’re just autocorrect on steroids.

That aside: if someone wanted to program AIs to spew propaganda, they could train them on…propaganda, and ONLY propaganda. If every input includes “We have always been at war with East Asia” (and you had better recognize that quote) and no input includes any other sentence beginning with “We have always been at war with”, then the language model will dutifully issue that completion every single time, i.e., the weight in the network will be 1.0.

This will work because, unlike real journalism/news reporting, with its nuance and varied styles and perspectives, propaganda says the same thing the same way every time. It’s thus far easier to build a stochastic parrot (see the paper by Bender et al.) for propaganda than for ordinary news reporting.
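The commenter’s point about a completion weight of 1.0 can be sketched with a toy bigram model; this is a deliberately simplified stand-in for a real language model (the corpus and function names here are illustrative assumptions, not anything from the comment):

```python
from collections import Counter, defaultdict

# Toy "stochastic parrot": a next-word model trained on a corpus
# in which a given prefix is only ever followed by one continuation.
corpus = [
    "we have always been at war with eastasia",
    "we have always been at war with eastasia",
    "victory is certain we have always been at war with eastasia",
]

# Count next-word frequencies for every word in the corpus.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def completion_weight(prev, nxt):
    """Estimate P(next word | previous word) from the counts."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# No training sentence ever follows "with" by any other word,
# so the model assigns this continuation a weight of exactly 1.0.
print(completion_weight("with", "eastasia"))  # -> 1.0
```

Because the training data never varies, the conditional probability collapses to certainty, which is exactly why a propaganda-only corpus yields a parrot that never deviates.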

Anonymous Coward says:

Re: Re:

but it’s wrong and misleading.

Welcome to language where it’s used wrong until the wrong usage becomes right and people look at you funny when you say “champing at the bit.”

The thing is, “hallucination” was originally used to describe how an LLM would conjure facts and purport to believe them to be true, so the usage was poetically true, though not literally. Then the word was simply reused to describe any untrue thing an LLM spewed.

But everything is marketing and hyperbole. They’re not even AIs, but we call them that because of marketing. So nitpicking the language is a bit useless, because you’re not going to stop people from using such terminology. It even gets used ironically and intentionally wrongly as a joke or reference, and then others miss the reference and it becomes a sincere usage.

