Microsoft’s Use Of ‘AI’ In Journalism Has Been An Irresponsible Mess
from the I'm-sorry-I-can't-do-that,-Dave dept
We’ve noted repeatedly how early attempts to integrate “AI” into journalism have proven to be a comical mess, resulting in no shortage of shoddy product, dangerous falsehoods, and plagiarism. It’s thanks in large part to the incompetent executives at many large media companies, who see AI primarily as a way to cut corners, assault unionized labor, and automate lazy and mindless ad engagement clickbait.
The folks rushing to implement half-cooked AI at places like Red Ventures (CNET) and G/O Media (Gizmodo) aren’t competent managers to begin with. Now they’re integrating “AI” with zero interest in whether it actually works or if it undermines product quality. They’re also often doing it without telling staffers what’s happening, revealing a widespread disdain for their own employees.
Things aren’t much better over at Microsoft, where the company’s MSN website had already been drifting toward low-quality clickbait and engagement gibberish for years. They’re now busy automating a lot of the content at MSN with half-baked large language models, and it’s… not going great.
The company recently came under fire after MSN reprinted a Guardian story about the murder of a young Australian woman, including a tone-deaf AI-generated poll some felt made light of the death. But as CNN notes, MSN has also been rife with “news” that’s either weirdly heartless or just false, even in instances where it’s simply republishing human-written content from other outlets:
“In August, MSN featured a story on its homepage that falsely claimed President Joe Biden had fallen asleep during a moment of silence for victims of the catastrophic Maui wildfire.
The next month, Microsoft republished a story about Brandon Hunter, a former NBA player who died unexpectedly at the age of 42, under the headline, “Brandon Hunter useless at 42.”
Then, in October, Microsoft republished an article that claimed that San Francisco Supervisor Dean Preston had resigned from his position after criticism from Elon Musk.”
It’s a pretty deep well of dysfunction. One of my personal favorites was when an automated article on Ottawa tourism recommended that tourists prioritize a trip to a local food bank. When caught, Microsoft often tries to pretend the problem isn’t lazily implemented automation, deletes the article, then just continues churning out automated clickbait gibberish.
While Microsoft executives have posted endlessly about the responsible use of AI, that apparently doesn’t include their own news website. MSN is routinely embedded as the unavoidable default launch page at a lot of enterprises and companies, ensuring this automated bullshit sees fairly widespread distribution even if users don’t actually want to read any of it.
Microsoft, for its part, says it will try to do better:
“As with any product or service, we continue to adjust our processes and are constantly updating our existing policies and defining new ones to handle emerging trends. We are committed to addressing the recent issue of low quality articles contributed to the feed and are working closely with our content partners to identify and address issues to ensure they are meeting our standards.”
Again though, MSN, like so many outlets, had been drifting toward garbage clickbait long before large language models came around. AI has just supercharged existing bad tendencies. Most of these execs see AI as a money-saving shortcut to creating automated ad-engagement machines that effectively shit money — without the pesky need to pay human editors or reporters a living wage.
With an army of well-funded authoritarian hacks keen on using propaganda to befuddle the masses at unprecedented scale, quality, ethical journalism is more important than ever. But instead of fixing the sector’s key shortcomings or paying our best reporters and editors a living wage, we’re seemingly dead set on ignoring their input and doubling down on — and automating — all of the sector’s worst habits.
While the AI will certainly improve, there’s little indication the executives making key decisions will. U.S. journalism has been on a very unhealthy trajectory for a long while thanks to these same execs, who will dictate most of what happens next without really consulting (or in many instances even telling) the employees who actually understand how the industry works.
What could possibly go wrong?
Filed Under: ai, artificial intelligence, disinformation, failures, journalism, large language models, misinformation, propaganda
Companies: microsoft

Comments on “Microsoft’s Use Of ‘AI’ In Journalism Has Been An Irresponsible Mess”
.. and now I read that AI is being used to generate health insurance denials. This is crazy.
And MS integrates this “””news””” into the Windows start menu!
Aren’t we being a bit overly pessimistic? Or underly optimistic? AI should be capable of acting as a journalist about as well as it does at being a lawyer.
Re: The law is no news.
I do think AIs can make better lawyers than journalists. Lawyers only have to combine historical data with the inputs of their clients. Journalists have to actively search for news: information that is not yet widely available. And the best journalistic pieces are about facts that those involved would like to keep secret.
Re: Re:
Watching too many tv lawyer shows huh
Re: Re:
That is the least informed definition of a Lawyer’s work that I’ve ever read.
Re:
We have good examples of AI working every bit as well as some professional journalists (say, the Gaza hospital bombing article) and lawyers (those cases we heard about were not filed by the AI; they were filed by the lawyers).
And hey, AI can even provide realistic-sounding non-apologies, too!
Re: Re:
It’s as good as the worst a human can do, so it ain’t bad, huh.
Well, that’s good news. Nothing to see here, move along.
Re:
Think of it this way: how would you handle a new hire? The work of a new journalist or lawyer should be reviewed by someone more senior (or more experienced).
After all, the output they publish or submit will reflect on the reputation of the organization involved. You wouldn’t want that to get tarnished by the new guy.
G/O Media’s journalism has always been tabloid-tier sensationalism, and their lazy attempts to integrate AI just prove it. I’m actually very sure that, used right, ChatGPT could generate a better and more informative article about a topic than your average Kotaku or Gizmodo writer.
“As with any product or service, we continue to adjust our processes and are constantly updating our existing policies and defining new ones to handle emerging trends.”
I read that as “Introducing our NEW AND IMPROVED bait-clickbait! Now with 50% more bait in your clickbait!”
Does anyone read it differently?
Maybe Microsoft uses AI for its AI’s QA. Not that the results are any worse than most Microsoft products.
Garbage In Garbage Out
I suspect the AI doesn’t do anything on its own and depends on input and authorization to post output. That output was likely approved by a human being who also fed it the data.
Garbage in, Garbage out