Silicon Valley Starts Hiring Poets To Fix Shitty Writing By Undercooked “AI”

from the I'm-sorry-I-can't-do-that,-Dave dept

When it comes to the early implementation of “AI,” it’s generally been the human beings who are the real problem.

Case in point: the fail-upward incompetents who run the U.S. media and journalism industries have rushed to use large language models (LLMs) to cut corners and attack labor. They’ve made it very clear they’re not at all concerned that these new systems are prone to mistakes and plagiarism, resulting in angry employees, a lower-quality product, and (further) eroded consumer trust.

While AI certainly has many genuine uses for productivity, many VC hustlebros see AI as a way to create an automated ad engagement machine that effectively shits money and undermines already underpaid labor. The actual underlying technology is often presented as akin to science fiction or magic; the ballooning server costs, environmental impact, and $2-an-hour developing-world labor powering it are obscured from public view whenever possible.

But however much AI hype-men would like to pretend AI makes human beings irrelevant, humans remain essential for both the underlying illusion and the reality to function. As such, a growing number of Silicon Valley companies are hiring poets, English PhDs, and other writers to write short stories for LLMs to train on, in a bid to improve the quality of their electro-mimics:

“A string of job postings from high-profile training data companies, such as Scale AI and Appen, are recruiting poets, novelists, playwrights, or writers with a PhD or master’s degree. Dozens more seek general annotators with humanities degrees, or years of work experience in literary fields. The listings aren’t limited to English: Some are looking specifically for poets and fiction writers in Hindi and Japanese, as well as writers in languages less represented on the internet.”

LLMs like ChatGPT have struggled to accurately replicate poetry. One study found that after being presented with 17 poem examples, the technology still couldn’t accurately write a poem in the style of Walt Whitman. While Whitman’s poems are often less structured, ChatGPT kept trying to produce poems in traditional stanzas, even when explicitly told not to. The problem got notably worse in languages other than English, driving up the value, for now, of non-English writers.

So it’s clear we still have a long way to go before these technologies actually get anywhere close to matching either the hype or the employment apocalypse many predicted. LLMs are effectively mimics that create from what already exists. Since they’re not real artificial intelligence, they’re still not actually capable of true creativity:

“They are trained to reproduce. They are not designed to be great, they try to be as close as possible to what exists,” Fabricio Goes, who teaches informatics at the University of Leicester, told Rest of World, explaining a popular stance among AI researchers. “So, by design, many people argue that those systems are not creative.”

That, for now, creates additional value for the employment of actual human beings with actual expertise. You need to hire humans to produce the writing models train on, and you need editors to fix the numerous problems undercooked AI creates. The homogenized blandness of the resulting simulacrum also, for now, likely puts a premium on thinkers and writers who actually have something original to say.

The problem remains that while the underlying technology will continuously improve, the folks rushing to implement it without thinking likely won’t. Most seem dead set on using AI primarily as a bludgeon against labor, in the hopes the public won’t notice the drop in quality, and that professional writers, editors, and creatives won’t mind increasingly lower pay and a more tenuous position in the food chain.



Comments on “Silicon Valley Starts Hiring Poets To Fix Shitty Writing By Undercooked “AI””

10 Comments
Anonymous Coward says:

and professional writers, editors, and creatives won’t mind increasingly lower pay and tenuous position in the food chain.

Or they can use the Internet to form cooperatives and sell their own work direct to the public. It is not as if publishing now needs the support of industrial printing and logistics systems, or even commercial office space.

A limited number of trusted electronic spaces to find and pay for content means people are more likely to pay. That is the advantage of Amazon and eBay: they deal with the payments, reducing the risks of using a credit card online.

Anonymous Coward says:

LLMs are effectively mimics that create from what already exists

And I think this is exactly why the copyright/deep learning art discussion is controversial.

The companies creating these programs are only using pieces of the originals in small, abstract ways, but there is no denying that the added work is valuable in a very real way. Until the algorithms are producing works that they can learn from (if they ever can), the books, pictures, music, and whatever other media they’re consuming are essential to making a better program.

Thank you for the article, I appreciate it.

Anonymous Coward says:

Re: Re:

Humans are good at extrapolating, but generally bad at interpreting large sets of data. We can do it, but balancing statistical risk/reward analysis requires training and education. It’s neither natural nor easy.

The models, then, are the opposite: they are very good at interpreting large sets of data, and they don’t make extravagant jumps outside of it. That makes them incredibly useful tools, but fundamentally different from humans.

An example might be seeing a tiger in the grass. A human might then wonder if there is a tiger in every grass patch. A deep learning model will apply only a small correlation between tigers and grass patches, potentially none at all.

Anonymous Coward says:

Re: Re: Re: Tiger, tiger, lurking ... in the shadows of the grass

An example might be seeing a tiger in the grass. A human might then wonder if there is a tiger in every grass patch. A deep learning model will apply only a small correlation between tigers and grass patches, potentially none at all.

If only police had had help from an AI back then.

Will the AI advise the owner to keep his head down and say nuffin’?

Search terms used were

tiger southampton helicopter

Anonymous Coward says:

Re:

I’d argue that the “mimics” explanation is still misleading and leads to incorrect comparisons to biological intelligence.

Large Language Models are nothing more than translation sieves. You put input in one side, and based on the weightings and the material the sieve has been exposed to, a translation comes out the other side.

It’s not mimicking anything, which is why we have issues like “mimic the works of Walt Whitman” not doing what’s expected. The LLM isn’t actually attempting to follow the imperative; it just takes the phrase, breaks it down, and then follows a weighted path through linked symbols built from a training set of works, until it arrives at the other end with something statistically relevant to the input symbols, in the context of the paths created by that training set.
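To make the “weighted path” idea concrete, here is a deliberately tiny sketch in Python. It is not how real LLMs are built (they use neural networks over subword tokens, not word-level counts), and the toy corpus and function names are hypothetical, but the generation loop has the same shape: pick the next symbol in proportion to how often it followed the current one in the training material, which is why the output comes out statistically plausible rather than creative.

```python
import random
from collections import defaultdict

# Toy "training set": a scrap of Whitman. Real systems train on vastly
# more text and learn their weights, rather than counting words directly.
corpus = "I celebrate myself and sing myself and what I assume you shall assume".split()

# Record which word followed which, and how often: the "weightings".
weights = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def generate(prompt_word, length=8):
    """Follow a weighted path through linked symbols: at each step, pick
    the next word in proportion to how often it followed the current one."""
    out = [prompt_word]
    for _ in range(length):
        options = weights.get(out[-1])
        if not options:  # dead end: nothing in training ever followed this word
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("I"))  # a statistically plausible continuation, not a new idea
```

Note that nowhere in that loop is there a step that considers what the prompt is asking for; an instruction like “don’t use stanzas” is just more symbols feeding the same weighted lookup, which is consistent with the Whitman failure described in the article.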

Humans are wonderful at inventing narrative, though, so they take this complex sieve and, since it’s too complex for a human to comprehend all at once, think of what it’s doing in terms of what a human would do with the same input to arrive at the same output. And from there, we get significant misunderstandings about both capabilities and techniques.
