from the I'm-sorry-I-can't-do-that,-Dave dept
When it comes to the early implementation of “AI,” it’s generally been the human beings who are the real problem.
Case in point: the fail-upward incompetents that run the U.S. media and journalism industries have rushed to use large language models (LLMs) to cut corners and attack labor. They’ve made it very clear they’re not at all concerned that these new systems are prone to mistakes and plagiarism, resulting in angry employees, a lower-quality product, and (further) eroded consumer trust.
While AI certainly has many genuine uses for productivity, many VC hustlebros see AI as a way to create an automated ad engagement machine that effectively shits money and undermines already underpaid labor. The actual underlying technology is often presented as akin to science fiction or magic; the ballooning server costs, environmental impact, and $2 an hour developing world labor powering it are obscured from public view whenever possible.
But however much AI hype-men would like to pretend AI makes human beings irrelevant, humans remain essential for both the underlying illusion and reality to function. As such, a growing number of Silicon Valley companies are hiring poets, English PhDs, and other writers to produce short stories for LLMs to train on, in a bid to improve the quality of their electro-mimics:
“A string of job postings from high-profile training data companies, such as Scale AI and Appen, are recruiting poets, novelists, playwrights, or writers with a PhD or master’s degree. Dozens more seek general annotators with humanities degrees, or years of work experience in literary fields. The listings aren’t limited to English: Some are looking specifically for poets and fiction writers in Hindi and Japanese, as well as writers in languages less represented on the internet.”
LLMs like ChatGPT have struggled to accurately replicate poetry. One study found that even after being presented with 17 example poems, the technology still couldn’t accurately write a poem in the style of Walt Whitman. While Whitman’s poems are often loosely structured, ChatGPT kept producing poems in traditional stanzas, even when explicitly told not to. The problem got notably worse in languages other than English, driving up the value, for now, of non-English writers.
So it’s clear we still have a long way to go before these technologies come anywhere close to matching the hype, or delivering the employment apocalypse many predicted. LLMs are effectively mimics that remix what already exists. Since it’s not real artificial intelligence, the technology still isn’t actually capable of true creativity:
“They are trained to reproduce. They are not designed to be great, they try to be as close as possible to what exists,” Fabricio Goes, who teaches informatics at the University of Leicester, told Rest of World, explaining a popular stance among AI researchers. “So, by design, many people argue that those systems are not creative.”
That, for now, creates additional value for the employment of actual human beings with actual expertise. You need to hire humans to create the material models are trained on, and you need editors to fix the numerous problems undercooked AI creates. The homogenized blandness of the resulting simulacrum also, for now, likely puts a premium on thinkers and writers who actually have something original to say.
The problem remains that while the underlying technology will continuously improve, the folks rushing to implement it without thinking likely won’t. Most seem dead set on using AI primarily as a bludgeon against labor, in the hopes that the public won’t notice the drop in quality, and that professional writers, editors, and creatives won’t mind increasingly lower pay and an ever more tenuous position in the food chain.