It’s Good That AI Tech Bros Are Thinking About What Could Go Wrong In The Distant Future, But Why Don’t They Talk About What’s Already Gone Wrong Today?

from the from-techlash-to-ailash dept

Just recently we had Annalee Newitz and Charlie Jane Anders on the Techdirt podcast to discuss their very own podcast mini-series “Silicon Valley v. Science Fiction.” Some of that discussion was about this spreading view in Silicon Valley, often oddly coming from AI’s biggest boosters, that AI is an existential threat to the world and that we need to stop it.

Charlie Jane and Annalee make some really great points about why this view should be taken with a grain of salt, suggesting the “out of control AI that destroys the world” scenario seems about as likely as other science fiction tropes around monsters coming down from the sky to destroy civilization.

The timing of that conversation turned out to be somewhat prophetic, I guess: over the following couple of weeks there was an explosion of public pronouncements from the AI doom and gloom set, and the very ideas we had discussed percolating around Silicon Valley suddenly became a front page story.

In our discussion, I pointed out that the AI doom and gloomers do at least represent a change from the past, when we famously lived in the “move fast and break things” world, where the idea of thinking through the consequences of new technologies was considered quaint at best, and actively harmful at worst.

But, as the podcast guests noted, the whole discussion seems like a distraction. First, there are actual real-world problems today with black box algorithms doing things like enhancing criminal sentences based on unknown inputs, or determining whether or not you’ve got a good social credit score in some countries.

There are tremendous, legitimate issues with black box algorithms that could be addressed today, but none of the doom and gloomers seem all that interested in solving any of them.

Second, the doom and gloom scenarios all seem… static? I mean, sure, they all say that no one knows exactly how things will go wrong, and that’s part of the reason they’re urging caution. But they also all seem to go back to Nick Bostrom’s paperclip thought experiment, as if that story has any relevance at all to the real world.

Third, many people are now noticing and calling out that much of the doom and gloom seems to be the same sort of “be scared… but we’re selling the solution” ghost story we’ve seen in other industries.

So, it’s good to see serious pushback on that narrative as well.

A bunch of other AI researchers and ethicists hit back with a response letter that makes some of the points I made above, though much more concretely:

While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as “Stochastic Parrots”), such as “provenance and watermarking systems to help distinguish real from synthetic” media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined “powerful digital minds” with “human-competitive intelligence.” Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.

Others are speaking up about it as well:

“It’s essentially misdirection: bringing everyone’s attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a Research Scientist and Climate Lead at Hugging Face, told Motherboard. 

Arvind Narayanan, an Associate Professor of Computer Science at Princeton, echoed that the open letter was full of AI hype that “makes it harder to tackle real, occurring AI harms.” 

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the open letter asks. 

Narayanan said these questions are “nonsense” and “ridiculous.” The very far-out questions of whether computers will replace humans and take over human civilization are part of a longtermist mindset that distracts us from current issues. After all, AI is already being integrated into people’s jobs and reducing the need for certain occupations, without being a “nonhuman mind” that will make us “obsolete.” 

“I think these are valid long-term concerns, but they’ve been repeatedly strategically deployed to divert attention from present harms—including very real information security and safety risks!” Narayanan tweeted. “Addressing security risks will require collaboration and cooperation. Unfortunately the hype in this letter—the exaggeration of capabilities and existential risk—is likely to lead to models being locked down even more, making it harder to address risks.” 

In some ways, this reminds me of some of the privacy debate. After things like the Cambridge Analytica mess, there were all sorts of calls to “do something” regarding user privacy. But so many of the goals focused on actually handing more control over to the big companies that were the problem in the first place, rather than moving the control of the data to the end user.

That is, our response to privacy leaks and messes from the likes of Facebook… was to tell Facebook “hey, why don’t you control more of our data, and just be better about it,” rather than pursue the actual solution of giving users control over their own data.

So, similarly, here, it seems that these discussions about the “scary” risks of AI are all about regulating the space in a manner that just hands the tools over to a small group of elite “trustworthy” AI titans, who talk up the worries and fears of what might happen if the riff raff should ever be able to create their own AI. It’s the Facebook situation all over again, where their own fuckups lead to calls for regulation that just give them much greater power, and leave everyone else with less power and control.

The AI landscape is a little different, but there’s a clear pattern here. The AI doom and gloom doesn’t appear to be about fixing existing problems with black box algorithms, but rather about setting up regulations that hand the space over to a few elite and powerful folks who promise that they, unlike the riff raff, have humanity’s best interests in mind.



Comments on “It’s Good That AI Tech Bros Are Thinking About What Could Go Wrong In The Distant Future, But Why Don’t They Talk About What’s Already Gone Wrong Today?”

13 Comments
Anonymous Coward says:

Thanks, Mike, for a dose of common sense. The harms that are likely to arise from AI already exist in many forms, e.g., algorithms that are used in the criminal justice system and for which there are no review processes or appeals. When Google et al began collecting massive quantities of personally-identifiable data, there were a few voices in the wilderness, all of which were ignored. Nothing useful can be done about any of these concerns because Congress has become dysfunctional and incapable of paying attention to an issue of this complexity for long enough to produce more than a sound bite. There’s also some question whether more than a couple of Congresspeople have the intellectual capacity to even understand the potential issues.

Gerry the lizardperson says:

I disagree

The article talks about the concerns as long-term risks and contrasts them with harmful algorithms today. However, the “doomers” do not believe these risks to be long-term. Instead, they think AI systems may become very powerful indeed in just a few years, and without warning. So they have to start organising now to prevent doom in 10-20 years. That’s not long-term.

Yes, there are harms being done right now. However, both may be true at the same time, and both worth contemplating: harm now and existential threat soon.

Also, shouldn’t it give us pause that the people most knowledgeable about these systems, the “tech bros” building them, are concerned about the existential risks? Usually they hype; now they are (privately) concerned. Is there a legitimate reason here?

Finally, this article doesn’t attempt to explain why these “doomers” expect doom. That’s very convenient: there is no need to address their arguments or explain why doom would be improbable. Instead, the concerns are ridiculed. I feel the issue is indeed important, and the article makes no attempt at grappling with it. It just pokes fun at other people. That, I believe, is beneath Techdirt’s standards.

Gerry the lizardman says:

Re:

Just to clarify: maybe that thoughtful discussion was in the podcast. The gist of the article, however, doesn’t feel like genuine thought was given to it. Maybe one of my two issues was the headline “… What Could Go Wrong In The Distant Future,” when the concerns are not exactly distant anymore. It’s like talking about climate change the way people did in the 90s (“one day earth could…” instead of today’s “oh fuck, things have changed quite a bit already, haven’t they?”).

Nick Blood says:

I’m just going to reiterate what others have essentially said because it’s important: this is a self-deceivingly flippant and shallow analysis–with no attempt made to justify your own confidence in your position. I think there’s no way you’ve actually read Bostrom’s book or understood what AI critics and “doomsayers” are actually talking about.

Anonymous Coward says:

Elon Musk

“The whole discussion seems like a distraction” – yes, it seems like that is their main goal.

For some, it’s slowing down competitors while they try to catch up to the technology:

Elon Musk is moving forward with a new generative AI project at Twitter after purchasing thousands of GPUs

https://www.businessinsider.com/elon-musk-twitter-investment-generative-ai-project-2023-4

Benjamin Jay Barber says:

“No, look, if we have a choice of either our people go to prison, or we comply with the laws, we’ll comply with the laws. Same goes for the BBC,” Musk responds.

Clayton does not ask Musk for a clarification but moves on to a question that led Musk to say that he was not the CEO of Twitter, but that his dog was.

In February, a month after the release of the Modi documentary, Income Tax officials carried out surveys in two BBC offices at New Delhi and Mumbai, allegedly to investigate issues related to international taxation and transfer pricing of BBC subsidiary companies.

Press bodies in India and abroad condemned the raid. The UK government said it was closely monitoring the situation and BBC itself said it was prepared to fully comply.

https://thewire.in/tech/elon-musk-bbc-interview-india-it-rules
