
Sam Altman Wants The Government To Build Him A Moat

For my final post of last year, I wrote about the many reasons to be optimistic about a better future, one of which was that we were seeing the crumbling of some large, bureaucratic (enshittified) companies, and new competitive upstarts pushing the boundaries. One of those areas was in the artificial intelligence space. As I noted in that piece, a few years ago, if you spoke to anyone about AI, the widespread assumption was that there were only four companies who could possibly even have a chance to lead the AI revolution, as (we were told) it required so much data, and so much computing power, that only Google, Meta, Amazon or Microsoft could possibly compete.

But, by the end of last year, we were already seeing that this wasn’t true: a bunch of new entrants had arrived, many of which appeared to be doing a better job than the “big tech” players when it came to AI, and many of them were releasing their models as open source.
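To make concrete just how low that barrier has become, here’s a minimal sketch (my illustration, not anything from the original reporting) of running an openly available language model locally. It assumes the Hugging Face transformers library is installed, and uses the small, openly licensed GPT-2 purely as a stand-in for the larger open models being discussed:

```python
# A hypothetical illustration of the "no moat" point: anyone with a laptop
# can download and run an openly available language model in a few lines.
# Assumes `pip install transformers torch`; the model choice is arbitrary.
from transformers import pipeline

# Fetch a small, openly licensed model from the Hugging Face Hub.
generator = pipeline("text-generation", model="gpt2")

# Generate text entirely on local hardware; no API key or gatekeeper required.
result = generator("Open source models mean that", max_new_tokens=30)
print(result[0]["generated_text"])
```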

A few weeks back, an internal Google memo made this point even clearer by noting that none of the big companies had any real sustainable competitive advantage in AI, and that the open source players were doing much, much better:

Luke Sernau, a senior Google engineer, made that clear when he referenced one of Buffett’s most famous theories—the economic moat—in an internal document released Thursday by the consulting firm SemiAnalysis, titled “We have no moat. And neither does OpenAI.” In the document, which was published within Google in early April, Sernau claimed that the company is losing its artificial intelligence edge, not to the flashy, Microsoft-backed OpenAI—whose ChatGPT has become a huge hit since its release last November—but to open-source platforms like Meta’s LLaMa, a large language model that was leaked to the public in February.

“We’ve done a lot of looking over our shoulders at OpenAI… But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch,” he wrote. “I’m talking, of course, about open source. Plainly put, they are lapping us.”

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, rather than on the harms that screwed-up algorithms are actually causing today.

And that takes us to last week’s testimony before Congress by OpenAI’s Sam Altman. Sam is very smart and very thoughtful, though it’s not clear to me he fully recognizes the policy implications of what he’s talking about, and that came across in his testimony.

But, of course, the most notable takeaway from the hearing was that the “industry” representatives appeared to call for Congress to regulate them. Senators pretended this was surprising, even though it’s actually pretty common:

Senator Dick Durbin, of Illinois, called the hearing “historic,” because he could not recall having executives come before lawmakers and “plead” with them to regulate their products—but this was not, in fact, the first time that a tech C.E.O. had sat in a congressional hearing room and called for more regulation. Most notably, in 2018, in the wake of the Cambridge Analytica scandal—when Facebook gave the Trump-aligned political-consultancy firm access to the personal information of nearly ninety million users, without their knowledge—the C.E.O. of Facebook, Mark Zuckerberg, told some of the same senators that he was open to more government oversight, a position he reiterated the next year, writing in the Washington Post, “I believe we need a more active role for governments and regulators.”

And, of course, various cryptocurrency companies have called for regulations as well. Indeed, it’s actually kind of typical: when companies get big enough and fear newer upstart competition, they’re frequently quite receptive to regulations. They may make some superficial moves to look like they’re worried about those rules, but that’s generally for show, and to make lawmakers feel more powerful than they really are. But established companies often want those regulations in order to lock themselves in as the dominant players, and to saddle the smaller companies with impossible-to-meet compliance costs.

Looked at this way, and in combination with the Google memo about the lack of “moats,” it’s not hard to read last week’s testimony as Altman’s call for Congress to create a moat that protects his company from open source upstarts. Of course, he would never admit that publicly; instead, he can frame it as preventing “bad” actors from making nefarious use of the technology. But it is self-serving all the same. And that seems pretty obvious to many observers (though it’s not clear if Congress recognizes this):

Figuring out how to assess harm or determine liability may be just as tricky as figuring out how to regulate a technology that is moving so fast that it is inadvertently breaking everything in its path. Altman, in his testimony, floated the idea of Congress creating a new government agency tasked with licensing what he called “powerful” A.I. models (though it is not clear how that word would be defined in practice). Although this is not, on its face, a bad idea, it has the potential to be a self-serving one. As Clem Delangue, the C.E.O. of the A.I. startup Hugging Face, tweeted, “Requiring a license to train models would . . . further concentrate power in the hands of a few.” In the case of OpenAI, which has been able to develop its large language models without government oversight or other regulatory encumbrances, it would put the company well ahead of its competitors, and solidify its first-past-the-post position, while constraining newer entrants to the field.

Were this to happen, it would not only give companies such as OpenAI and Microsoft (which uses GPT-4 in a number of its products, including its Bing search engine) an economic advantage but could further erode the free flow of information and ideas. Gary Marcus, the professor and A.I. entrepreneur, told the senators that “there is a real risk of a kind of technocracy combined with oligarchy, where a small number of companies influence people’s beliefs” and “do that with data that we don’t even know about.” He was referring to the fact that OpenAI and other companies have kept secret what data their large language models have been trained on, making it impossible to determine their inherent biases or to truly assess their safety.

That’s not to say that there should be no consideration of what might go wrong, or that there should be no rules at all. But it does mean that we should look a little skeptically at the latest round of tech CEOs begging Congress to regulate them, rather than assume that their intentions and motives are to benefit humanity.

It’s more likely they really just want Congress to build them a moat.

Companies: google, openai


Comments on “Sam Altman Wants The Government To Build Him A Moat”

13 Comments
Anonymous Coward says:

It should be entertaining to watch the technological Luddites in Congress attempt to even define AI, much less develop any regulations that make any sense. Better they amuse themselves by tangling with this ball of string rather than dealing with any of the serious issues facing this country, which they would simply make worse.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Altman, in his testimony, floated the idea of Congress creating a new government agency tasked with licensing what he called “powerful” A.I. models (though it is not clear how that word would be defined in practice).

I’m not sure the First Amendment would allow the regulation.

It’s software. Telling me I am not allowed to write a “powerful machine learning model” is prior restraint. I can’t see how this would fall into any of the few exceptions for prior restraint, and thus it is most likely unconstitutional.

While the legality of this is likely untested, it seems unlikely that a court would find it constitutional to bar me from running (i.e., training) the software I wrote. That seems like it would likely be prior restraint as well. After all, there will be data outputs, which can be considered speech.

I could see there maybe being some bans on specific usages of ML. However, a blanket ban (or licensing requirement) on “powerful models” seems like it would run face first into multiple constitutional issues.

Anonymous Coward says:

Re: Re:

Then clarify for me a few things, BDAC or Jhon.

Firstly, assuming the work is original, how are artists, musicians, cooks, and content creators of all stripes affected when copyright holders sue these creators for simply naming their inspirations? I’ve read enough of Techdirt to have a general understanding, so do explain to me how creators are harmed when the holders ask for their pound of flesh.

Secondly, considering that AI companies have started up “non-profits” to hoover up datasets for their for-profit entities, do explain to me how a big, rich rightsholder, let’s say Disney, which has access to the best lawyers money can buy AND a war chest large enough to drag a case on for DECADES if need be, would harm these AI companies?

Thirdly, most of these AI companies are run by the same NFT grifters people chased out of the fucking Internet. Why should we trust these assholes?

Bloof (profile) says:

It’s worth keeping in mind that Sam Altman is quietly working on Worldcoin at the same time as the AI he’s pushing: he’s talking about giving a UBI to people in a currency whose supply he controls beyond the reach of any government, and he is currently looting the developing world for biometric data. AI absolutely needs to be regulated before any player becomes a monolith, able to dictate regulation and crush startups, but any suggestion that comes from someone like Altman, who views himself as a future benevolent dictator, should be viewed with a mountain of suspicion.

Anonymous Coward says:

Re:

It’s worth keeping in mind that Sam Altman is quietly working on Worldcoin at the same time as the AI he’s pushing: he’s talking about giving a UBI to people in a currency whose supply he controls beyond the reach of any government, and he is currently looting the developing world for biometric data.

Holy shit.

I knew corps would try to push for corp scrip as a way to deny true UBI, but crypto as the corp scrip?

That is so out of left field and utterly vile…

Bloof (profile) says:

Re: Re:

He was a big supporter of Andrew Yang and his libertarian Trojan horse version of UBI, where people would be given a thousand bucks in exchange for giving up their access to the social safety net, which would make it easier to destroy services that the payment wouldn’t come close to covering.

He’s like SBF: he smiles and says the right things, but the more you learn about him, the more inclined you are to say f* that guy.

Anonymous Coward says:

Crypto doesn't want a moat

Crypto companies aren’t calling for regulation because they want to be protected from upstart competition: they’re demanding regulation (or more accurately “regulatory clarity”) so they can claim that because the regulation is “new,” they haven’t really been knowingly, intentionally, and obviously ignoring existing securities law and defrauding the public for years. And so maybe their founders can stay out of prison, and Shaq can stop dodging process servers.
