Sam Altman Wants The Government To Build Him A Moat
For my final post of last year, I wrote about the many reasons to be optimistic about a better future, one of which was that we were seeing the crumbling of some large, bureaucratic (enshittified) companies, and new competitive upstarts pushing the boundaries. One of those areas was artificial intelligence. As I noted in that piece, a few years ago, if you spoke to anyone about AI, the widespread assumption was that only four companies even had a chance to lead the AI revolution, as (we were told) it required so much data and so much computing power that only Google, Meta, Amazon, or Microsoft could possibly compete.
But, by the end of last year, we were already seeing that that wasn’t true: there were a bunch of new entrants, many of whom appeared to be doing a better job than the “big tech” players when it came to AI, and many of whom were offering their models as open source.
A few weeks back, an internal Google memo made this point even clearer by noting that none of the big companies had any real sustainable competitive advantage in AI, and that the open source players were doing much, much better:
Luke Sernau, a senior Google engineer, made that clear when he referenced one of Buffett’s most famous theories—the economic moat—in an internal document released Thursday by the consulting firm SemiAnalysis, titled “We have no moat. And neither does OpenAI.” In the document, which was published within Google in early April, Sernau claimed that the company is losing its artificial intelligence edge, not to the flashy, Microsoft-backed OpenAI—whose ChatGPT has become a huge hit since its release last November—but to open-source platforms like Meta’s LLaMa, a large language model that was leaked to the public in February.
“We’ve done a lot of looking over our shoulders at OpenAI… But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch,” he wrote. “I’m talking, of course, about open source. Plainly put, they are lapping us.”
Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not on the harms that are legitimately happening today because of screwed-up algorithms.
And that takes us to last week’s testimony before Congress by OpenAI’s Sam Altman. Sam is very smart and very thoughtful, though it’s not clear to me that he fully recognizes the policy implications of what he’s talking about, and that came across in his testimony.
But, of course, the most notable takeaway from the hearing was that the “industry” representatives appeared to call for Congress to regulate them. Senators pretended this was surprising, even though it’s actually pretty common:
Senator Dick Durbin, of Illinois, called the hearing “historic,” because he could not recall having executives come before lawmakers and “plead” with them to regulate their products—but this was not, in fact, the first time that a tech C.E.O. had sat in a congressional hearing room and called for more regulation. Most notably, in 2018, in the wake of the Cambridge Analytica scandal—when Facebook gave the Trump-aligned political-consultancy firm access to the personal information of nearly ninety million users, without their knowledge—the C.E.O. of Facebook, Mark Zuckerberg, told some of the same senators that he was open to more government oversight, a position he reiterated the next year, writing in the Washington Post, “I believe we need a more active role for governments and regulators.”
And, of course, various cryptocurrency companies have called for regulations as well. Indeed, it’s actually kind of typical: when companies get big enough and start to fear newer upstart competition, they’re frequently quite receptive to regulation. They may make some superficial moves to look like they’re worried about new rules, but that’s generally for show, and to make lawmakers feel more powerful than they really are. Established companies often want those regulations in order to lock themselves in as the dominant players, and to saddle smaller competitors with impossible-to-meet compliance costs.
When looked at this way, and in combination with the Google memo about the lack of “moats,” it’s not hard to read last week’s testimony as Altman’s call for Congress to create a moat that protects his company from open source upstarts. Of course, he would never admit that publicly, and instead he can frame it as preventing “bad” actors from making nefarious use of the technology. But, it is self-serving all the same. And that seems pretty obvious to many observers (though it’s not clear if Congress recognizes this):
Figuring out how to assess harm or determine liability may be just as tricky as figuring out how to regulate a technology that is moving so fast that it is inadvertently breaking everything in its path. Altman, in his testimony, floated the idea of Congress creating a new government agency tasked with licensing what he called “powerful” A.I. models (though it is not clear how that word would be defined in practice). Although this is not, on its face, a bad idea, it has the potential to be a self-serving one. As Clem Delangue, the C.E.O. of the A.I. startup Hugging Face, tweeted, “Requiring a license to train models would . . . further concentrate power in the hands of a few.” In the case of OpenAI, which has been able to develop its large language models without government oversight or other regulatory encumbrances, it would put the company well ahead of its competitors, and solidify its first-past-the-post position, while constraining newer entrants to the field.
Were this to happen, it would not only give companies such as OpenAI and Microsoft (which uses GPT-4 in a number of its products, including its Bing search engine) an economic advantage but could further erode the free flow of information and ideas. Gary Marcus, the professor and A.I. entrepreneur, told the senators that “there is a real risk of a kind of technocracy combined with oligarchy, where a small number of companies influence people’s beliefs” and “do that with data that we don’t even know about.” He was referring to the fact that OpenAI and other companies have kept secret what data their large language models have been trained on, making it impossible to determine their inherent biases or to truly assess their safety.
That’s not to say there should be no consideration of what might go wrong, or that there should be no rules at all. But it does mean we should look a little skeptically at the latest round of tech CEOs begging Congress to regulate them, rather than assuming that their intentions and motives are to benefit humanity.
It’s more likely they really just want Congress to build them a moat.
Filed Under: ai, artificial intelligence, competition, congress, luke sernau, moats, regulation, sam altman
Companies: google, openai