Filing Bogus Copyright & Patent Claims On Your Not-Actually-Sentient AI Is Not A Good Way To Market Your Not-Actually-Sentient AI
from the it's-alive!-though-not-really dept
We’ve covered the quixotic campaign of Stephen Thaler, who has filed lawsuits around the world arguing that he deserves copyrights and patents on writings and inventions created by DABUS, which Thaler claims is an AI that he created. He has lost nearly every one of these cases, often embarrassingly, including one just a few weeks ago.
Wired’s Will Bedingfield has an amazing article (disclaimer: I spoke to Will while he was researching the article, and am quoted briefly), in which he interviews Thaler. What’s incredible is that Thaler more or less admits he doesn’t actually care about the copyrights and patents, but really sees this as more of a marketing campaign, and a chance to claim that DABUS is sentient (note: it is not).
DABUS has been around a lot longer than the lawsuits. Thaler describes it as an evolving system “at least 30 years in the making.” He has, he says over email, “created the most capable AI paradigm in the world, and through its sentience it is driven to invent and create.” Throughout our conversation, he seems exasperated that journalists have tended to focus on the legal aspects of his cases.
Organizations with “deep pockets” and a goal of “world conquest,” like Google, have kept debates focused on their machines, he says. The copyright and patent suits are one avenue to publicize DABUS’s sentience, as well as to provoke the public into thinking about the rights of this new species. “It’s basically Perry Mason versus Albert Einstein. Which do you want to read about?” Thaler says, arguing that people might be captivated by the courtroom dramas of a fictional lawyer, but they should care about the science.
“The real story is DABUS. And I’m proud to be part of Abbott’s efforts. He’s a sharp guy, and I think it’s a good cause,” he says. “But let’s think about the situation when it first materialized. Here I am building a system capable of sentience and consciousness, and he gave me the opportunity to tell the world about it.”
So, first of all, this suggests a pretty obnoxious abuse of the judicial system. Second, if you don’t want journalists to focus on the legal aspects, maybe (just a suggestion here) don’t file these highly questionable claims.
The article notes that the real villain here is not Thaler, who seems like a very naïve inventor, but a British law professor, Ryan Abbott, who convinced Thaler to make all these patent and copyright attempts, and is representing him pro bono.
Abbott has known Thaler for years, and when, in 2018, he decided to set up his Artificial Inventor Project—a group of intellectual property lawyers and an AI scientist working on IP rights for AI-generated “outputs”—he reached out to the inventor and asked him if he could help. Thaler agreed and directed DABUS to create two inventions. Abbott had the basis of his first case.
Abbott… seems to have a ridiculous understanding of intellectual property law, such that I feel bad for any students who learn about the law from him.
Abbott’s contention is that machine inventions should be protected to incentivize people to use AI for social good. It shouldn’t matter, he says, whether a drug company asked a group of scientists or a group of supercomputers to formulate a vaccine for a new pathogen: The result should still be patentable, because society needs people to use AI to create beneficial inventions. Old patent law, he says, is ill-equipped to deal with changing definitions of intelligence. “In the US, inventors are defined as individuals, and we argued there was no reason that was restricted to a natural person,” he says.
This is extremely confused on multiple levels. Yes, the purpose of patent and copyright law is to create incentives for the inventor or author, but it does so by giving them a limited time monopoly by which they can profit. But, AI machines don’t need to profit. And the argument that giving out patents and copyrights will somehow “incentivize people to use AI for social good” makes no sense. That’s not how any of this works. And, indeed, if you lock up the works and inventions, you limit the social good by making it so others are unable to make use of them.
There’s also this weird bit in which Abbott keeps insisting that DABUS’ creations are at the direction of Thaler, but Thaler says Abbott is wrong, and that misunderstanding is… kind of a big deal if Abbott is running around to various copyright and patent boards, and in various courts around the world, misrepresenting the reality of the situation that his client is in:
Abbott says the coverage of the cases—influenced by the district court’s vagueness—has been quite confused, with a misguided focus on DABUS’s autonomy. He emphasizes that he is not arguing that an AI should own a copyright: 3D printers—or scientists employed by a multinational, for that matter—create things, but don’t own them. He sees no legal difference between Thaler’s machine and someone asking Midjourney to “make me a picture of a squirrel on a bicycle.”
“The autonomous statement was that the machine was executing the traditional elements of authorship, not that it crawled out of a primordial ooze, plugged itself in, paid a ton of utility bills and dropped out of college to do art,” he says. “And that is the case with any number of commonly used generative AI systems now: The machine is autonomously automating the traditional elements of authorship.”
Thaler directly contradicts Abbott here. He says that DABUS is not taking any human input; it’s totally autonomous. “So I probably disagree with Abbott a little bit about bringing in all these AI tools, you know, text to image and so forth, where you’ve got a human being that is dictating and is hands on with the tool,” he says. “My stuff just sits and contemplates and contemplates and comes up with new revelations that can be, you know, along any sensory channel.”
Anyway, the article is really fascinating, and there are a bunch of good quotes in there from law professor Matthew Sag, who is always good on this topic, including: “The bottom line is that we don’t need AI inventors to patent the outcomes of emergent processes.” And also: “I don’t even really know where to begin, other than to say, if there is a sentient AI on the planet currently, it’s definitely not this.”
Still, the real story here seems to be about a very confused British academic lawyer, who doesn’t understand how patents and copyrights work to incentivize things (or when they limit creativity and innovation), and an equally confused inventor who seems to think that filing bogus lawsuits he doesn’t even really agree with is a good way to tell the world about his AI, which he claims is sentient, even though it’s not.