Copyright Liability On LLMs Should Mostly Fall On The Prompter, Not The Service

from the who-is-responsible? dept

The technological marvel of large language models (LLMs) like ChatGPT has posed a unique challenge in the realm of copyright law. These advanced AI systems, which undergo extensive training on diverse datasets, including copyrighted material, and provide output highly dependent on user “prompts,” have raised questions about the bounds of fair use and the responsibilities of both the AI developers and users.

Building upon the Sony Doctrine, which protects dual use technologies with substantial non-infringing uses, I propose the TAO (“Training And Output”) Doctrine for AI LLMs like ChatGPT, Claude, and Bard. This AI Doctrine recognizes that if a good-faith AI LLM engine is trained using copyrighted works, where (1) the original work is not replicated but rather used to develop an understanding, and (2) the outputs generated are based on user prompts, the responsibility for any potential copyright infringement should lie with the user, not the AI system. This approach acknowledges the “dual-use nature” of AI technologies and emphasizes the crucial role of user intent and inputs such as prompts and URLs in determining the nature of the output and any downstream usage.

Understanding LLMs and Their Training Mechanism

LLMs operate by analyzing and synthesizing vast amounts of text data. Their ability to generate responses, write creatively, and even develop code stems from this training. However, unlike traditional methods of copying, LLMs like ChatGPT engage in a complex process of learning and generating new content based on patterns and structures learned from their training data. This process is akin to a person learning a language through various sources but then using that language independently to create new sentences. AI LLMs are important for the advancement of society as they are “idea engines” that allow for the efficient processing and sharing of ideas.
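To make the training point concrete, here is a toy sketch of what “statistically modeling language patterns” can mean. It is illustrative only: the corpus string is made up, and production LLMs learn billions of neural-network weights over tokens rather than word counts, but the underlying intuition is similar.

```python
from collections import Counter, defaultdict

# Toy sketch: record which word tends to follow which, rather than
# storing the source text itself. What is retained are pattern
# statistics, not the work.
corpus = "the cat sat on the mat and the cat slept"

follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

print(follows["the"])  # Counter({'cat': 2, 'mat': 1})
```

The resulting statistics can generate new word sequences without containing a retrievable copy of the training text, which is the intuition behind the non-replication argument above.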

Copyright law does not protect facts, ideas, procedures, processes, systems, methods of operation, concepts, principles, or discoveries, even if they are expressed in copyrighted works. This principle implies that the syntactical, structural, and linguistic elements extracted during the training of LLMs fall outside the scope of copyright protection. The use of texts to train LLMs primarily involves analyzing these non-copyrightable elements to understand and statistically model language patterns.

The training of LLMs aligns with the principles of fair use, as it involves a historically important transformative process that extends beyond the mere replication of copyrighted texts. It harnesses the non-copyrightable elements of language to create something new and valuable, without diminishing the market value of the original works. LLM technology has brought society into the age of idea processors. Under the totality of the circumstances, the use of texts to train LLMs can be considered fair use under current copyright law.

The Proposed Sony Doctrine for AI LLMs or the “Training and Output” (“TAO”) Doctrine

The training of AI large language models (LLMs) on copyrighted works, and their subsequent outputs from user prompts, presents a compelling case for being recognized as a form of dual-use technology. This recognition could be encapsulated in what might be termed the “AI Training and Output” (“TAO”) Doctrine, protecting developers from copyright infringement liability. Drawing parallels from the Sony Doctrine, which protected the manufacture of dual-use technologies like the VCR under the premise that they are capable of substantial non-infringing uses, the AI TAO Doctrine could safeguard AI development and stem a flood of litigation.

LLMs, like the VCR, have a dual-use nature. They are capable of transient and modest infringing activities when prompted or used inappropriately by users, but more significantly, they possess a vast potential for beneficial, non-infringing uses such as educational enrichment, idea enhancements, and advances in language processing. The essence of the AI TAO Doctrine would center on this dual-use characteristic, emphasizing the substantial, legitimate applications of AI that far outweigh potential abuses.

Protecting developers of LLM training and automated output under such a doctrine aligns with fostering innovation and technological advancement while recognizing the need for responsible use. The AI TAO Doctrine would not fully absolve good faith AI developers from implementing robust safeguards against copyright infringement but would acknowledge the inherent dual-use nature of AI technologies, thereby promoting a balanced approach to copyright considerations in AI development.

User Responsibility and AI Outputs

Users play a pivotal role in how LLMs are utilized. Now, consider the user “prompt,” the user’s recipe instruction. An LLM presented with a prompt regarding a copyrighted article can cook up fair or foul outputs. A thoughtful “summarize and critique” prompt extracting key points and offering analysis falls squarely under fair use. It’s like taking notes and forming opinions after reading a book – a crucial aspect of learning and criticism. The LLM used this way is an excellent “idea processor.”

However, a prompt demanding the LLM “regurgitate the entire article word-for-word” may cross the line. Did the team of “users” involved in the NY Times v. OpenAI complaint go too far and game the technology, or act with unclean hands, when they formulated aggressive prompts? Should the law tolerate such transient and rare gaming in favor of the greater good that LLMs have to offer? As that case moves forward, we will likely learn a lot about the process through which the eye-catching results were generated, and AI LLMs will likely continue to be tuned to resist similar copyright “gaming” incidents.

But there are still user contexts where such verbatim output and subsequent usage can constitute fair use. Some degree of copyright “gaming” may need to be tolerated in order for fair use and free speech to flourish. For an ironic example, an LLM-assisted legal analysis of the NY Times v. OpenAI complaint, by its nature, should include the fair use of all parts of the complaint, including the NY Times’ “copyrighted” articles embedded in that court document. Wholesale “blocking” of such article text in LLM output is likely overbroad, especially for legal scholars.

Copyright infringement involving LLM output is also highly dependent on how the user uses the content off-site or downstream. Users like students, news organizations, and lawyers using the same outputs may each face a different copyright infringement analysis for their off-site use cases. One user’s fair use is another user’s infringement. If a user prompts an LLM to generate content that is then used in a manner that infringes copyright, the responsibility should lie with the user, not the LLM.

Here’s the crux: the LLM itself can’t predict the user’s intentions. It simply processes patterns based on prompts. The LLM learning machine and idea processor shouldn’t be stifled due to potential user misuse. Instead, in the rare circumstances when there is a legitimate copyright infringement, users ought to be held accountable for their prompts and subsequent usage, while AI LLM “dual use technology” developers receive the non-infringing status of the VCR manufacturer under the Sony Doctrine.

This approach would benefit from developing privacy-oriented AI systems that can flag potentially infringing uses and guide users toward responsible usage, thereby fostering a culture of awareness and accountability in the digital domain. Care must be taken not to intrude on user privacy: analyzing and storing private prompts and outputs can reveal the most sensitive information about a person, from health care concerns to trade secrets.

Ironically, AI LLMs retaining the original copyrighted works, under the fair use doctrine, to hash portions and to bolster fingerprint technologies can help create better copyright infringement filtering, feedback, and alert systems. Balancing copyright risk with innovation requires a holistic approach by all the stakeholders. The stakes are high: with too much friction, other countries with “technology first” policies will take the lead in global AI.
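As a rough illustration of the fingerprinting idea, consider hashing overlapping word windows (“shingles”) of registered works and checking candidate output against those digests. This is a minimal sketch, not any vendor’s actual filter; the shingle size and threshold are arbitrary assumptions.

```python
import hashlib

def shingle_fingerprints(text: str, n: int = 8) -> set[str]:
    """Hash every n-word window so only digests, not the text, are kept."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
        for i in range(max(len(words) - n + 1, 1))
    }

# Index built from works a rightsholder registers for filtering.
protected_index = shingle_fingerprints("full text of a registered article ...")

def looks_like_verbatim_copy(output: str, threshold: float = 0.5) -> bool:
    """Flag output whose shingles overlap heavily with the protected index."""
    probes = shingle_fingerprints(output)
    overlap = len(probes & protected_index) / max(len(probes), 1)
    return overlap >= threshold
```

A check like this only catches near-verbatim reproduction; paraphrase, and the fair use questions discussed above, remain judgment calls for humans and courts.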

The training of LLMs on copyrighted material, under the umbrella of fair use and subsequent outputs in response to user prompts within the proposed AI TAO Doctrine, presents a balanced approach to fostering innovation while respecting copyright laws. This perspective emphasizes the transformative nature of AI training, the importance of user intent in the generation of outputs, and the need for technological tools to assist in responsible usage. Such an approach not only supports the advancement of AI technology but also upholds the principles of intellectual property rights, ensuring a harmonious coexistence of technological innovation and copyright law.

Ira P. Rothken is a leading advisor and legal counsel to companies in the social network, entertainment, internet, cloud services, and videogame industries.

Companies: ny times, openai


Comments on “Copyright Liability On LLMs Should Mostly Fall On The Prompter, Not The Service”

gandoron (profile) says:

Copyright Indemnification

Some services, including Microsoft, are now extending copyright indemnification to users of their enterprise AI/LLM services. There is guidance on metaprompt language to ensure this protection extends to your use of the service.

This should help corporate adoption, but also requires adequate logging by the customer of prompts and completions to prove that guidelines were followed.

https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/
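As a minimal sketch of what such prompt/completion logging might look like (the call_model function is a stand-in for whatever enterprise client is in use, and the file name is made up, not anything Microsoft prescribes):

```python
import json
import time

LOG_PATH = "llm_audit.jsonl"  # hypothetical append-only audit trail

def call_model(prompt: str) -> str:
    """Placeholder for the enterprise LLM client; not a real API."""
    return "...model completion..."

def logged_completion(prompt: str, user_id: str) -> str:
    completion = call_model(prompt)
    # Keep enough context to later show that the vendor's guidelines
    # (e.g., required metaprompt language) were followed.
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "ts": time.time(),
            "user": user_id,
            "prompt": prompt,
            "completion": completion,
        }) + "\n")
    return completion
```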

IP Nerd says:

Reserved copyright reserved rights

I have used AI LLMs extensively to analyze my personal written works. My original works are my preferred data shard, but heavy use of industry jargon makes them difficult for a mass audience to process. AI LLM prompts simplify and lengthen written-works concepts. Society seems to link page count to quality. Here is my question:

My original written works have been written and rewritten multiple times over the last few years. Do AI LLM prompts and email sharing reduce my reserved copyrights for material produced without prompting?

This comment has been deemed insightful by the community.
rich says:

Ok

“These advanced AI systems, which undergo extensive training on diverse datasets, including copyrighted material, and provide output highly dependent on user “prompts,” have raised questions about the bounds of fair use and the responsibilities of both the AI developers and users.”

The same can be said about everyone I went to school with.

Still not sure why so many writers think that AI systems or developers owe them some sort of payoff because an AI was possibly trained with their material. Stephen King has often mentioned how much he has been influenced by other authors, like Tolkien, Orwell, Ellison, and many others. Should Stephen King now have to pay up since his success was built upon training derived from other people’s copyrighted material?

Anonymous Coward says:

Re:

Because the AI does not think. It does not create new ideas. It amalgamates information and writing based on what it has available to it. H.P. Lovecraft translated his fear of the other into the creation of the Cthulhu mythos, which people have found wildly engaging. AI cannot make such a wild leap from the information presented into the creation of an entirely new topic, because the ideas behind the mythos are a translation of already existing ideas into an entirely different narrative field and structure.

Anonymous Coward says:

Re: Re:

AI does not need to be creative; that is the job of the human using it. It can, however, do a lot of the drudge work of choosing which words to use to express an idea, or of filling in the little details to create an image. To use the image example: wanting a flower bed is the creative decision; drawing out all the flowers is the detail drudge work.

PaulT (profile) says:

Re: Re:

Lovecraft is an interesting outlier, since he did create something different with what he took in. The more interesting arguments are things like the 50 Shades series, which was explicitly Twilight fan fiction that diverged from its source only because lawyers told the author to change it. Or the aforementioned Stephen King: he turned out a lot of highly original work, but even he would say that Salem’s Lot was basically Dracula. One of the most influential vampire movies of all time, Nosferatu, was even ordered to be destroyed because it copied so much from Dracula while it was still in copyright. Meanwhile, Disney’s empire is built on imitation of public domain works.

The problem right now isn’t that “AI” copies from other works, the issue is how you define the copying compared to so many artists that have done it before. The AI speeds up a part of the process, but if an author is sensible and doesn’t just copy and paste then it’s really just what people have done for a long time. So, what constitutes the human touch that makes a human making another version of A Christmas Carol protected but an LLM generating it not creative?

Anonymous Coward says:

There are going to be use cases where the user isn’t necessarily prompting the AI, especially as Microsoft continues to implement AI into Windows. If something like Copilot creates liability without being directly prompted to do so, it seems hard to move that liability onto the user, even if liability for directly prompting “go and copy this news article word for word” should fall on the user.

Anonymous Coward says:

Re:

There are going to be use cases where the user isn’t necessarily prompting the AI, especially as Microsoft continues to implement AI into Windows.

Then the courts will inspect whether Microsoft’s algorithms inject prompt pieces such as “copy an article”. Liability can be split across parties. The TAO is about who is responsible for the prompt, and it is compatible with cases involving Microsoft’s meta-prompting algorithms.

Anonymous Coward says:

Re:

Facts cannot be copyrighted. Courts have ruled that entire volumes of such things as historical sports data and telephone numbers can be freely copied, even when their creators have put in a great deal of effort into their compilation; “sweat of the brow” is never a factor in asserting copyright.

If an AI system reproduces a copyrighted work, the liability rests with the owners of the system or whoever populated the system with the copyrighted work that was regurgitated. If the AI system somehow independently reproduced a copyrighted work without having had it introduced into its training, the same liability would inhere to the owners of the system; copyright is a property of the work, not of how the infringer copies it.

Anonymous Coward says:

Re: Re:

Facts cannot be copyrighted. Courts have ruled that entire volumes of such things as historical sports data and telephone numbers can be freely copied, even when their creators have put in a great deal of effort into their compilation; “sweat of the brow” is never a factor in asserting copyright.

You’d think so, but that’s not stopped copyright plaintiffs from relying on “sweat of the brow” as an argument for why they should be allowed to demand compensation. Compensation, they argue, is what motivates sweat of the brow in the first place. It’s literally one of the main arguments in favor of stronger intellectual property.

If the AI system somehow independently reproduced a copyrighted work without having had it introduced into its training, the same liability would inhere to the owners of the system

Are you attempting to make “innocent infringement” a thing? Because that is a very dangerous rabbit hole to dive into. If people can be sued for an infringement they don’t even know exists, corporations are going to get inundated with bullshit requests. Even the RIAA isn’t going to come away unscathed. Anyone dedicated can look for all the times the RIAA and other copyright organizations have used photographs and operating systems they didn’t pay for.

Anonymous Coward says:

Re: Re: Re:2

That we have as a species, at one point, genuinely believed that duct-taping fruit to a surface at specific angles merits legal protection and exorbitant sums of money, so the “performance artist” can tape even more fruit to walls, should have been a sign that the art world and intellectual property have completely gone off the rails of reason and sanity.

Some deep introspection is needed by pro-copyright interests, before their reputation is irreparably tied to that of scam enabling.

Anonymous Coward says:

Re: Re: Re:3

Artists can do whatever they like. Patrons can like whatever they like. Critics can like whatever they like. If you find that some piece of art does not rise to your standards, you are free not to look at it, not to patronize a museum or gallery that displays it, not to read more from critics who like it, and not to buy it. But you should not expect to be able to force your tastes on other people.

And as far as complexity and technique, I like this line from The Collaboration, attributed to Basquiat: Everyone looks at my art and says “I could do that”. But they don’t.

Anonymous Coward says:

Re: Re: Re:4

But you should not expect to be able to force your tastes on other people.

Neither should the people who think that duct taping fruit is art be able to raise the bar for what counts as and what doesn’t count as art, or how much it should cost.

But the truth is that they don’t live in a vacuum, and neither do their decisions. The fine arts scene is an incestuous landscape of prostitutes trying to pass off dungheaps as intellectually provoking and Johns desperately trying to find something to cover their money laundering.

Anonymous Coward says:

I would argue that the user should not be responsible for copyright infringement in code generated by AI (especially AI built specifically for code generation), where the purpose of the tool is to produce output meant to be used as directly as possible, and where you can end up with potentially infringing content without the kind of “gaming the system” you see in the New York Times example.

sabroni says:

"used to develop an understanding"

LLMs don’t “understand” things. They produce plausible-sounding sentences based on other sentences they’ve trained on.
That isn’t the same process that’s going on in a human brain when it understands something.
The article starts with an incorrect premise; it doesn’t really matter where it ends up.

Arianity says:

Building upon the Sony Doctrine, which protects dual use technologies with substantial non-infringing uses

I wonder if the Sony Doctrine would exist if VCRs came preloaded with content? I’m not sure that it would.

This principle implies that the syntactical, structural, and linguistic elements extracted during the training of LLMs fall outside the scope of copyright protection. The use of texts to train LLMs primarily involves analyzing these non-copyrightable elements to understand and statistically model language patterns.

This seems largely correct. However,

It harnesses the non-copyrightable elements of language to create something new and valuable, without diminishing the market value of the original works.

I’m not sure about the argument that it doesn’t diminish the market value of original works. That seems like a stretch. It probably does diminish the market value of the original works, to some degree (not all, and in some cases might enhance it). I think you can make the argument that the benefits to society/new artists outweigh those harms, but it probably is coming at the cost of diminishing the market value of a good chunk of works.

Anonymous Coward says:

Part of the use of these LLMs is to generate stuff like books, right? If I write a book using AI, and part of that text is copied from some other book (without me trying to force it to copy), why would I be responsible for the tool giving me what ended up being legally defective results when using the LLM as intended?

If Google sold me a self driving car without a steering wheel and it crashed when I was using it as intended, Google would also be at fault even if traditionally they wouldn’t have been.

This comment has been deemed insightful by the community.
Strawb (profile) says:

Re:

If I write a book using AI, and part of that text is copied from some other book (without me trying to force it to copy), why would I be responsible for the tool giving me what ended up being legally defective results when using the LLM as intended?

Because you’re the one who used it to generate the text. There’s nothing “legally defective” about it.

Anonymous Coward says:

Liability for Owner of AI Platform

If an AI system reproduces copyrighted work in its output, liability should rest with the owners of the platform or software that is doing so, because they are the ones causing the copyright violation. If a service simply recorded copyrighted material and reproduced it on demand, this would be an open-and-shut case; running it through fancy processing that just results in the same output doesn’t change anything.

Anonymous Coward says:

Re: Re:

A person using a copier to reproduce a work must have the legal right to do so. The owners of an AI system reproducing a work should be required to have the legal right to do so. Why do you believe the two situations are different?

The courts have already ruled that companies rebroadcasting over-the-air TV stations without permission are in violation of copyright, for example.

Anonymous Coward says:

Re: Re: Re:2

The owners of the AI system are the ones making the copies; the user provided a prompt, and the AI system reproduced a copyrighted work. An AI system is not a person, it is a device. When a device makes illegal copies of copyrighted material that its owners have placed within it, the owners are liable.

Strawb (profile) says:

Re: Re: Re:3

The owners of the AI system are the ones making the copies; the user provided a prompt, and the AI system reproduced a copyrighted work.

Even if that actually happened, the user is the one who made it happen. The owners created a tool that could.

When a device makes illegal copies of copyrighted material that its owners have placed within it, the owners are liable.

A user using a tool in a particular way doesn’t make the creators of the tool liable for the crime. By that logic, car companies would be responsible for mass killings, but they’re not. The drivers are.
It should be the same with AI tools.

Anonymous Coward says:

Re: Re: Re:4

When the “tool” contains copyrighted works and delivers them on demand and without compensation to the copyright holders, it is indeed the providers of the tool who are liable. “Will no one rid me of this meddlesome priest?” doesn’t absolve the murderers even when the requester is complicit.

Anonymous Coward says:

Re: Re: Re:5

When the “tool” contains copyrighted works and delivers them on demand

That is not what happened, as that would imply that OpenAI managed to include the exact articles that the NY Times tried to replicate using the AI. Further, it is a waste of model space to include specific articles when the same space could be used for generally useful data. Also, it is likely that the NY Times has been very selective about what they have released, and has conveniently omitted all the trials it took to train the AI to deliver what they wanted.

Anonymous Coward says:

Re: Re: Re:

A person using a copier to reproduce a work must have the legal right to do so.

Except for certain provisions, yes.

The owners of an AI system reproducing a work should be required to have the legal right to do so.

If that’s the case, the owner of the copier should be the one who should have the right to do so. And the office/library/Kinko’s isn’t punished for what their employee/user/customer does if the work is “copyrighted”.

Why do you believe the two situations are different?

Because, you disingenuous asshole, the ones you claim to be at fault are TWO DIFFERENT PEOPLE.

If you want to make your delusions a reality, then, yes, let’s sue car manufacturers for making cars, since people use cars to knock pedestrians down. Let’s sue John Deere for making farm equipment, since their stuff can be used to hurt people (and probably has). Let’s sue ISPs for providing their services to domestic terrorists as well. Banks, too, for having the gall to help organized crime launder their money. Oh, and gun manufacturers, for making and selling guns.

And all the related industries that arose from these companies providing these products and services.

After all, anything and everything can be used to either physically hurt people or infringe copyright if applied creatively…

Anonymous Coward says:

Re: Re: Re:2

So angry, and so wrong.

The owners of the AI system are the ones liable for copyright violation because it’s their system that is producing the copyrighted work based on copyrighted work that they have incorporated. The user of the copy machine is liable for copyright violation because it’s that user who is introducing the copyrighted work into the machine that is reproducing it.

Anonymous Coward says:

Re: Re: Re:6

In this case, the AI system reproduced a copyrighted article exactly. The probability that an AI system could do this without ever having seen the original is vanishingly small; if true, the owners of the system should provide evidence that the system was never trained on the article in question.

Anonymous Coward says:

Re: Re: Re:3

And again, by your logic, the gun manufacturer should also be held liable for all the gun crime and gun-related suicides and homicides that happen every day.

And again, Mike should also be held liable for comments the peanut gallery known as the comment section makes. Right down to the disturbing support for antisemitism, pedophilia, and fascism that the white supremacist threats to society keep pushing for.

Is this what you want?

Anonymous Coward says:

Re: Re: Re:2

If the library/office/Kinko’s knows you are violating copyright, they will stop you from copying, because they can be liable.

Many AIs already have restrictions; they aren’t supposed to give you instructions on how to build a bomb or arguments on why Hitler was a good guy. I wonder if some of these lawsuits are aiming for AIs to build copyright restrictions as well, so if you ask it for copyrighted material it will just say “no”.

PaulT (profile) says:

Re: Re: Re:

“A person using a copier to reproduce a work must have the legal right to do so”

If they have rights to the original work. I could photocopy Shakespeare all I want, but not the JK Rowling novel…

But, there’s a lot in Rowling’s work that is an imitation of what came before. The question is where the line is between Rowling using the fantasy works that preceded her, but an LLM infringing by using the same source.

Anonymous Coward says:

Re: Re: Re:3

The NY Times also enables the web search module before prompting the system, and acts as though looking up their articles is copyright infringement. While they do not present what happened in this fashion, they effectively used the AI to find and repeat their articles and called that copyright infringement. That is not showing that the AI has infringing copies in its database, but rather that it can act as an agent to find things on the web.

This comment has been flagged by the community.

Anonymous Coward says:

Nice try Ira!
Keep shopping your defense for corporate clients when they try to regulate them.

So your big thought piece is that the LLMs make the money but can’t filter out protected intellectual property? Ira, ever heard of YouTube? They’ve been handling that challenge, for better or worse, for years.

You want to go back to the days of Metallica suing some fan on Napster. That worked out well. I don’t have to solve your problem, Ira. You’re the one getting paid to figure this out, but you know you can’t. If the best you’ve got is “LLMs don’t hurt copyright, people do,” go find some other court argument to recycle.

Ira – get very clear where you are right now.
You’re not talking to Napster users from years ago.
LLMs are passing the bar and helping the people you’re trying to throw under the bus.

That post you made is just part of your PR push, which is fine. Whatever makes you happy. Just tell your clients to pay the licensing fees where needed, like everyone else does. Even Facebook paid media companies to have news on their feeds. The ship has sailed. Work it out; don’t write essays about whose fault it is besides your clients’.

Google just laid off 30,000 workers and some of them are attorneys too. What was the last thing they wrote about before they were let go? Something like your stuff?

You’re helping your clients who will do the same to you. Your arguments are too old for this era and you have no idea. Best of luck to you.

gandoron (profile) says:

LLM vs our Brain

I don’t know if we can claim that an LLM doesn’t work the same way our human brain does. An LLM is largely a “next word/token” generator based on vector math internally. However, the output that we get is somewhat coherent.

For our brain, we can only observe something at a much higher level of abstraction and have no idea what is really happening at the synapse level. We know from experiments that many times our brain basically lies to us as it tries to paint a coherent picture of how we perceive the world, from memories to making your vision look consistent, despite a hole in the visual field and only black-and-white perception at the fringes.

It is hard to know whether we are creative or just a more complex amalgamation with some randomness (e.g., an LLM temperature greater than 0).
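For anyone curious what that temperature knob does, here is a minimal sketch of softmax sampling with temperature; the logits are made-up numbers, not from any real model:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Temperature near 0 approaches greedy decoding (always the top token);
    higher values flatten the distribution and feel more 'creative'."""
    scaled = [(tok, logit / temperature) for tok, logit in logits.items()]
    peak = max(s for _, s in scaled)  # subtract the max for numerical stability
    weights = [(tok, math.exp(s - peak)) for tok, s in scaled]
    tokens = [tok for tok, _ in weights]
    return random.choices(tokens, weights=[w for _, w in weights])[0]

# Illustrative logits for candidate next tokens:
print(sample_next_token({"mat": 2.1, "sofa": 1.3, "moon": -0.5}))
```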

terop (profile) says:

Currently, the copyright infringer is by definition the person who “publishes” the copyrighted work without permission from the copyright holder. Thus both the platform vendor and the prompter are in violation if the end results of the prompt are used in some product. But the platform vendor is the initial infringer, and the prompter is only a secondary infringer in case he uses the generated PNG file to build a product of his own.

Anonymous Coward says:

Re:

Thanks for confirming that Meshpage has violated the copyright of Scott Cawthon, even though all you did was download someone else’s 3D model package.

I’ll expect that in line with stricter copyright law you delete Meshpage off the Internet, but we all know you don’t have the honesty to do that.

terop (profile) says:

Re: Re:

I’ll expect that in line with stricter copyright law you delete Meshpage off the Internet, but we all know you don’t have the honesty to do that.

This deletion almost happened, since the storage lobby noticed something wrong with my web site, and the Linux kernel marked the SSD that I was using as read-only and requiring fsck. That worked a few times, but at some point it was inevitable that fsck would fuck the contents of the disk and we’d be free from meshpage.org for good.

But worry not, since I had a working backup, and it just took a few days to purchase 2 new SSDs (from a different vendor) and get enough storage space for meshpage.org. Then, using the GitHub repo and my asset backup, I managed to get meshpage.org up and running again. The only things left to fix are some ERR_CERT_COMMON_NAME_INVALID and CORS problems, but those were kinda interesting problems after changing hard disks. Other than that, it’s just a matter of fixing file system permissions; tar.gz files by default failed to restore the permissions.
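For what it’s worth, the permissions problem is a known tar footgun; here is a rough Python sketch (file names are made up) of a restore that re-applies the modes recorded in the archive:

```python
import os
import tarfile

ARCHIVE = "meshpage-backup.tar.gz"  # hypothetical backup file
DEST = "restored-site"

with tarfile.open(ARCHIVE, "r:gz") as tar:
    tar.extractall(DEST)
    # Explicitly re-apply the permission bits recorded in the archive,
    # guarding against extraction paths (like plain `tar xzf` without -p
    # as non-root) where the umask wins over the stored modes.
    for member in tar.getmembers():
        if member.isfile() or member.isdir():
            os.chmod(os.path.join(DEST, member.name), member.mode)
```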

terop (profile) says:

Re: Re:

Thanks for confirming that Meshpage has violated the copyright of Scott Cawthon

Given the above information, my defense would use the wookie defense. I.e., if the wookie lives on Endor, you need to acquit. My system doesn’t even have a prompt, so your assumptions about the structure of meshpage.org’s system are invalid, and thus you need to acquit.

Scott obviously never demanded that any of his copyrights be respected, even after laughing like hell at our trolling of the assets.
