AI Art Is Eating The World, And We Need To Discuss Its Wonders And Dangers

from the an-ai-did-not-write-this dept

After posting the following AI-generated images, I got private replies asking the same question: “Can you tell me how you made these?” So here I will provide the background and the “how to” of creating such AI portraits, and also describe the ethical considerations and dangers we should address right now.

Astria AI images of Nirit Weiss-Blatt

Background

Generative AI – as opposed to analytical artificial intelligence – can create novel content. It doesn’t just analyze existing datasets; it generates whole new images, text, audio, video, and code.

Sequoia’s Generative-AI Market Map/Application Landscape, from Sonya Huang’s tweet

As the ability to generate original images based on written text emerged, it became the hottest hype in tech. It all began with the release of DALL-E 2, an improved AI art program from OpenAI. It allowed users to input text descriptions and get images that looked amazing, adorable, or weird as hell.

DALL-E 2 image results

Then, people started hearing about Midjourney (and its vibrant Discord) and Stable Diffusion, an open-source project. (Google’s Imagen and Meta’s image generator have not been released to the public.) Stable Diffusion allowed engineers to train the model on any image dataset and churn out any style of art.
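
One reason Stable Diffusion spread so quickly is that anyone with a GPU can run it in a few lines of code. Here is a minimal sketch using Hugging Face’s diffusers library; the model ID and the prompt are illustrative assumptions, not the specific setup any of the apps below use.

```python
# Minimal sketch: generate an image from a text prompt with the open-source
# Stable Diffusion weights via the diffusers library. Assumes a CUDA GPU and
# that the model weights have been downloaded from the Hugging Face Hub.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # illustrative model ID
    torch_dtype=torch.float16,
).to("cuda")

prompt = "highly detailed realistic portrait, soft studio lighting"
image = pipe(prompt).images[0]          # pipeline returns PIL images
image.save("portrait.png")
```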

Thanks to rapid development by the coding community, more specialized generators were introduced, including new killer apps that create AI-generated art from YOUR pictures: Avatar AI, ProfilePicture.AI, and Astria AI. With them, you can create your own AI-generated avatars or profile pictures. You can even change some of your features, as demonstrated by Andrew “Boz” Bosworth, Meta CTO, who used AvatarAI to see himself with hair:

Screenshot from Andrew “Boz” Bosworth’s Twitter account

Startups like the ones listed above are booming:

The founders of AvatarAI and ProfilePicture.AI tweet about their sales and growth

In order to use their tools, you need to follow these steps:

1. How to prepare your photos for the AI training

As of now, training Astria AI with your photos costs $10. Each app charges differently for fine-tuning credits (e.g., ProfilePicture AI costs $24, and Avatar AI costs $40). Note that those prices change quickly as the apps experiment with their business models.

Here are a few ways to improve the training process (a short image-prep sketch follows the list):

  • At least 20 pictures, preferably shot or cropped to a 1:1 (square) aspect ratio.
  • At least 10 close-ups of your face, 5 medium shots from the chest up, and 3 full-body shots.
  • Variation in background, lighting, expressions, and eyes looking in different directions.
  • No glasses/sunglasses. No other people in the pictures.
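
If you’d rather batch-prepare the pictures than crop each one by hand, a short script can handle the square cropping and resizing. This is a minimal sketch using the Pillow library; the folder names and the 512×512 output size are my own assumptions for illustration, not requirements of any particular app.

```python
# Minimal sketch: center-crop photos to a 1:1 aspect ratio and resize them
# before uploading for fine-tuning. Folder names and the 512x512 output size
# are assumptions for illustration, not requirements of any specific service.
from pathlib import Path
from PIL import Image

SRC = Path("raw_photos")      # your original pictures
DST = Path("training_set")    # square crops ready for upload
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    img = Image.open(path)
    side = min(img.size)                      # largest centered square
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    square = img.crop((left, top, left + side, top + side))
    square.resize((512, 512)).save(DST / path.name, quality=95)
```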

Examples from my set of pictures

Approximately 60 minutes after uploading your pictures, a trained AI model will be ready. Where will you probably need the most guidance? Prompting.

2. How to survive the prompting mess

After the training is complete, a few images will be waiting for you on your page. Those are “default prompts” that showcase the app’s capabilities. To create your own prompts, set the className to “person” (as recommended by Astria AI).

Formulating the right prompts for your purpose can take a lot of time. You’ll need patience (and motivation) to keep refining the prompts. But when a text prompt comes to life as you envisioned (or better than you envisioned), it feels a bit like magic. To get creative inspiration, I used two search engines, Lexica and Krea. You can search for keywords, scroll until you find an image style you like, and copy the prompt (then change the text to “sks person” to make it your self-portrait).
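
If you do borrow prompts this way, a tiny helper makes the subject swap less error-prone. This is just a sketch: the sample prompt is invented for illustration, and “sks person” is simply the subject token mentioned above.

```python
# Minimal sketch: turn a prompt found on Lexica/Krea into a self-portrait
# prompt by swapping its subject for your fine-tuned token ("sks person").
# The sample prompt below is invented for illustration only.
def personalize(prompt: str, original_subject: str, token: str = "sks person") -> str:
    """Replace the prompt's subject with the trained model's token."""
    return prompt.replace(original_subject, token)

found_on_lexica = ("highly detailed realistic portrait of a woman, "
                   "soft studio lighting, art by a popular digital artist")

print(personalize(found_on_lexica, "a woman"))
# -> highly detailed realistic portrait of sks person, soft studio lighting, ...
```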

Screenshot from Lexica

Some prompts are so long that reading them is painful. They usually include the image’s setting (e.g., “highly detailed realistic portrait”) and style (“art by” one of the popular artists). Because regular people need help crafting those words, we already have an entirely new role: prompt engineering. It’s going to be a desirable skill. Just bear in mind that no matter how professional your prompts are, some results will look WILD. In one image, I had three arms (don’t ask me why).

If you wish to avoid the whole prompt chaos, you can stick with the defaults: a friend of mine used only the default prompts, was delighted with the results, and shared them everywhere. To make these apps more popular, I recommend including more “default prompts.”

Potentials and Advantages

1. It’s NOT the END of human creativity

The electronic synthesizer did not kill music, and photography did not kill painting. Instead, they catalyzed new forms of art. AI art is here to stay and can make creators more productive. Creators are going to include such models as part of their creative process. It’s a partnership: AI can serve as a starting point, a sketch tool that provides suggestions, and the creator will improve it further.

2. The path to the masses

Thus far, crypto boosters haven’t answered the simple question of “what is it good for?” and have failed to articulate concrete, compelling use cases for Web3. All we got was needless complexity, vague future-casting, and “cryptocountries.” By contrast, AI-generated art has clear utility for creative industries. It’s already used in advertising, marketing, gaming, architecture, fashion, graphic design, and product design. This Twitter thread provides a variety of use cases, from commerce to medical imaging.

When it comes to AI portraits, I’m thinking of another target audience: teenagers. Why? Because they already spend hours perfecting their pictures with various filters. Make image-generating tools inexpensive and easy to use, and they’ll be your heaviest users. Hopefully, they won’t use it in their dating profiles.

Downsides and Disadvantages

1. Artists did not consent to being copied by AI

Despite the booming industry, there’s a lack of compensation for artists. Read about their frustration, for example, in how one unwilling illustrator found herself turned into an AI model. Spoiler alert: She didn’t like being turned into a popular prompt for people to mimic, and now thousands of people (soon to be millions) can copy her style of work almost exactly.

Copying artists is a copyright nightmare. The input question is: can you use copyright-protected data to train AI models? The output question is: can you copyright what an AI model creates? Nobody knows the answers, and it’s only the beginning of this debate.

2. This technology can be easily weaponized

A year ago on Techdirt, I summed up the narratives around Facebook: (1) Amplifying the good/bad or a mirror for the ugly, (2) The algorithms’ fault vs. the people who build them or use them, (3) Fixing the machine vs. the underlying societal problems. I believe this discussion also applies to AI-generated art. It should be viewed through the same lens: good, bad, and ugly. Though this technology is delightful and beneficial, there are also negative ramifications of releasing image-manipulation tools and letting humanity play with them.

While DALL-E had a few restrictions, the new competitors took a “hands-off” approach, with no safeguards to prevent people from creating sexual or potentially violent and abusive content. Soon after, a subset of users generated deepfake-style images of nude celebrities. (Look surprised.) Google’s Dreambooth (which AI-generated avatar tools use) made creating deepfakes even easier.

As part of my exploration of the new tools, I also tried DeviantArt’s DreamUp. Its “most recent creations” page displayed various images depicting naked teenage girls. It was disturbing and sickening. In one digital artwork of a teen girl in the snow, the artist commented: “This one is closer to what I was envisioning, apart from being naked. Why DreamUp? Clearly, I need to state ‘clothes’ in my prompt.” That says it all.

According to the new book Data Science in Context: Foundations, Challenges, Opportunities, machine learning advances have made deepfakes more realistic, but they have also enhanced our ability to detect them, leading to a “cat-and-mouse game.”

In almost every form of technology, there are bad actors playing this cat-and-mouse game. Managing user-generated content online is a headache that social media companies know all too well. Elon Musk’s first two weeks at Twitter magnified that experience — “he courted chaos and found it.” Stability AI released an open-source tool with a belief in radical freedom, courted chaos, and found it in AI-generated porn and CSAM.

Text-to-video isn’t very realistic now, but with the pace at which AI models are developing, it will be in a few months. In a world of synthetic media, seeing will no longer be believing, and the basic unit of visual truth will no longer be credible. The authenticity of every video will be in question. Overall, it will become increasingly difficult to determine whether a piece of text, audio, or video is human-generated or not. It could have a profound impact on trust in online media. The danger is that with the new persuasive visuals, propaganda could be taken to a whole new level. Meanwhile, deepfake detectors are making progress. The arms race is on.

AI-generated art inspires creativity and, as a result, enthusiasm. But as it approaches mass consumption, we can also see its dark side. A revolution of this magnitude can have many consequences, some of which can be downright terrifying. Guardrails are needed now.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication


Comments on “AI Art Is Eating The World, And We Need To Discuss Its Wonders And Dangers”

Anonymous Coward says:

The input question is: can you use copyright-protected data to train AI models? The output question is: can you copyright what an AI model creates?

A better question is whether copyright, with its presumption of rarity, should continue to exist. It made sense when publication capacities were low and the time to set and print a book was measured in months. Note well, the rarity, or limited resource, was the printing press rather than the actual creativity. Also, copyright has mainly benefited the middleman, and the “think of the starving artists” meme is older than copyright as a creator’s right.

The Internet has made publishing normal rather than a winning ticket in a lottery, but it should also be noted that throughout history, most creativity has not earned the creator any money. Also, creative people can make money if they gain a big enough fan base to pay them to create more works, which makes protection of attribution desirable, along with anti-plagiarism laws.

Anonymous Coward says:

Re:

This is exactly what copywrite was created for. This is its purpose, to protect people who produce works from being taken advantage of and having some protection and control over their original work. For all the problems with copywrite, this is what it’s for: to somehow provide protection from the wholesale theft that is occurring with the training data used by these AI image generators.

cpt kangarooski says:

Re: Re:

No, copyright was created to serve the public.

The public has an interest in having more works created and published, and in having those works be in the public domain.

In the absence of copyright, the first interest is served slightly, and the second is completely satisfied. Copyright trades off some of the satisfaction of the second interest — temporarily, and to as little an extent as possible — in order to get much more satisfaction of the first interest.

Generating art with software is a great idea, as it helps the first interest and may be cheap enough as the technology improves that less protection is needed to incentivize it.

The use of existing works for training data is not a big deal. There’s no market for works as training data, so it doesn’t harm the market for the works. And the training data cannot be extracted and turned back into the original works. There are issues about the output being derivative works though, and that could be an infringement, so the owners of the software are going to want to be careful about that. But the gist is not remarkably different than having a human being with a photographic memory look at thousands of paintings, using that knowledge to learn how to paint in another’s style (which is not a copyright infringement).

I think the overall idea has great potential, and it reminds me of the holodeck from Star Trek, where you could tell the computer to create a scenario with a few parameters and it would fill in the details. (See, e.g., this, that, and the other)

Anonymous Coward says:

Re: Re:

This is its purpose, to protect people who produce works from being taken advantage of and having some protection and control over their original work.

Considering that it was invented by the Stationers’ Company, and written so that publishers are granted complete control over works that they accept for publication, its purpose was and is to protect the publishers’ interests. All that matters in that case is that the author attribution remains with copies of the work.

Also consider that people like Chris Lester publish their works in podcast form under a Creative Commons license, and sell the same work as an Audible book. Allowing people to freely share a work does not harm sales of the same work in a slightly different format.

Anonymous Coward says:

Re: Re: Re:

Considering that it was invented by the Stationers’ Company, and written so that publishers are granted complete control over works that they accept for publication, its purpose was and is to protect the publishers’ interests. All that matters in that case is that the author attribution remains with copies of the work.

That was part of the history of copyright; I doubt anyone familiar with the topic will deny it.

However, the Statute of Anne, as it stood, had clauses for both publishers and culture, and that can’t be handwaved away.

Anonymous Coward says:

Re: Re: Re:2

However, the Statute of Anne, as it stood, had clauses for both publishers and culture, and that can’t be handwaved away.

And a bit of performative spin has always been part of politics. At the time of the Statute of Anne, an author could do one thing with his copyright, and that was transfer it to the publisher and relinquish all control over the work and its publication.

The US version of copyright has the bit about “promote the progress…”, yet huge amounts of research are locked up behind the paywalls of academic publishers, where legal access is expensive; indeed, it has become so expensive that academic libraries are limiting the journals to which they subscribe, limiting students’ ability to expand their interests into topics outside of core courses.

PaulT (profile) says:

Re:

That’s a really dumb take. All artists train based on imitating and being influenced by art that came before them. I’ve been to museums displaying their early work where it’s clear that Dali and Picasso were very much copying existing styles before they and their contemporaries created their own. Do their estates owe money for the art they built upon while they were still developing?

Glendon Mellow (user link) says:

Re: Re:

Respectfully Paul, that oft-repeated talking point doesn’t hold up. “Artists learn from other artists, what’s the difference?”

The difference is, human beings learn from other artists, yes; but we also learn from our observations of the world, our dreams and imagination, happy accidents with materials (whether digital or traditional), pareidolia, experimentation and exploration.

If you take the LAION-5B training dataset away from an AI, it can’t do anything. It doesn’t have the other “inputs” that a human being does.

But if you take the influence of other artists away from a human being, we can still create.

Rocky says:

Re: Re: Re:

But if you take the influence of other artists away from a human being, we can still create.

Your argument is based on using different criteria for humans and AI to make a point that doesn’t hold up. Using your criteria for AIs and applying them to artists, we would have to take all experiences and memories away from them, after which I guarantee they won’t be able to create anything.

PaulT (profile) says:

Re: Re: Re:

“But if you take the influence of other artists away from a human being, we can still create.”

I’d love to know how that’s possible, but sure.

In the real world, it’s not possible to remove influence from other artists. By the time a person attempts to create their own work, they’ve already experienced and internalised huge amounts of other artists’ work.

I’m put in mind of something Stephen King once said. If you put him in front of a landscape and asked him to write about it, and asked the Western writer Louis L’Amour the same, they would both come up with something different. Some of that is personal ideas and taste, but you can’t tell me that King’s predilection to create a story of monsters is not because King grew up reading EC comics and L’Amour did not.

Under normal circumstances, this doesn’t mean a lot. King is quite capable of creating an EC Comics-inspired story without people telling him he has to separate his own ideas from what he read as a kid, if that’s even possible. But if an AI is remembering the stories and King is remembering the stories, it doesn’t mean that the AI has an unfair leg up.

I understand apprehension and concern on this subject, but if your reaction is to pretend that human artists aren’t influenced by huge amounts of existing material, you’re creating a larger fantasy than King ever wrote.

Christenson says:

Re: DeviantArt

The difficulty with CSAM is a combination of very unsympathetic clients (eww, who is turned on by THAT??? you sick fucks!) and the assumption that children are brought to harm by the very creation of such material. (We must save the children!).

The assumption of harming children gets attenuated if you have what amounts to a computer-powered imagination (the AI) generating the pictures. This is especially true if you recognize that much of what makes something pornographic is what goes on in the mind of the viewer and not necessarily what’s actually in front of the viewer. See, for example, the famous picture of the napalm girl taken during the Vietnam War.

Finally, I’d be highly curious to see some science on whether and when pornography tends to increase or decrease acting out on what is depicted or described, especially where violence is involved. I suspect it’s mostly an extended moral panic, like the one over violent video games, but I have no data.

Rich (profile) says:

Take away the AI for a moment..

What would be the legal ramifications if I had a friend who was a particularly skilled artist, who then sold paintings based on similar requests? “A portrait of my wife as painted by ”, or something along those lines?

While I do find something objectionable about an artist’s work being used to train an AI to sell other art in that artist’s style without permission, compensation, or even knowledge, I can’t completely see the difference between that and a person or group studying that same artist’s work and then selling custom work in that same imitated style.

If people copying an artist’s unique and identifying style to create new works for profit is illegal, or at least legally actionable, then it seems the right or wrong of it is settled; it’s just a matter of setting the table for establishing that the AI is just a proxy, and that the creators or wielders of such AI are as much at fault as any real person would be.

If a person, group, or company can indeed use a stable of in-house artists to create and sell pieces that are “in the style of…”, then there’s your starting point. As is usually the case, however, it will probably be settled in a courtroom somewhere, and the decision will go to whichever side shows up with the lawyer that has the most charisma in the eyes of a jury.

All of that aside, this is all vaguely terrifying, for a variety of reasons. For one thing, go hit up DALL-E and ask for “mechanical overlords by h.r. giger.”

How much damage has the legal system suffered with the proliferation of digital image manipulation over the past 20 or 30 years? I remember when Photoshop, and the like, (nods to Corel PhotoPaint) were seen as the beginning of the end of photographic evidence being useful in court. We seem ok there, is this that much different?

Anonymous Coward says:

Re:

Here is the very clear difference.

The person is interpreting the work and making their own version of it. What they are doing is an interpretation of the work and an emulation of a style, but it will always be influenced by their own take on it, and if they have the mechanical skill to emulate it perfectly, they have EARNED it and you are paying for that mechanical skill and precision. The artist has improved and learned based not just on looking at H.R. Giger but on studying what the artistic composition is and how to use it effectively, and they translate that into the work they are producing.

The AI does not do that. It takes the previous person’s work, derives a mathematical formula (the lines should be spaced this far apart, blah blah blah), takes that person’s other H.R. Giger work, then other people’s Giger-like works, then Giger’s own works, and spits out a composite of those things. The more real people’s art it can copy and mathematically break down, the better the results. There is no skill involved, there is no interpretation; it is a mathematical output based on the prompts. There is no learning involved, and the output’s quality is directly dependent on the input’s quality; it will never improve beyond that without better-quality input. This is an inherent limitation in how AI development functions, and as a result it will spit out “images” based on the input it has. It will make the same errors, it will lack the same cohesion, but eventually, through spamming the generate button, it may add on something of sufficient quality or accidentally get something right. You will then pick out the one that was “right,” which tells the AI “this is preferred.” It operates entirely based on input, without any interpretation or skill involved.

This would be more equivalent to tracing, something artists look down on. To copy someone else’s work or style without your own interpretation is not something that has ever been lauded unless it is a display of extreme skill.

People interpret and do not require things exactly; AI requires input to create something of quality. AI is useless without the artist, despite how many people say it is a replacement for the artist.

Rich (profile) says:

Re: Re: Can you clarify?

First, I think peddling pictures “in the style of” an artist, using that artist’s name, reputation, or creativity without any sort of authorization, licensing, money, payment, or even notification, is shitty, and I believe it’s fairly simple to derive said shittiness without putting too much strain on yourself. But I can’t quite nail down the point of demarcation.

Your comment seems to apply value to the produced artwork based on how much you seem to think a mimicking artist might appreciate and interpret the work that they are learning from. And, if they have the “mechanical skill to emulate it perfectly” then they have “earned it”…”It” being the money made by selling art while employing style and technique taken and copied with no knowledge or authorization of the original artist. If it is the amount of effort involved that determines the right to profit, then I would submit to you that the computing industry has put more time and effort into the development of an AI capable of this than any of us could ever dedicate to anything.

Anyway…I seek clarity:

Scenario A: AI looks at various samples of a particular artist’s work, uses some algorithms to analyze some stylistic properties, and then uses that information to mimic the style when producing an image of something rendered in the style of said artist. Customer pays person/group/company who runs the AI, the original artist receives nothing.

Scenario B: Person looks at various samples of a particular artist’s work, sees how the brush strokes work, understands what goes into the style. Person then paints custom pictures upon request, in the style of said artist. Customer pays person, (group, or company that employs person), the original artist receives nothing.

From the original artist’s perspective, what is the difference?

cpt kangarooski says:

Re: Re:

Uh huh. I’ll take a Rothko, a 1920’s Mondrian, and a Pollock, please. And some Richard Prince.

I joke, but not too much. Your criticisms are basically exactly the sort of thing people used to say about photography; that it was cheating, and not really art. Turns out no one really cares.

It’ll be a long time, if ever, before a computer can recognize a work of art and make a meaningful decision about it, but so long as a human operator is asking for, say, a diptych of a velvet Elvis and a sad clown, and can pick out the ones they like from a menu of just-now generated examples, the problem is not a real hurdle, and the human is happy, so why should you be so upset?

cpt kangarooski says:

Re: Re: Re:2

This is not a person getting inspiration or learning, this is a machine being fed data.

AI in other fields has to obtain permissions and copywrites for many things it does, why not art?

Well, let’s look at another field.

A human who is well read and who has learned to use various information archives can use their knowledge to help guide people to works that they’re looking for. For example, if I were working in a library and someone asked me to guide them to a play which they can’t remember the title of, but one of the main characters was named Romeo, and another was named Juliet, I would probably suggest that they look at Shakespeare’s play Romeo and Juliet. I would rely on my knowledge of the play to give them that advice.

A computer program into which we have input copies of lots of works, and which uses AI to analyze them, could probably offer similar advice.

There’s no good reason to allow the former to be legal but to ban the latter, especially as the latter might be more effective, easier to use, etc.

And that’s what the courts decided in ruling that Google Book Search, which uses a database of lots and lots of scanned books, is legal. Other big parts of the ruling were that you could not effectively pull the books back out again, and that it was no substitute for the books and didn’t harm the market for them.

What’s the difference here? So long as you can’t tell the software to make you an exact copy of a preexisting copyrighted work, AI that generates art seems to be on solid legal ground.

nasch (profile) says:

Re: Re: Re:2

AI in other fields has to obtain permissions and copywrites for many things it does, why not art?

Does it need permission to get input data? Talking about “many things” isn’t really useful. Is permission required for this specific thing? I have not heard of any copyright (which is how it’s spelled, not “copyrwrite”) case based on AI input data, so to the best of my knowledge, the situation is unclear.

ValhallanGuardsman says:

Re: Re: Re:2

Quote: “Because its trained on peoples work who did not consent. This is not a person getting inspiration or learning, this is a machine being fed data.”
A living human being also learns by… “feeding” oneself data.

Quote: “I have not heard of any copyright (which is how it’s spelled, not “copyrwrite”) case based on AI input data”
Well, for example, we owe the very existence of much of the free and open-source (FOSS) software to its authors having had the ability to analyze the code of copyrighted computer programs. Without such ability, it would not have been possible, I’m afraid…

Rich (profile) says:

Oof...legal stuff

So, to partially poke at my last comment, I just found myself wondering how long it will be until (if not already) you can put the spouse-following private detective out of business by taking a few pictures of your bedroom, your spouse, and your neighbor, and commissioning the creation of a couple of video clips documenting the “consummation” of a suspected affair.

NN says:

1. It’s NOT the END of human creativity

Of course it’s not.
That’s a human need.

But that’s so not why artists are upset.

It’s a threat to many people’s LIVELIHOOD, through people who stole their work, put it into a blender, and now openly talk about replacing said artists.

And if you know that your work, your soul, your style can now be put into these AIs just to generate endless copies of your style, many people will simply stop working on their art, since there is no longer a way to protect it or a chance of making an income from it.

And all of this just so some greedy tech-bros can make a profit.

ValhallanGuardsman says:

Re:

Quote: “It’s a threat to many people’s LIVELIHOOD”
So the question is not about creativity or hobby. It is about monetary concerns.
However, in a free market economy nobody is guaranteed that their business model will be sustainable in perpetuity. Quite the contrary. Technological innovations can render, and throughout known history have rendered, some business models obsolete.
For example, the late 18th-19th century industrial revolution meant the “loss of an established livelihood” for many individual craftsmen, since machine-produced goods drove their handcrafted goods off the market.
PS: It is also worth noting that even in the 21st century, craftsmanship and hand-crafted goods have their own market niche; however, the mass-produced, economy-price goods sector is firmly dominated by Chinese industrial conglomerates using increasingly robotized production methods and techniques.
PPS: In a socialist economy (which I personally consider morally superior to a capitalist one) it would also not be the case that one would be entitled to unconditionally make a living in the desired field of occupation. Instead, jobs would be planned according to society’s needs for goods, R&D, and services, and a person would choose one of those available jobs based on one’s skills and abilities (with the option to learn the necessary skills if one has the talents and abilities for the desired job but not the education or training needed).

Anonymous Coward says:

No Guardrails Needed

The only guardrails needed are the ones that stop people from stopping other people from doing what they want. The premise behind banning child pornography was that it involved photographing real children in real illegal activity, and permitting it would encourage criminals to produce more and thereby harm more actual children. It was not supposed to be a ban on wrongthink, and so, for example, written stories of child pornography are still not illegal. Having computers generate images of child pornography involving no actual children should not be illegal either.

Anonymous Coward says:

The applications

The author here was beautiful before AI started generating these images for her. It’s not the case for most people. So I agree that perfecting social media profile pictures will help this technology become more popular.
But it would be more useful and engaging if you could use it in gaming to represent yourself as a player. That’s where the money is.

Anonymous Coward says:

Re:

No.

Most people are not even that creative to begin with, or don’t want to be creative at all.

The cryptoscammers pushing hard for this want to replace artists and photographers. And it’s a lot more than just economics could predict.

Fortunately, the cryptoscammers also don’t seem to want to learn about ethics and, like copyright maximalists, appear to want to enslave the creatives as well.

N0083rp00f says:

I look at this technology and what comes to mind are toys like spirograph and pendulart.
In both cases, a mechanical system with limited input from the “artist” produces a work that may or may not be pleasing to look at.
The former is a kids’ toy, yet the latter, scaled up, is used to produce commercially viable works.

My observation is that this is not much different from loading a bunch of paints onto a pendulum, sending it swinging, and having it drip the paint onto a canvas.

Anonymous Coward says:

The only thing that really terrifies me about AI art is knowing that sometime around late 2023 or early 2024, The House Of Mouse will announce that their in-house AI has written all the stories that can ever be imagined (or at least so close to every conceivable character and plot structure that they can credibly sue for infringement) and it becomes impossible to create fiction without violating their copyright.

n00bdragon (profile) says:

Re: Re:

The key difference between that and current reality is that the theoretical nature of such a thing is no longer “no one knows how to do such a thing, much less possesses the power to make it so” and has affirmatively become “no one has done it yet… that we know of.” For all we know, Disney could at this very moment be tossing every single combination of keywords into an AI to produce hundreds of feature-length movie scripts an hour and superhero comic books by the second, and then attributing all of it to Bob Smith, attorney at law, as a work for hire. They don’t need to actively do anything with them. All they need to do is be able to find the ones you’ve unwittingly infringed upon.

cpt kangarooski says:

Re:

Well, copyright, unlike patents, does not require novelty (i.e., that something has never been done before). Instead, it requires originality (that your work was not copied from a prior work).

So creating every permutation of a work is pointless unless you can show that a later author copied from your collection. If it’s a popular song on the radio, maybe you have a shot at saying that the second author must have heard it, but an online collection of, say, every possible haiku, will do you no good unless the site logs show that they actually read the one you allege was copied.

Dr. Nirit Weiss-Blatt, PhD (profile) says:

An Update from Stable Diffusion

Stable Diffusion made copying artists and generating porn harder

The Verge, by James Vincent

https://www.theverge.com/2022/11/24/23476622/ai-image-generator-stable-diffusion-version-2-nsfw-artists-data-changes

What has been removed from Stable Diffusion’s training data, though, is nude and pornographic images. AI image generators are already being used to generate NSFW output, including both photorealistic and anime-style pictures. However, these models can also be used to generate NSFW imagery resembling specific individuals (known as non-consensual pornography) and images of child abuse.

Discussing the changes to Stable Diffusion Version 2 in the software’s official Discord, Mostaque notes this latter use case is the reason for filtering out NSFW content. “can’t have kids & nsfw in an open model,” says Mostaque (as the two sorts of images can be combined to create child sexual abuse material), “so get rid of the kids or get rid of the nsfw.”

Nawshzen (profile) says:

Digital vs "Art"

I’ve read most of the comments and have a question. I get the inherent conflict of using prior art to train the AI. But at the moment, the AI can’t produce an actual oil painting, just digital works, is that right?

If true, other than being able to create (and, I assume, print?) a digital piece of art, I can’t buy an original painting, right? If there is no original work, do these AI-similar styles have any effect on the value of an original artist’s work?

Christenson (user link) says:

Re: Digital art vs paintings...

I don’t think AI-created digital art will have much effect on actual paintings…I’ve seen a few actual famous paintings in museums, but when you get to my house, my living room has a bas-relief, but everything else is color-printed — typically straight photographs, but also some favorite greeting cards and a colorful box with a hurricane lamp showing abstract, discrete candle flames in the background.

Not that AI generating pictures won’t have a huge effect on digital artists — if interesting images can be created in 20 minutes or an hour, it may be much quicker than the “usual” set of digital image manipulation and creation tools. This will, of course, increase the flood of content.

Anonymous Cowboy says:

Laws need to be tightened regarding AI art

Despite barring obvious terms that allow for exploitation, AI art generators like Google’s Dreambooth allow a LOT to slip through the cracks, which frankly are more like chasms. I noticed, when playing around with it, that when I typed in something as a joke that I thought for sure it wouldn’t allow, it generated art of it. I did some more testing, and by using more “innocent” terms arranged in a way to create something not-innocent, some really disturbing images were generated, and I think the fact that Google isn’t doing more diligent checks of their software is frankly infuriating. I’m still trying to get some of the images I saw out of my head, but instead I may compile some and send them to Google, letting them know that their software allows such awful things to be created. If they do nothing, I may do the one thing that will work: public shaming on social media. If Google does nothing, hopefully legislators will; otherwise it’s going to be the new “dark web” of awful content.

Georgia Grey (profile) says:

It’s a pretty interesting topic. Some people think it’s cool because it’s more affordable and accessible than traditional artwork. But others worry that it might decrease the value of original pieces since it’s not as unique. However, there are some advantages to using custom AI generated art for businesses. With the help of algorithms, companies can create art that perfectly matches their brand and target audience. This can help them stand out from competitors and establish a strong visual identity. What do you think about all of this?
