AI Art Is Eating The World, And We Need To Discuss Its Wonders And Dangers
from the an-ai-did-not-write-this dept
After posting the following AI-generated images, I got private replies asking the same question: “Can you tell me how you made these?” So, here I will provide the background and “how to” of creating such AI portraits, but also describe the ethical considerations and the dangers we should address right now.
Background
Generative AI – as opposed to analytical artificial intelligence – can create novel content. It doesn't just analyze existing datasets; it generates entirely new images, text, audio, video, and code.
As the ability to generate original images based on written text emerged, it became the hottest hype in tech. It all began with the release of DALL-E 2, an improved AI art program from OpenAI. It allowed users to input text descriptions and get images that looked amazing, adorable, or weird as hell.
Then, people started hearing about Midjourney (and its vibrant Discord) and Stable Diffusion, an open-source project. (Google's Imagen and Meta's image generator were not released to the public.) Stable Diffusion allowed engineers to train the model on any image dataset to churn out any style of art.
Thanks to rapid development in the coding community, more specialized generators were introduced, including new killer apps that create AI-generated art from YOUR pictures: Avatar AI, ProfilePicture.AI, and Astria AI. With them, you can create your own AI-generated avatars or profile pictures. You can even change some of your features, as demonstrated by Andrew "Boz" Bosworth, Meta's CTO, who used AvatarAI to see himself with hair:
Startups like the ones listed above are booming:
In order to use their tools, you need to follow these steps:
1. How to prepare your photos for the AI training
As of now, training Astria AI with your photos costs $10. Every app charges differently for fine-tuning credits (e.g., ProfilePicture AI costs $24, and Avatar AI costs $40). Please note that those charges change quickly as they experiment with their business model.
Here are a few ways to improve the training process:
- At least 20 pictures, preferably shot or cropped to a 1:1 (square) aspect ratio.
- At least 10 face close-ups, 5 medium from the chest up, 3 full body.
- Variation in background, lighting, expressions, and eyes looking in different directions.
- No glasses/sunglasses. No other people in the pictures.
Approximately 60 minutes after uploading your pictures, a trained AI model will be ready. Where will you probably need the most guidance? Prompting.
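Those aspect-ratio guidelines can be sketched in code. Below is a minimal, hypothetical helper (not part of any of these apps; the function name is my own) that computes the largest centered 1:1 crop box for a photo:

```python
def center_square_box(width, height):
    """Return (left, top, right, bottom) for the largest centered 1:1 crop."""
    side = min(width, height)          # the square's side is the shorter dimension
    left = (width - side) // 2         # center horizontally
    top = (height - side) // 2         # center vertically
    return (left, top, left + side, top + side)

# With Pillow (illustrative usage, assuming the library is installed):
#   from PIL import Image
#   img = Image.open("photo.jpg")
#   img.crop(center_square_box(*img.size)).save("photo_square.jpg")

print(center_square_box(4032, 3024))  # typical phone photo -> (504, 0, 3528, 3024)
```

Looping this over your 20-odd training photos is an easy way to batch-crop them before uploading.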
2. How to survive the prompting mess
After the training is complete, a few images will be waiting for you on your page. Those are “default prompts” as examples of the app’s capabilities. To create your own prompts, set the className as “person” (this was recommended by Astria AI).
Formulating the right prompts for your purpose can take a lot of time. You’ll need patience (and motivation) to keep refining the prompts. But when a text prompt comes to life as you envisioned (or better than you envisioned), it feels a bit like magic. To get creative inspiration, I used two search engines, Lexica and Krea. You can search for keywords, scroll until you find an image style you like, and copy the prompt (then change the text to “sks person” to make it your self-portrait).
Some prompts are so long that reading them is painful. They usually include the image’s setting (e.g., “highly detailed realistic portrait”) and style (“art by” one of the popular artists). As regular people need help crafting those words, we already have an entirely new role for artists under prompt engineering. It’s going to be a desirable skill. Just bear in mind that no matter how professional your prompts are, some results will look WILD. In one image, I had 3 arms (don’t ask me why).
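To make the "copy a prompt, then swap in your token" step concrete, here is a tiny illustrative helper. The "sks person" token follows the Astria convention mentioned above; the function and the example prompt are my own sketch, not any app's API:

```python
def personalize_prompt(found_prompt, original_subject, token="sks person"):
    """Swap the subject of a prompt copied from Lexica or Krea for your fine-tuned token."""
    return found_prompt.replace(original_subject, token)

# A made-up example in the typical setting-plus-style shape described above:
lexica_prompt = ("highly detailed realistic portrait of a woman, "
                 "dramatic lighting, art by a popular artist")
print(personalize_prompt(lexica_prompt, "a woman"))
# -> highly detailed realistic portrait of sks person, dramatic lighting, art by a popular artist
```

The point is only that the subject is the one part you change; the setting and style phrases are what you are really borrowing from the prompt you found.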
If you wish to avoid the whole prompt chaos: I have a friend who just used the default ones, was delighted with the results, and shared them everywhere. For these apps to become more popular, I recommend including more "default prompts."
Potentials and Advantages
1. It’s NOT the END of human creativity
The electronic synthesizer did not kill music, and photography did not kill painting. Instead, they catalyzed new forms of art. AI art is here to stay and can make creators more productive. Creators are going to include such models as part of their creative process. It’s a partnership: AI can serve as a starting point, a sketch tool that provides suggestions, and the creator will improve it further.
2. The path to the masses
Thus far, crypto boosters haven't answered the simple question of "what is it good for?" and have failed to articulate concrete, compelling use cases for Web3. All we got was needless complexity, vague future-casting, and "cryptocountries." By contrast, AI-generated art has clear utility for creative industries. It's already used in advertising, marketing, gaming, architecture, fashion, graphic design, and product design. This Twitter thread provides a variety of use cases, from commerce to the medical imaging domain.
When it comes to AI portraits, I’m thinking of another target audience: teenagers. Why? Because they already spend hours perfecting their pictures with various filters. Make image-generating tools inexpensive and easy to use, and they’ll be your heaviest users. Hopefully, they won’t use it in their dating profiles.
Downsides and Disadvantages
1. Copying by AI was not consented to by the artists
Despite the booming industry, there’s a lack of compensation for artists. Read about their frustration, for example, in how one unwilling illustrator found herself turned into an AI model. Spoiler alert: She didn’t like being turned into a popular prompt for people to mimic, and now thousands of people (soon to be millions) can copy her style of work almost exactly.
Copying artists is a copyright nightmare. The input question is: can you use copyright-protected data to train AI models? The output question is: can you copyright what an AI model creates? Nobody knows the answers, and it’s only the beginning of this debate.
2. This technology can be easily weaponized
A year ago on Techdirt, I summed up the narratives around Facebook: (1) Amplifying the good/bad or a mirror for the ugly, (2) The algorithms’ fault vs. the people who build them or use them, (3) Fixing the machine vs. the underlying societal problems. I believe this discussion also applies to AI-generated art. It should be viewed through the same lens: good, bad, and ugly. Though this technology is delightful and beneficial, there are also negative ramifications of releasing image-manipulation tools and letting humanity play with them.
While DALL-E had a few restrictions, the new competitors took a "hands-off" approach, with no safeguards to prevent people from creating sexual or potentially violent and abusive content. Soon after, a subset of users generated deepfake-style images of nude celebrities. (Look surprised.) Google's Dreambooth (which AI-generated avatar tools use) made making deepfakes even easier.
As part of my exploration of the new tools, I also tried Deviant Art’s DreamUp. Its “most recent creations” page displayed various images depicting naked teenage girls. It was disturbing and sickening. In one digital artwork of a teen girl in the snow, the artist commented: “This one is closer to what I was envisioning, apart from being naked. Why DreamUp? Clearly, I need to state ‘clothes’ in my prompt.” That says it all.
According to the new book Data Science in Context: Foundations, Challenges, Opportunities, machine learning advances have made deepfakes more realistic but also enhanced our ability to detect deepfakes, leading to a “cat-and-mouse game.”
In almost every form of technology, there are bad actors playing this cat-and-mouse game. Managing user-generated content online is a headache that social media companies know all too well. Elon Musk’s first two weeks at Twitter magnified that experience — “he courted chaos and found it.” Stability AI released an open-source tool with a belief in radical freedom, courted chaos, and found it in AI-generated porn and CSAM.
Text-to-video isn’t very realistic now, but with the pace at which AI models are developing, it will be in a few months. In a world of synthetic media, seeing will no longer be believing, and the basic unit of visual truth will no longer be credible. The authenticity of every video will be in question. Overall, it will become increasingly difficult to determine whether a piece of text, audio, or video is human-generated or not. It could have a profound impact on trust in online media. The danger is that with the new persuasive visuals, propaganda could be taken to a whole new level. Meanwhile, deepfake detectors are making progress. The arms race is on.
AI-generated art inspires creativity and enthusiasm. But as it approaches mass consumption, we can also see the dark side. A revolution of this magnitude can have many consequences, some of which can be downright terrifying. Guardrails are needed now.
Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication
Filed Under: ai art, generative ai, portraits
Comments on “AI Art Is Eating The World, And We Need To Discuss Its Wonders And Dangers”
A better question is whether copyright, with its presumption of scarcity, should continue to exist. It made sense when publication capacity was low and the time to typeset and print a book was measured in months. Note well: the scarce resource was the printing press, not the actual creativity. Also, copyright has mainly benefited the middleman, and the "think of the starving artists" meme is older than copyright as a creator's right.
The Internet has made publishing normal rather than a winning lottery ticket, but it should also be noted that, throughout history, most creativity has not earned the creator any money. Still, creative people can make money if they gain a big enough fan base willing to pay them to create more works, which makes protection of attribution desirable, along with anti-plagiarism laws.
Re:
This is exactly what copywrite was created for. Its purpose is to protect people who produce works from being taken advantage of, and to give them some protection and control over their original work. For all the problems with copywrite, this is what it's for: to somehow provide protection from the wholesale theft that is occurring in the training data used by these AI image generators.
Re: Re:
No, copyright was created to serve the public.
The public has an interest in having more works created and published, and in having those works be in the public domain.
In the absence of copyright, the first interest is served slightly, and the second is completely satisfied. Copyright trades off some of the satisfaction of the second interest — temporarily, and to as little an extent as possible — in order to get much more satisfaction of the first interest.
Generating art with software is a great idea, as it helps the first interest and may be cheap enough as the technology improves that less protection is needed to incentivize it.
The use of existing works for training data is not a big deal. There’s no market for works as training data, so it doesn’t harm the market for the works. And the training data cannot be extracted and turned back into the original works. There are issues about the output being derivative works though, and that could be an infringement, so the owners of the software are going to want to be careful about that. But the gist is not remarkably different than having a human being with a photographic memory look at thousands of paintings, using that knowledge to learn how to paint in another’s style (which is not a copyright infringement).
I think the overall idea has great potential, and it reminds me of the holodeck from Star Trek, where you could tell the computer to create a scenario with a few parameters and it would fill in the details. (See, e.g., this, that, and the other)
Re: Re:
This is why “eggcorn” has been added to the Scrabble dictionary.
Re: Re:
Considering that it was invented by the Stationers' Company, and written so that publishers were granted complete control over works they accepted for publication, its purpose was and is to protect publishers' interests. All that matters in that case is that author attribution remains with copies of the work.
Also consider that people like Chris Lester publish their works in podcast form under a creative commons license, and sell the same work as an Audible book. Allowing people to freely share a work does not harm sales of the same work in a slightly different format.
Re: Re: Re:
That was part of the history of copyright; I doubt anyone familiar with the topic will deny it.
However, the Statute of Anne, as it stood, had clauses for both publishers and culture, and that can’t be handwaved away.
Re: Re: Re:2
And a bit of performative spin has always been part of politics. At the time of the Statute of Anne, an author could do one thing with his copyright: transfer it to the publisher and relinquish all control over the work and its publication.
The US version of copyright has the bit about "promote the progress…", yet huge amounts of research are locked up behind the paywalls of academic publishers, where legal access is expensive; indeed, it has become so expensive that academic libraries are limiting the journals to which they subscribe, limiting students' ability to expand their interests into topics outside of core courses.
I hate this: the wholesale theft of people's art to train these programs, which people then profit from with no attribution, credit, or payment to the original artists who created the dataset.
Re:
That’s a really dumb take. All artists train based on imitating and being influenced by art that came before them. I’ve been to museums displaying their early work where it’s clear that Dali and Picasso were very much copying existing styles before they and their contemporaries created their own. Do their estates owe money for the art they built upon while they were still developing?
Re: Re:
Less dumb when you realize that a lot of these companies have done just that.
And flat out lied about it.
Re: Re:
Respectfully Paul, that oft-repeated talking point doesn’t hold up. “Artists learn from other artists, what’s the difference?”
The difference is, human beings learn from other artists, yes; but we also learn from our observations of the world, our dreams and imagination, happy accidents with materials (whether digital or traditional), pareidolia, experimentation and exploration.
If you take the LAION-5B training dataset away from an AI, it can’t do anything. It doesn’t have the other “inputs” that a human being does.
But if you take the influence of other artists away from a human being, we can still create.
Re: Re: Re:
Your argument is based on using different criteria for humans and AI to make a point that doesn't hold up. Taking your criteria for AIs and applying them to artists, we would have to take all experiences and memories away from them, after which I guarantee they won't be able to create anything.
Re: Re: Re:
“But if you take the influence of other artists away from a human being, we can still create.”
I’d love to know how that’s possible, but sure.
In the real world, it’s not possible to remove influence from other artists. By the time a person attempts to create their own work, they’ve already experienced and internalised huge amounts of other artists’ work.
I'm put in mind of something Stephen King once said. If you put him in front of a landscape and asked him to write about it, and asked the Western writer Louis L'Amour the same, they would both come up with something different. Some of that is personal ideas and taste, but you can't tell me that King's predilection for creating stories of monsters isn't because King grew up reading EC comics and L'Amour did not.
Under normal circumstances, this doesn't mean a lot. King is quite capable of creating an EC Comics-inspired story without people telling him he has to separate his own ideas from what he read as a kid, if that's even possible. But if an AI is remembering the stories and King is remembering the stories, it doesn't mean that the AI has an unfair leg up.
I understand apprehension and concern on this subject, but if your reaction is to pretend that human artists aren’t influenced by huge amounts of existing material, you’re creating a larger fantasy than King ever wrote.
Hmm
The part about naked teenage girls is horrible. AI-generated CSAM.
What does Deviant Art have to say in its defense???
Re:
Their defense in the talk with RJ Palmer was basically: "Yeah, we thought about doing it ethically, but that would have needed years of time, so we didn't do it. Also, everybody else is doing it too, so no point in not doing it."
Re: DeviantArt
The difficulty with CSAM is a combination of very unsympathetic clients (eww, who is turned on by THAT??? you sick fucks!) and the assumption that children are brought to harm by the very creation of such material. (We must save the children!).
The assumption of harming children gets attenuated if you have what amounts to a computer powered imagination (the AI) generating the pictures. This is especially true if you recognize that much of what makes something pornographic is what goes on in the mind of the viewer and not necessarily what’s actually in front of the viewer. See for example the famous picture of the napalm girl taken during the vietnam war.
Finally, I’d be highly curious to see some science on whether and when pornography tends to increase or decrease acting out on what is depicted or described, especially where violence is involved. I suspect it’s mostly an extended moral panic, like the one over violent video games, but I have no data.
Re: Re:
Some say that all artists are babies, but not everyone who is an artist harasses people who are having fun with the technology. Like that comment on a YouTube video: "AI isn't meant for art or labor." (This was part of the comment.)
Let me get this straight:
Users can upload 20 pics of ANYONE and create photos that become more and more realistic
And no one asks the obvious “What could go wrong?” Question?
Re:
It’s not “no one” … just as I did here, there are articles raising this question.
Now, we need the developers to read/listen.
Take away the AI for a moment..
What would be the legal ramifications if I had a friend who was a particularly talented artist, who then sold paintings on similar requests? "A portrait of my wife as painted by [artist]," or something along those lines?
While I do find something objectionable about an artist's work being used to train an AI to sell other art in that artist's style without permission, compensation, or even knowledge, I can't completely see the difference between that and a person or group studying that same artist's work and then selling custom work in that same imitated style.
If people copying an artist's unique and identifying style to create new works for profit is illegal, or at least legally actionable, then it seems the right or wrong of it is settled; it's just a matter of setting the table for establishing that the AI is just a proxy, and the creators or wielders of such AI are equally at fault as any real person would be.
If a person, group, or company can indeed use a stable of in house artists to create and sell pieces that are “in the style of…”, then there’s your starting point. As is usually the case, however, it will probably be settled in a court room somewhere, and the decision will go to whichever side shows up with the lawyer that has the most charisma in the eyes of a jury.
All of that aside, this is all vaguely terrifying, for a variety of reasons. For one thing, go hit up dall-e and ask for “mechanical overlords by h.r. giger”
How much damage has the legal system suffered with the proliferation of digital image manipulation over the past 20 or 30 years? I remember when Photoshop, and the like, (nods to Corel PhotoPaint) were seen as the beginning of the end of photographic evidence being useful in court. We seem ok there, is this that much different?
Re:
Here is the very clear difference.
The person is interpreting the work and making their own version of it. They are emulating a style, but the result will always be influenced by their own take on it, and if they have the mechanical skill to emulate it perfectly, they have EARNED it: you are paying for that mechanical skill and precision. The artist has improved and learned not just by looking at H.R. Giger but by studying what the artistic composition is and how to use it effectively, and they translate that into the work they produce.
The AI does not do that. It takes the previous person's work, derives a mathematical formula (the lines should be spaced this far apart, blah blah blah), then takes that person's other Giger-style work, then other people's Giger-like works, then Giger's own works, and spits out a composite of those things. The more real people's art it can copy and mathematically break down, the better the results. There is no skill involved, there is no interpretation; it is a mathematical output based on the prompts. The output's quality is directly dependent on the input's quality, and it will never improve beyond that without better-quality input. This is an inherent limitation of how AI development functions. As a result, it will spit out "images" based on the input it has, it will make the same errors, it will lack the same cohesion, but eventually, through spamming the generate button, it may produce something of sufficient quality or accidentally get something right. You will then pick out the one that was "right," which tells the AI "this is preferred." It operates entirely on input, without any interpretation or skill involved.
This is more equivalent to tracing, something artists look down on. Copying someone else's work or style without your own interpretation has never been lauded unless it is a display of extreme skill.
People interpret and do not require things exactly; AI requires input to create something of quality. AI is useless without the artist, despite how many people say it is a replacement for the artist.
Re: Re: Can you clarify?
First, I think peddling pictures "in the style of" an artist, using that artist's name, reputation, or creativity without any sort of authorization, licensing, payment, or even notification is shitty, and I believe it's fairly simple to derive said shittiness without putting too much strain on yourself. But I can't quite nail down the point of demarcation.
Your comment seems to apply value to the produced artwork based on how much you seem to think a mimicking artist might appreciate and interpret the work that they are learning from. And, if they have the “mechanical skill to emulate it perfectly” then they have “earned it”…”It” being the money made by selling art while employing style and technique taken and copied with no knowledge or authorization of the original artist. If it is the amount of effort involved that determines the right to profit, then I would submit to you that the computing industry has put more time and effort into the development of an AI capable of this than any of us could ever dedicate to anything.
Anyway…I seek clarity:
Scenario A: AI looks at various samples of a particular artist’s work, uses some algorithms to analyze some stylistic properties, and then uses that information to mimic the style when producing an image of something rendered in the style of said artist. Customer pays person/group/company who runs the AI, the original artist receives nothing.
Scenario B: Person looks at various samples of a particular artist’s work, sees how the brush strokes work, understands what goes into the style. Person then paints custom pictures upon request, in the style of said artist. Customer pays person, (group, or company that employs person), the original artist receives nothing.
From the original artist’s perspective, what is the difference?
Re: Re:
Uh huh. I’ll take a Rothko, a 1920’s Mondrian, and a Pollock, please. And some Richard Prince.
I joke, but not too much. Your criticisms are basically exactly the sort of thing people used to say about photography; that it was cheating, and not really art. Turns out no one really cares.
It’ll be a long time, if ever, before a computer can recognize a work of art and make a meaningful decision about it, but so long as a human operator is asking for, say, a diptych of a velvet Elvis and a sad clown, and can pick out the ones they like from a menu of just-now generated examples, the problem is not a real hurdle, and the human is happy, so why should you be so upset?
Re: Re: Re:
Because it's trained on the work of people who did not consent. This is not a person getting inspiration or learning; this is a machine being fed data.
AI in other fields has to obtain permissions and copywrites for many things it does, so why not art?
Re: Re: Re:2
And guess what drives inspiration: a person experiencing all sorts of inputs, that is, feeding themselves data.
Re: Re: Re:2
Well, let’s look at another field.
A human who is well read and who has learned to use various information archives can use their knowledge to help guide people to works that they’re looking for. For example, if I were working in a library and someone asked me to guide them to a play which they can’t remember the title of, but one of the main characters was named Romeo, and another was named Juliet, I would probably suggest that they look at Shakespeare’s play Romeo and Juliet. I would rely on my knowledge of the play to give them that advice.
A computer program into which we have input copies of lots of works, and which uses AI to analyze them, could probably offer similar advice.
There’s no good reason to allow the former to be legal but to ban the latter, especially as the latter might be more effective, easier to use, etc.
And that’s what the courts decided in ruling that Google Book Search, which uses a database of lots and lots of scanned books, is legal. Other big parts of the ruling involved that you could not effectively pull the book back out again, and that it was no substitute for the books and didn’t harm the market for the books.
What’s the difference here? So long as you can’t tell the software to make you an exact copy of a preexisting copyrighted work, AI that generates art seems to be on solid legal ground.
Re: Re: Re:2
Does it need permission to get input data? Talking about "many things" isn't really useful. Is permission required for this specific thing? I have not heard of any copyright (which is how it's spelled, not "copywrites") case based on AI input data, so to the best of my knowledge the situation is unclear.
Re: Re: Re:2
Quote: “Because its trained on peoples work who did not consent. This is not a person getting inspiration or learning, this is a machine being fed data.”
A living human being also learns by… "feeding" oneself data.
Quote: “I have not heard of any copyright (which is how it’s spelled, not “copyrwrite”) case based on AI input data”
Well, for example, we owe the very existence of much free and open-source (FOSS) software to its authors having had the ability to analyze the code of copyrighted computer programs. Without that ability, it would not have been possible, I'm afraid…
Re: Re: Re: umm
By that point the AI has probably destroyed humanity and we are all cyborgs who forgot what art is. I don't see photography as bad. Plus, cameras helped kickstart animation.
Re: Re:
What does that have to do with the law? Whether a work is infringing on another work is not decided based on the skill, learning, or interpretation of the creator.
Re:
Copying the style of a work is what creates art movements, music, literary and other genres in culture.
Oof...legal stuff
So, to poke at my own last comment a bit, I just found myself wondering how long it will be until (if not already) you can put the spouse-following private detective out of business by taking a few pictures of your bedroom, your spouse, and your neighbor, and commissioning the creation of a couple of video clips documenting the "consummation" of a suspected affair.
Re:
And conversely how long before a public figure claims the video or photo of them in a compromising situation is an AI fake?
Re: Re:
Didn’t this already happen with a certain politician already?
The deepfake video was made by the “conservatives”, as I remember it…
Re: Re: Re:
Probably, though I don’t remember hearing about it.
1. It’s NOT the END of human creativity
Of course it's not.
That's a human need.
But that's so not why artists are upset.
It's a threat to many people's LIVELIHOOD, through people who stole their work, put it into a blender, and now openly talk about replacing said artists.
And if you know that your work, your soul, your style can now be put into these AIs just to generate endless copies of your style, many people will simply stop working on their art, since there is no longer a way to protect it or a chance of making an income with it.
And all of this just so some greedy tech-bros can make profit.
Re:
Quote: "It's a threat to many people's LIVELIHOOD"
So the question is not about creativity or hobby. It is about monetary concerns.
However, in a free-market economy nobody is guaranteed their business model will be sustainable in perpetuity. Quite the contrary: technological innovations can render, and throughout recorded history have rendered, some business models obsolete.
For example, the late 18th- to 19th-century industrial revolution meant "loss of established livelihood" for many individual craftsmen, since machine-produced goods drove their handcrafted goods off the market.
PS: Also worth noting that even in the 21st century, craftsmanship and hand-crafted goods have their own market niche; however, the mass-produced, economy-price goods sector is firmly dominated by Chinese industrial conglomerates using increasingly robotized production methods and techniques.
PPS: In a socialist economy (which I personally consider morally superior to a capitalist one) it would also not be the case that one is entitled to unconditionally make a living in one's desired field of occupation. Instead, jobs would be planned according to society's needs for goods, R&D, and services, and a person would choose one of those available jobs based on one's skills and abilities (with the option to learn the necessary skills if one has the talent for the desired job but not the education or training needed).
Begs the question
When AI is done eating the world, where will we live?
Re:
On Mars. With Musk.
Oh, the horror.
No Guardrails Needed
The only guardrails needed are the ones that stop people from stopping other people from doing what they want. The premise behind banning child pornography was that it involved photographing real children in real illegal activity, and permitting it would encourage criminals to produce more and thereby harm more actual children. It was not supposed to be a ban on wrongthink, and so, for example, written stories of child pornography are still not illegal. Having computers generate images of child pornography involving no actual children should not be illegal either.
Re:
What the F is wrong with you? Nothing should encourage perverts to create CSAM images – real or fake. Jesus.
Re: Re:
You do not get to decide whether other people are perverts, or be able to limit their behavior, as long as that behavior does not involve sexual activity with real people who have not or cannot have given consent. Otherwise it’s none of your business.
Re:
How do you feel about the fact that law enforcement uses CSAM images to help find and rescue exploited children, and if the market is flooded with AI generated ones, that will flood the space and make it much, much harder to find the children actually being exploited because it will be that much more difficult to figure out which images involve actual children?
Re: Re:
I feel that this is an excuse to continue persecuting people who have committed no crime. No one should be responsible for curtailing their legal behavior in order to make it easier for law enforcement to do their jobs.
Your buddy Tim Cushing writes endlessly with disapproval about pretextual traffic stops. You people here are opposed to broken-windows policing and stop-and-frisk. But all of a sudden, “think of the children” becomes a valid reason when you don’t like a certain class of people. Not surprising, but sad.
Re: Re:
Lemme see if I have this right first because I don’t want to misinterpret you and argue against a straw man. Are you saying that police look at images to determine if they depict an actual real life crime? Or that they look at images to LEARN what real crimes look like?
If it’s the former, why is that any different from police looking at photorealistic images of any other kind of crime? How is anyone supposed to find video of bank robberies amid all the Hollywood movies depicting one? One would think police should be conducting actual investigations based on verified real world sources like cameras or testimonies rather than trolling through online porn sites hoping someone has photographed a crime in order to jerk off to it and then shared it with others.
If it’s the latter, then all hail AI, because now LEOs can train to spot real CSAM images (which perpetuate harms to the victim I guess, somehow) without storing or using real CSAM images (which would necessarily perpetuate harms to the victim). They don’t train their shooting skills on live people (at least they aren’t supposed to). Why should they train that particular skill in such a way?
Re: Re: Re: Identification
I think he means for identification purposes…As in, if pictures or videos show up depicting young children in horrible situations, those images can lead to the identification and possible location of missing children. At the moment, if law enforcement stumbles across a flash drive with a load of images on it, the immediately known fact is that they are looking at a victim of a crime. If they can cross reference that and find the identity, then it becomes actionable evidence, that might ultimately lead to that child’s recovery.
If, on the other hand, the discovery of such images on a person’s flash drive indicates nothing at all, as they might have been created with the latest in AI child-pr0n generation technology, then they don’t have any reason to try to identify and find the child pictured, which, in turn, leads to legitimate victims of such things being overlooked.
There is an argument to be made that if such material could be generated on demand, perhaps there would be far fewer children being victimized to begin with. I suppose that is possible, but as with regular p0rn, I think the average consumer of such materials would not be satiated for long, and the market for the real thing would again gain ground.
There are plenty of fake images and videos of celebrities out there that never seem to satisfy those who want to see such things, and those people will often decry and expose the fakes, as though the celebrities in question, once alerted to this fraudulent effort to deceive an adoring fan, would step in to comfort the aggrieved by providing pictures and video of “the real thing”.
Re: Re: Re:2
So how does that not justify banning horror movies? If I stumble across a hard drive with a video of someone being violently stabbed I immediately know I am looking at the victim of a crime. I can cross reference their identity and turn it into actionable evidence that might ultimately lead to the arrest of the murderer.
Re: Re: Re:3
Why? Your question here is a non sequitur. Or do you think horror movies somehow make it harder for police to investigate things?
How do you know? What if it’s from an indie horror movie you aren’t familiar with that was using some random extra as a victim?
I should point out that there exists fake snuff videos too, how would you know if you stumble upon one of those if it was real or not?
Nothing is as cut and dry as you think it is and flooding the internet with AI-generated CSAM-images will hurt investigations.
Regardless, the problem and its future solution may fall in the category of “The road to hell is paved with good intentions”.
Re: Re: Flooding the market with CSAM...
Mike:
As I said above, it’s complicated and I want actual data — we imagine these effects good and bad, but what happens in the real world? Yes, this is a highly charged subject, but children are best served by actual data and not by unsupported belief.
* How does the CSAM audience change its behavior with exposure to or more available CSAM?
* How would the CSAM production environment change if the market was flooded with AI generated CSAM?
* How many children does NCMEC help out of CSAM situations, and what proportion of the total children in CSAM situations is that?
* How does the magnitude of this problem compare to that of in-family and out-of-family authority figures abusing kids, something the catholic church has a long-running scandal with?
* Could CSAM be better addressed by working on the runaway kids problem? Better parent training? (Aside: I remember my mom mentioning that the neighbor cop predicted the neighbor’s, white, privileged kid at 10 years old was going to end up involved in the juvenile justice system based on his behavior patterns).
Unfortunately, as with revenge porn and teens sexting, there’s some absolutely awful people both outside and inside of law enforcement. This week we have news of a school official criminally charged for documenting sexting amongst his students as part of his disciplinary process.
I hope you’ll have a guest post on this topic soon!
The applications
The author here was beautiful before AI started generating these images for her. It’s not the case for most people. So I agree that perfecting social media profile pictures will help this technology become more popular.
But it would be more useful and engaging if you could use it in gaming to represent yourself as a player. That’s where the money is.
Brace yourselves, a wave of Avatars is coming
What would start as the new cool thing (a bald guy with hair, that’s harmless) will end as an episode of Black Mirror (everybody using their fake image).
It’s like photoshop on AI steroids
It will get more people who weren’t creative before to become creative. I love it.
Re:
No.
Most people are not even that creative to begin with. Or want to be creative at all.
The cryptoscammers pushing hard for this want to replace artists and photographers. And it’s a lot more than just economics can predict.
Unfortunately, the cryptoscammers also don’t seem to want to learn about ethics and, like copyright maximalists, appear to want to enslave the creatives as well.
I look at this technology and what comes to mind are toys like spirograph and pendulart.
In both cases a mechanical system with limited input by the “artist” produces a work that may or may not be pleasing to look at.
The former is a kids’ toy, yet the latter, scaled up, is used to produce commercially viable works.
My observation is that this is not much different from loading a bunch of paints onto a pendulum, sending it swinging, and having it drip the paint onto a canvas.
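The “limited input, mechanical output” point is easy to see in the math: a spirograph traces a hypotrochoid, a curve fully determined by just three numbers (ring radius, wheel radius, and pen offset). A minimal sketch of that curve, using the standard hypotrochoid equations (the function name and step count are illustrative):

```python
import math

def hypotrochoid(R, r, d, steps=2000):
    """Points on the curve a spirograph pen traces: a wheel of
    radius r rolling inside a ring of radius R, with the pen at
    offset d from the wheel's center."""
    points = []
    turns = 3  # for R=5, r=3 the curve closes after 3 revolutions
    for i in range(steps):
        t = 2 * math.pi * turns * i / steps
        x = (R - r) * math.cos(t) + d * math.cos((R - r) / r * t)
        y = (R - r) * math.sin(t) - d * math.sin((R - r) / r * t)
        points.append((x, y))
    return points

pts = hypotrochoid(5, 3, 5)  # a classic multi-lobed spirograph figure
```

The “artist” supplies only R, r, and d; everything else follows mechanically, which is exactly the analogy the comment is drawing.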
Starving AI Artists are not trivial
After mixing every word in the dictionary, I’m an F’ing poet 🙂
Mixing with depraved emotions only makes more depraved emotions. AI was trained on depraved emotions
The only thing that really terrifies me about AI art is knowing that sometime around late 2023 or early 2024, The House Of Mouse will announce that their in-house AI has written all the stories that can ever be imagined (or at least so close to every conceivable character and plot structure that they can credibly sue for infringement) and it becomes impossible to create fiction without violating their copyright.
Re:
That idea has been explored before:
https://en.wikipedia.org/wiki/The_Library_of_Babel
Re: Re:
The key difference between that and current reality being that the theoretical nature of such a thing is no longer “no one knows how to do such a thing much less possesses the power to make it so” and has affirmatively become “no one has done it yet… that we know of”. For all we know, Disney could at this very moment be tossing every single combination of keywords into an AI to produce hundreds of feature-length movie scripts an hour, super hero comic books by the second, and then attributing all of it to Bob Smith, attorney at law, as a work for hire. They don’t need to actively do anything with them. All they need to do is be able to find the ones you’ve unwittingly infringed upon.
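For scale, a back-of-the-envelope count shows why “every single combination” runs into astronomical numbers almost immediately (the vocabulary size and text length here are purely illustrative assumptions):

```python
# Assume a modest working vocabulary and a short fixed-length text;
# count the distinct word sequences an exhaustive generator would face.
vocab_size = 1000   # illustrative: 1,000 distinct words
text_length = 12    # illustrative: a 12-word snippet
sequences = vocab_size ** text_length
print(f"{sequences:e}")  # on the order of 10^36 sequences
```

Even at billions of outputs per second, enumerating that space would take longer than the age of the universe, so any real attempt would have to be targeted rather than exhaustive.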
Re:
Well, copyright, unlike patents, does not require novelty (ie that something has never been done before). Instead, it requires originality (that your work was not copied from a prior work).
So creating every permutation of a work is pointless unless you can show that a later author copied from your collection. If it’s a popular song on the radio, maybe you have a shot at saying that the second author must have heard it, but an online collection of, say, every possible haiku, will do you no good unless the site logs show that they actually read the one you allege was copied.
But isn’t there an advertising market for data from social media networks? It seems like what’s happening to artists is the equivalent of having an account signed up for you, without consent, on a fresh SM site, using data from an SM site you didn’t know you had signed up on.
Uncopyrightable?
What if AI art is made uncopyrightable? 🤔
Re:
Right now, it is.
At least while the courts are willing to be reasonable about it.
An Update from Stable Diffusion
Stable Diffusion made copying artists and generating porn harder
The Verge, by James Vincent
https://www.theverge.com/2022/11/24/23476622/ai-image-generator-stable-diffusion-version-2-nsfw-artists-data-changes
What has been removed from Stable Diffusion’s training data, though, is nude and pornographic images. AI image generators are already being used to generate NSFW output, including both photorealistic and anime-style pictures. However, these models can also be used to generate NSFW imagery resembling specific individuals (known as non-consensual pornography) and images of child abuse.
Discussing the changes to Stable Diffusion Version 2 in the software’s official Discord, Mostaque notes this latter use case is the reason for filtering out NSFW content. “can’t have kids & nsfw in an open model,” says Mostaque (as the two sorts of images can be combined to create child sexual abuse material), “so get rid of the kids or get rid of the nsfw.”
Thank you for this article, Dr. Weiss-Blatt!
It’s actually the best post I’ve ever read on TechDirt, and maybe I’ll reference your work in what I’m writing about how generative art will create a larger demand for processing than other recent fads have, as is implied by the point you make in “2. The path to the masses” above.
Digital vs "Art"
I’ve read most of the comments and have a question. I get the inherent conflict of using prior art to train the AI. But at the moment, the AI can’t produce an actual oil painting, just digital works, is that right?
If true, then other than being able to create (and presumably print) a digital piece of art, I can’t buy an original painting, right? If there is no original work, do these AI imitations of a style have any effect on the value of an original artist’s work?
Re: Digital art vs paintings...
I don’t think AI-created digital art will have much effect on actual paintings…I’ve seen a few actual famous paintings in museums, but when you get to my house, my living room has a bas-relief, but everything else is color-printed — typically straight photographs, but also some favorite greeting cards and a colorful box with a hurricane lamp showing abstract, discrete candle flames in the background.
Not that AI generating pictures won’t have a huge effect on digital artists — if interesting images can be created in 20 minutes or an hour, it may be much quicker than the “usual” set of digital image manipulation and creation tools. This will, of course, increase the flood of content.
What conflict? Every human artist has looked at, learnt from, and emulated the style of prior works. Training an AI on extant works is no different from what humans do with them.
Darn, becoming an artist was my plan B if AI ever took my job: Dalle2 Meme
Plan C is being a musician, and I bet that is next in line.
ProfilePictureAI added a lot of “default” prompts
The users can choose from tens of options!
So, they listened to your advice? 😉
Anyway, it’s more user friendly now.
Laws need to be tightened regarding AI art
Despite barring obvious terms that allow for exploitation, AI art generators like Google’s dreambooth allow a LOT to slip through the cracks, which frankly are more like chasms. I noticed when playing around with it when I typed in something as a joke I thought for sure it wouldn’t allow, it generated art of it. I did some more testing and by using more “innocent” terms arranged in a way to create something not-innocent, some really disturbing images were generated and I think the fact that Google isn’t doing more diligent checks of their software is frankly infuriating. I’m still trying to get some of the images out of my head that I saw, but instead I may compile some and send them to Google, letting them know that their software allows such awful things to be created. If they do nothing, I may do the one thing that will work, public shaming on social media. If Google does nothing, hopefully legislators do, otherwise it’s going to be the new “dark web” of awful content.
It’s a pretty interesting topic. Some people think it’s cool because it’s more affordable and accessible than traditional artwork. But others worry that it might decrease the value of original pieces since it’s not as unique. However, there are some advantages to using custom AI generated art for businesses. With the help of algorithms, companies can create art that perfectly matches their brand and target audience. This can help them stand out from competitors and establish a strong visual identity. What do you think about all of this?
Thank you for this link
Nice background material/ intro to Generative-AI