As they are part of a corporate entity selling access to their words, I say hold them to professional standards.
They misidentify a mushroom in a mushroom-identifying app? The company is liable as if the chatbot could complete the task, with impersonation of an expert as the only out.
Canning advice gives botulism? Poisoning.
Medical advice was wrong? Malpractice. It should not be giving medical advice.
One thing I am always aware of about age gating is that there is never an opt-in for adults. It's always "YOU ARE A MINOR OR YOU CONSENT TO THE THINGS WE DO NOT ALLOW COMPANIES TO DO TO MINORS".
When you are that wrapped up in age verification, you miss the more obvious and less creepy ways to handle the problem.
Which is probably because it was never about the problem itself.
Actually, I have a relevant story that I heard elsewhere on the internet. The story goes like this: early in development, GLaDOS was mean. Players were impacted by her initial personality and ultimately felt horrible.
The developers saw this and reworked the character to be more petty, leading to the character we have today.
In other words, video game developers are cognisant of the harms their works can do to players and will adapt accordingly.
Four seems really young... but also I believe these are the wrong analogies. Would you have let your four year old play with a chainsaw? My guess is no - best case scenario, you still want them large enough that they can counter the forces of the chainsaw, focused enough to keep control of it, and mature enough to understand and heed safe chainsaw use.
Note that this is for the chainsaw, which we as a culture do understand and have good safety practices for, and which is sold by companies that are paying attention to issues like user safety. AI is not well understood, and the companies involved are encouraging unsafe practices.
If Knife Company advertises their knives as useful in roughhousing, then absolutely.
If Alice and Bob are fourth graders and the knife company had a whole campaign to get a knife to every elementary school child, then also absolutely.
The analogy falls apart with knives not because knives are not dangerous, or even because you went with a third-party example, but because knives:
- Are reasonably designed. When used correctly (a well-known, well-advertised way), the user will not injure themselves.
- Are clearly designed rather than deceptively designed (Note how much of AI design is about fooling the user. Knives, on the other hand, have a clear blade and handle. Always honest.)
- Are clearly advertised for their target demographic and use (To ADULTS for CUTTING. Never to teens. Never for use as a microphone or a pen.)
- Are always packaged as their own product and labeled appropriately. As in labeled SHARP. And NOT sold as a replacement for Spoons or Lamps.
Also, knife companies do not actively encourage people to carry their products onto buses or into bedrooms. They encourage people to store their knives in the kitchen when not in use.
In other words, when you look at the knife companies, they are not actively encouraging dangerous behavior like AI companies are. In my mind, that counts for a lot.
We also should not forget, though, that we have children who are fairly restricted in where they go. A child who is not allowed to go to the park on their own is not exactly likely to turn into a teen who hangs out at the mall.
If we want independent people who use third spaces... we have to make laws and culture that support people being independent and using third spaces.
Is suicide a complex topic? Yes.
Do we fail teens in many ways? So much yes.
But speaking of uncomfortable problems, we need to discuss the one where we have created a norm that a company can just... dismiss accountability for their actions as long as they design themselves to do so:
- Sites that demand an arbitration waiver if you wish to read an article
- Contracts making realtors not responsible for gross negligence
- Long terms of service no one reads
- This weird thing where you agree to the terms on your product after paying for it and installing it
- Advertisement networks where no one can be held responsible for copyright infringement, because the only one allowed to file a claim is the copyright holder, but there is no way to directly reference the advertisement so the rights holder can see it
And now, we have AI. The latest in tech designed so that no one is responsible. So that the words that come out of it have no weight. No monitor. No judgement. No consequences when something blatantly untrue is said.
We have bad precedent here. These chatbots should be a legal risk, because they are a risk. This is not the first person pushed to extremes by AI. That we have the so-named "AI Psychosis" and are continuing to push AI into private, unmonitored spaces is, in my mind if not the law, a sign that companies are willfully selling a product they know may be dangerous, but don't care because they have insulated themselves from the consequences of such a product.
It's not the tech. It's the failure of our system to ensure that actual proven dangers are taken care of. It's our acceptance of companies' ability to ignore the dangers that their practices create.
It’s almost as if adults just don’t want kids to gather with each other anywhere at all
Yeah that's my bet.
Kids have a strong sense of Fair, no dependents to worry about, and have yet to be broken. I can see how that would scare those trying to control them.
But for now, we’re just going to have to suffer through a bunch of people who refuse to cede power, no matter what any other branch of the government says.
Except we don't in this case?
The judge said he is not a prosecutor. So he shouldn't be treated like one. Revoke any access keys he has, change any locks that need to be changed, and then either ignore or arrest him like you would if I walked in and pretended to be a prosecutor.
Send a memo to all of his subordinates saying "He is not your boss. He has no authority," just to make it easier on them.
Then just... continue doing that?
Lying under oath does happen to be one of the few crimes we have proven to be impeachable.