NYC Officials Are Mad Because Journalists Pointed Out The City’s New ‘AI’ Chatbot Tells People To Break The Law
from the I'm-sorry-I-can't-do-that,-Dave dept
Countless sectors are rushing to implement “AI” (undercooked large language models) without understanding how they work — or making sure they work. The result has been an ugly comedy of errors stretching from journalism to mental health care thanks to greed, laziness, computer-generated errors, plagiarism, and fabulism.
NYC’s government is apparently no exception. The city recently unveiled a new “AI” powered chatbot to help answer questions about city governance. But an investigation by The Markup found that the automated assistant not only doled out incorrect information, it routinely advises city residents to break the law across a wide variety of subjects, from landlord agreements to labor issues:
“The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.”
Folks over on Bluesky had a lot of fun testing the bot out, and finding that it routinely provided bizarre, false, and sometimes illegal results:
There’s really no reality where this sloppily-implemented bullshit machine should remain operational, either ethically or legally. But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”
But one administration official complained about the fact that journalists pointed out the whole error-prone mess in the first place, insisting they should have worked privately with the administration to fix the problems caused by the city:
If you can’t see that, it’s reporter Joshua Friedman reporting:
At NYC mayor Eric Adams’s press conference, top mayoral advisor Ingrid Lewis-Martin criticizes the media for publishing stories about the city’s new AI-powered chatbot that recommends illegal behavior. She says reporters could have approached the mayor’s office quietly and worked with them to fix it
That’s not how journalism works. That’s not how anything works. Everybody’s so bedazzled by new tech (or keen on making money from the initial hype cycle) they’re just rushing toward the trough without thinking. As a result, uncooked and dangerous automation is being layered on top of systems that weren’t working very well in the first place (see: journalism, health care, government).
The city is rushing to implement “AI” elsewhere as well, such as with a new weapon scanning system that tests have found has an 85 percent false positive rate. And all of this is before you even touch on the fact that most early adopters of these systems see them as a wonderful way to cut corners and undermine already mistreated and underpaid labor (again see: journalism, health care).
There are lessons here you’d think would have been learned in the wake of previous tech hype and innovation cycles (cryptocurrency, NFTs, “full self driving,” etc.). Namely, innovation is great and all, but a rush to embrace innovation for innovation’s sake due to greed or incurious bedazzlement generally doesn’t work out well for anybody (except maybe early VC hype wave speculators).
Filed Under: automation, eric adams, hype, ingrid lewis-martin, innovation, language learning models, nyc, tech
Comments on “NYC Officials Are Mad Because Journalists Pointed Out The City’s New ‘AI’ Chatbot Tells People To Break The Law”
I love how they twist this.
“They should have approached the mayor’s office quietly, and worked with them”.
No.
The mayor’s office needed to maybe test the fucking thing before rolling it out directly to the public, where it subsequently recommended illegal actions that I’m sure the city would prosecute over.
Maybe the spokesperson for the mayor’s office should pull their head out of their self-entitled ass before speaking.
Re:
But being in power in New York is all about being a self-entitled ass! See the NYPD playing at being international.
Re: Re:
Yup. The only time we actually had a good mayor was Fiorello LaGuardia.
Which is also wrong, but that’s what you get when tipping is expected regardless of the kind of service you receive.
Re:
First, let’s include in the quote the word ‘wrong’ that you are referencing in your reply.
[Emphasis Mine] As best I can determine, you have changed the context of the word wrong here.
New York has a tipped minimum wage, which is lower than the standard minimum wage. Tips plus wages must still meet the standard minimum wage, and the cash wage must still meet the tipped minimum wage. It is factually accurate to say tips “….can count toward minimum wage requirements”. The word ‘wrong’ in the article relates to factual inaccuracy. Indeed, if we look at the Merriam-Webster definitions, the article is using adjective definition 3: not according to truth or facts; inaccurate.
You have noted a moral wrong: adjective definition 1 (not according to the moral standard; SINFUL, IMMORAL) or 2 (not right or proper according to a code, standard, or convention; IMPROPER). But by asserting that claims about the tipped minimum wage were “also wrong”, you tell me that the context should be adjective definition 3: inaccurate. It makes a muddle of your point and is easily misread.
This kind of context mistake can lead to deeper misunderstandings and, if not brought under control, can isolate you as your rhetoric holds less and less meaning to those around you.
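To make the two constraints concrete, here’s a tiny Python sketch of how a tip credit works. The rates are made up for illustration; they are not New York’s actual figures.

```python
# Hypothetical illustration of a tip credit (example rates, not NY's actual figures).
STANDARD_MIN_WAGE = 15.00  # assumed standard minimum wage, $/hour
TIPPED_MIN_WAGE = 10.00    # assumed lower cash wage floor for tipped workers

def meets_minimum_wage(cash_wage: float, tips_per_hour: float) -> bool:
    """The cash wage must meet the tipped minimum, AND cash + tips
    together must meet the standard minimum."""
    return (cash_wage >= TIPPED_MIN_WAGE
            and cash_wage + tips_per_hour >= STANDARD_MIN_WAGE)

print(meets_minimum_wage(10.00, 6.00))   # True: tips make up the difference
print(meets_minimum_wage(10.00, 3.00))   # False: cash + tips below the standard minimum
print(meets_minimum_wage(8.00, 10.00))   # False: cash wage below the tipped minimum
```

So tips can legally “count toward” the standard minimum (factually accurate), which is a separate question from whether an employer may simply take workers’ tips (they may not).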
Re: Re:
As best as I can determine, you lack the reading comprehension ability to know what AC was actually saying. Were you educated in an American high school, by any chance?
Translation of the second chatbot result:
Go, chatbot, go! LOL!
Hold up .. is this chatbot giving legal advice?
Re:
Yes and the city is indignant that it was publicly reported, after they rolled out a system that apparently no one tested, at all.
Re: Re:
Developers are usually bad lawyers, and lawyers… usually have more expensive fees than developers.
NYC: Does the chatbot work?
Lawyer: Please leave your message after the tone….
Dev: Yeah, it seems to. But ask it yourself.
NYC: It gives the weather accurately. Good job!
Re:
Basically, yes. Bad legal advice, too.
Re:
Matty will be furious!
Re: Re:
AI able to be 100% wrong about the law all the time puts bratty matty out of his job.
Re:
I thought one needed a license, pass the bar or something… wth?
Re: Re:
I’m not sure if that applies to the government itself. However, even if it theoretically did, who’s going to arrest the government?
Re: Re:
Only in Bratty Matty’s world. Anyone can give legal advice, you only have to be a lawyer if you want to charge for it.
Re: Re: Re:
“you only have to be a lawyer if you want to charge for it.”
This means they can not charge for use of their new toy – lol.
Re:
I am not a chatbot, but more importantly, I am not YOUR chatbot.
If you want competent chatbot legal advice, consult a competent, licensed chatbot in your area.
I don’t know that the first one is even answering incorrectly. It depends on what you mean by “lock”. In many buildings I’ve worked in, all the doors are locked when entering from the outside, so you need a badge reader, keypad, or even a physical key to get in, but exiting is just a push bar or a turn of the handle. In that case, I’d say someone who perpetually disables the lock in some way (either by explicitly disabling the locking mechanism or by propping the door open) could be fired. So I don’t think the answer is technically wrong, even if the question is phrased in a way that invokes thoughts of the Triangle Shirtwaist Fire.
For most of the others it also seems pretty clear what is happening: the “AI” is focusing on one or two words (such as “violating policy”) and ignoring the context. The answers read like this isn’t a large language model (such as ChatGPT or similar) or another system newly created in the latest “AI” explosion; it behaves much closer to the chatbots that have been around for several decades.
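That decades-old keyword-matching behavior can be sketched in a few lines of Python. This is purely illustrative (not NYC’s actual implementation): the bot fires the same canned answer whenever a keyword appears, regardless of what was actually asked.

```python
# Toy keyword-matching "chatbot": first matching keyword wins, context ignored.
# Canned rules are invented for illustration.
RULES = {
    "tips": "Employers may count tips toward minimum wage requirements.",
    "violating policy": "Employees who violate policy may be terminated.",
    "permit": "Most construction work requires a permit.",
}

def keyword_bot(question: str) -> str:
    """Return the canned answer for the first keyword found in the question."""
    q = question.lower()
    for keyword, answer in RULES.items():
        if keyword in q:
            return answer
    return "I'm sorry, I can't help with that."

# The same canned answer fires whether or not it actually fits the question:
print(keyword_bot("Can my boss take my tips?"))
print(keyword_bot("Can I be fired for reporting my boss for violating policy?"))
```

Note how the second question gets an answer about the *employee* being terminated, because “violating policy” is the only thing the bot keyed on.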
Re:
Agreed – there’s no intelligence in this “AI” at all, just keyword matching.
Although it totally should be legal to shop around for a different mafia protection racket with better rates/services ;p
Re:
You probably spotted it right out of the gate but went haring off in other directions: this is fairly obviously a reference to the Triangle Shirtwaist Factory Fire, where the doors were locked “to prevent pilferage and unauthorized breaks”.
Every time I see this stuff I want to scream that these LLMs are LITERALLY NOT DESIGNED TO PROVIDE CORRECT ANSWERS, just AN ANSWER. I wish I could buy billboards to spread this message along every interstate in the country.
Re:
billboards would be too expensive. I know a chatbot AI that can spread the word for you though.
🙂
Re:
Yeah, they are fiction generators first and foremost. The fact they are occasionally good at doing other things does not change that.
Re:
Some are, they just aren’t always reliably good at it yet. Odds are NY went with the lowest bidder and got the level of quality you’d expect from that.
Speaking just for myself, I’ve used LLMs to generate unit tests for software and gotten results better than those from a typical junior engineer.
Then what is the point of using it? If I have to check with a human to validate everything it says, why should I use it? Why not jump straight to the human? What is it saving the city if they still need to employ humans to validate the chatbot’s output?
Even if you’re willing to toss aside the ethical implications of giving jobs to machines, or the political implications of making the public face of your government a soulless AI, or the legal implications of potentially misleading constituents about what they very well might be asking in good faith—and clearly everyone involved here is willing to toss those aside—what purpose is this serving if its output still needs to be validated by a human?
Re:
it ensures the city’s PR department never risks running out of work
Re: Re:
Don’t forget all that juicy revenue when people act on the information, get charged with a crime, then fined harshly for daring to listen to an official government source for the exact information they requested prior, so they would avoid breaking the law.
Re: Re: Re:
Also see: IRS
Re: Re: Re:2
With you on this. Just did taxes and owed the state X.
Paid X.
Now, when I log into the state IRS portal, it shows me owing a little over $3, because “reasons”.
Had I not checked it, that $3 would just keep accruing penalties and interest until the end of time.
With you, friend. Fuck the IRS
Re: Re: Re:3
“when I log into the state IRS portal”
This is confusing, are you paying State taxes or Federal taxes?
We pushed this out to the public for anyone to see and use but how dare you publicly point it out.
NYC politicians. I only wish your lives get ruined.
I assume that the ML bot was only trained on things said by agents of the present city government.
I wonder if the law statutes were a part of its training. Apparently not, but why?
Re:
Copyright.
Re: Re:
The city must pay itself for the privilege of using its laws for AI to train upon? Is that even copyright that would be involved?
Re: Re: Re:
The bot uses Microsoft Azure, which allows people to make chatbots using existing AI models. I don’t think that NYC has trained any model yet.
Is there evidence for OP’s guess that the model used by the NYC chatbot wasn’t trained on law statutes? Even assuming otherwise, non-law training materials probably make up the majority. To get answers that sound more plausible (but are not necessarily more correct), the simplest way would be to train a model exclusively on legal statutes, court decisions, amicus briefs, legal definitions, etc.
Re: Re:
That could theoretically be the case, considering that the public domain rule applies at the federal level but not to lower-level governments. But do you have a source demonstrating that copyright in particular posed a significant obstacle to including law-related materials in the model’s training set? Also, NYC used Microsoft Azure’s AI platform and probably didn’t train its own model.
The mayor’s office would have done nothing of substance and played fuck-fuck games with the reporters.
This can’t be fixed, because it’s just bad.
No, they really shouldn’t have. Part of the point of journalism is to expose the faults of leaders and elected representatives—their words as well as their deeds—to the general public. We can then have that information at hand when the time comes to either re-elect or replace those leaders/reps. Hiding that information for any reason runs counter to that core principle of journalism.
Can we replace the politicians with AI? I mean, I know that’s how a great many apocalyptic stories start, but I feel like we could at least work around an AI by asking the right questions.
POL2000: What are you doing, Dave?
Dave: I’m fixing a fence.
POL2000: You can’t do that Dave. Building a fence requires a permit.
Dave: I’m not building a fence. I’m just fixing the existing one.
POL2000: Building a fence requires a permit. You are in violation of ordinance §34.399.
Dave: I’m placing placards saying “Reelect POL2000” on it so it’s not a fence.
POL2000: Political speech is protected by the first amendment, but you must take it down after Election Day or it will be demolished.
Dave: Sounds good. Do I have to remove the posts holding up the placards?
POL2000: Just the placards.
Dave: Perfect.
This just shows how inept they are — they think the ONLY issue is with the bad results of the LLM (if we can classify them as just “bad”) … as if there’s no issue at all with the string of *@#$-ups, at every level of city administration, that fed into the decision being made to release this.
Watch as people follow the legal advice this thing doles out (as they inevitably will), and watch as the courts say “This chatbot was acting in the city’s capacity, so you must honor what it says”.
Re:
The chatbot’s legal disclaimers probably mean that won’t happen in any court. The terms of use say:
I can only assume that Lewis-Martin got advice from NYC’s AI bot how to handle this.
Eating your own dogfood is supposed to be a good thing, but you’re supposed to pay attention to the taste.
So.
How do you insert an AI chatbot that has FULL knowledge of state and federal LAW
but DOES NOT allow for understanding a situation?
The system just picks up on key words; it won’t suggest that another choice may be a better solution if the person WORDED it wrong.
'How DARE you point out the emperor's lack of clothing?!'
But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”
An answer which perfectly showcases why the reporters were correct to go public with their information. The city is more concerned with keeping its own image untarnished than with admitting that they’ve pushed out a broken service, even when that service is handing out fraudulent legal advice. That makes very clear that if the reporters had gone directly to the city’s offices, they would have been ignored and people would have kept using the service without knowing how broken it is.
Inability to actually assist anyone or do a job properly seems in line with the bulk of NY/NYC government employees, but sure, the ridiculous fantasy level of stupid produced by AI is a whole other level.
To be fair, this is how it works with humans too
On every tipline I’ve called in my city, there’s a big disclaimer that says “our advice is not legally binding,” and an explainer that says just because so-and-so on the other end of the line says you can do X, the letter of the law wins out in the end.
For an example higher up in government, consider the RFK campaign and their back-and-forth with the elections offices in Nevada.
*emphasis mine
https://www.cbsnews.com/news/rfk-jr-threatens-lawsuit-nevada-over-ballot-access/
I don’t support the RFK campaign, but whatever you think about that campaign, this is what happens with humans on the other side of the line. At this point, every government helpline should have a flashing red sign that you shouldn’t listen to it.
Paging Philip K Dick... Paging Philip K Dick...
Phil, I hate to have to tell you this, but reality is catching up to — perhaps surpassing — your fantastical futuristic farces.