NYC Officials Are Mad Because Journalists Pointed Out The City’s New ‘AI’ Chatbot Tells People To Break The Law

from the I'm-sorry-I-can't-do-that,-Dave dept

Countless sectors are rushing to implement “AI” (undercooked large language models) without understanding how they work — or making sure they work. The result has been an ugly comedy of errors stretching from journalism to mental health care, thanks to greed, laziness, computer-generated errors, plagiarism, and fabulism.

NYC’s government is apparently no exception. The city recently unveiled a new “AI” powered chatbot to help answer questions about city governance. But an investigation by The Markup found that the automated assistant not only doled out incorrect information, it routinely advises city residents to break the law across a wide variety of subjects, from landlord agreements to labor issues:

“The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.”

Folks over on Bluesky had a lot of fun testing the bot out, and finding that it routinely provided bizarre, false, and sometimes illegal results:

There’s really no reality where this sloppily-implemented bullshit machine should remain operational, either ethically or legally. But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”

But one administration official complained that journalists pointed out the whole error-prone mess in the first place, insisting they should have worked privately with the administration to fix the problems caused by the city:

If you can’t see the embedded post, it’s reporter Joshua Friedman reporting:

At NYC mayor Eric Adams’s press conference, top mayoral advisor Ingrid Lewis-Martin criticizes the media for publishing stories about the city’s new AI-powered chatbot that recommends illegal behavior. She says reporters could have approached the mayor’s office quietly and worked with them to fix it.

That’s not how journalism works. That’s not how anything works. Everybody’s so bedazzled by new tech (or keen on making money from the initial hype cycle) that they’re just rushing toward the trough without thinking. As a result, uncooked and dangerous automation is being layered on top of systems that weren’t working very well in the first place (see: journalism, health care, government).

The city is rushing to implement “AI” elsewhere as well, such as a new weapon scanning system that tests have found has an 85 percent false positive rate. All of this is before you even touch on the fact that most early adopters of these systems see them as a wonderful way to cut corners and undermine already mistreated and underpaid labor (again see: journalism, health care).

There are lessons here you’d think would have been learned in the wake of previous tech hype and innovation cycles (cryptocurrency, NFTs, “full self driving,” etc.). Namely, innovation is great and all, but a rush to embrace innovation for innovation’s sake due to greed or incurious bedazzlement generally doesn’t work out well for anybody (except maybe early VC hype wave speculators).



Comments on “NYC Officials Are Mad Because Journalists Pointed Out The City’s New ‘AI’ Chatbot Tells People To Break The Law”

51 Comments
This comment has been deemed insightful by the community.
31Bob (profile) says:

I love how they twist this.

“They should have approached the mayor’s office quietly, and worked with them”.

No.

The mayor’s office needed to maybe test the fucking thing before rolling it out directly to the public and having it subsequently recommend illegal actions that I’m sure the city would prosecute over.

Maybe the spokesperson for the mayor’s office should pull their heads out of their self-entitled asses before speaking.

James Burkhardt (profile) says:

Re:

First, when making this quote, let us include the word ‘wrong’ you are referencing in your reply:

The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements)

[Emphasis Mine] As best I can determine, you have changed the context of the word wrong here.

New York has a tipped minimum wage, which is lower than the standard minimum wage. Tips plus wages must still meet the standard minimum wage, and the wage rate itself must still meet the tipped minimum wage. It is factually accurate to say tips “…can count toward minimum wage requirements.” The word “wrong” in the article relates to factual inaccuracy. Indeed, if we look at Merriam-Webster, we are using adjective definition 3: not according to truth or facts; inaccurate.
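
To make that arithmetic concrete, here is a minimal sketch with hypothetical dollar figures (actual NY rates vary by year, region, and industry):

    # Hypothetical tip-credit check. These rates are made up for
    # illustration; real NY minimum wage rates vary by year, region,
    # and industry.
    STANDARD_MIN_WAGE = 16.00  # assumed standard minimum wage, $/hour
    TIPPED_MIN_WAGE = 10.65    # assumed tipped cash wage floor, $/hour

    def is_compliant(cash_wage, tips_per_hour):
        # Both conditions must hold: the cash wage meets the tipped
        # floor, AND cash wage plus tips meets the standard minimum.
        meets_tipped_floor = cash_wage >= TIPPED_MIN_WAGE
        meets_standard = (cash_wage + tips_per_hour) >= STANDARD_MIN_WAGE
        return meets_tipped_floor and meets_standard

    print(is_compliant(10.65, 6.00))  # True: 16.65 >= 16.00
    print(is_compliant(10.65, 2.00))  # False: employer must make up the gap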

You have noted a moral wrong: adjective definition 1 (not according to the moral standard; sinful, immoral) or 2 (not right or proper according to a code, standard, or convention; improper). By asserting that claims about the tipped minimum wage were “also wrong,” you tell me that the context should be adjective definition 3: inaccurate. That makes a muddle of your point, and it is easily misread.

This kind of context mistake can lead to deeper misunderstandings and, if not brought under control, can potentially isolate you, as your rhetoric holds less and less meaning to those around you.

This comment has been flagged by the community.

Anonymous Coward says:

I don’t know that the first one is even answering incorrectly. It depends on what you mean by “lock.” There are many buildings I’ve worked in where all the doors are locked when entering from the outside, so you need a badge reader, keypad, or even a physical key to enter, but to exit is a push bar or door-handle turn away. So in that case, I’d say someone who perpetually disables the lock in some way (either by explicitly disabling the locking mechanism or by propping the door open) could be fired. So I don’t think the answer is technically wrong, even if the question is phrased in such a way as to invoke thoughts of the Triangle Shirtwaist Fire.

Most of the others also seem pretty clear about what is happening. The “AI” is focusing on one or two words (such as “violating policy”) and ignoring the context. The answers read like this isn’t a large language model (such as ChatGPT or similar) or another system newly created in the latest explosion of “AI”; it behaves much closer to the chatbots that have been around for several decades.
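
For illustration, here is a toy sketch of the decades-old keyword-matching approach being described (the rules are hypothetical, not the city bot’s actual code):

    # Toy keyword-matching "chatbot": it latches onto a keyword and
    # ignores all surrounding context. The rules here are hypothetical.
    RULES = {
        "violating policy": "Employees who violate policy may be terminated.",
        "lock": "Employers may secure their premises, including locking doors.",
    }

    def answer(question):
        q = question.lower()
        for keyword, canned_reply in RULES.items():
            if keyword in q:
                return canned_reply  # first keyword hit wins; context is lost
        return "I don't have information on that."

    # The fire-safety context disappears entirely:
    print(answer("Can my employer lock all the exits during my shift?"))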

Anonymous Coward says:

Re:

Most of the others also seem pretty clear about what is happening. The “AI” is focusing on one or two words (such as “violating policy”) and ignoring the context. The answers read like this isn’t a large language model (such as ChatGPT or similar) or another system newly created in the latest explosion of “AI”; it behaves much closer to the chatbots that have been around for several decades.

Agreed – there’s no intelligence in this “AI” at all, just keyword matching.

Although it totally should be legal to shop around for a different mafia protection racket with better rates/services ;p

Anonymous Coward says:

Re:

I don’t know that the first one is even answering incorrectly.

My garment factory has a stringent safety policy. Part of the safety policy requires that all doors be locked.

You probably spotted it right out of the gate but went haring off in other directions. This is fairly obviously a reference to the Triangle Shirtwaist Factory Fire, where the doors were locked “to prevent pilferage and unauthorized breaks.”

Anonymous Coward says:

Re:

these LLMs are LITERALLY NOT DESIGNED TO PROVIDE CORRECT ANSWERS

Some are, they just aren’t always reliably good at it yet. Odds are NY went with the lowest bidder and got the level of quality you’d expect from that.

Speaking just for myself, I’ve used LLMs to generate unit tests for software and gotten results better than those from a typical junior engineer.
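
The mechanics, for the curious, look roughly like this (a sketch assuming the OpenAI Python client; the model name, prompt, and file are illustrative, and the output still needs human review):

    # Sketch of asking an LLM to draft pytest tests. Assumes the
    # OpenAI Python client (pip install openai) with OPENAI_API_KEY
    # set in the environment; billing.py is a hypothetical module.
    from openai import OpenAI

    client = OpenAI()
    with open("billing.py") as f:
        source = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You write concise pytest unit tests."},
            {"role": "user", "content": f"Write pytest tests for this module:\n\n{source}"},
        ],
    )
    # Treat the result like a junior engineer's draft: review before committing.
    print(response.choices[0].message.content)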

This comment has been deemed insightful by the community.
David A (profile) says:

NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”

Then what is the point of using it? If I have to check with a human to validate everything it says, why should I use it? Why not jump straight to the human? What is it saving the city if they still need to employ humans to validate the chatbot’s output?

Even if you’re willing to toss aside the ethical implications of giving jobs to machines, or the political implications of making the public face of your government a soulless AI, or the legal implications of potentially misleading constituents who very well might be asking in good faith—and clearly everyone involved here is willing to toss those aside—what purpose is this serving if its output still needs to be validated by a human?

Anonymous Coward says:

Re: Re: Re:

The bot uses Microsoft Azure, which allows people to make chatbots using existing AI models. I don’t think that NYC has trained any model yet.

Is there evidence for the OP’s guess that the model used by the NYC chatbot wasn’t trained on law statutes? Even assuming it was, non-law training materials probably make up the majority. To get answers that sound more plausible (but are not necessarily more correct), the simplest way would be to train a model exclusively on legal statutes, court decisions, amicus briefs, legal definitions, etc.
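
For what it’s worth, assembling that kind of training set is mechanically simple. A minimal sketch, assuming OpenAI-style chat fine-tuning JSONL (the statute pairing below is a single illustrative example, not a vetted legal dataset):

    # Sketch of building a fine-tuning file from statute-grounded
    # Q&A pairs. The JSONL chat format is OpenAI's fine-tuning format;
    # the data below is one illustrative pair, not legal advice.
    import json

    pairs = [
        ("Can my employer keep my tips?",
         "No. NY Labor Law § 196-d bars employers from retaining workers' tips."),
        # ...one (question, statute-grounded answer) pair per provision
    ]

    with open("statute_tuning.jsonl", "w") as f:
        for question, grounded_answer in pairs:
            record = {"messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": grounded_answer},
            ]}
            f.write(json.dumps(record) + "\n")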

Anonymous Coward says:

Re: Re:

I wonder if the law statutes were a part of its training. Apparently not, but why?

Copyright.

That could theoretically be the case, considering that the public-domain rule applies at the federal level but not to lower-level governments, but do you have a source demonstrating that copyright in particular posed a significant obstacle to including law-related materials in the model’s training set? Also, NYC used Microsoft Azure’s AI platform and probably didn’t train its own model.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

They should have approached the mayor’s office quietly, and worked with them.

No, they really shouldn’t have. Part of the point of journalism is to expose the faults of leaders and elected representatives⁠—their words as well as their deeds⁠—to the general public. We can then have that information at hand when the time comes to either re-elect or replace those leaders/reps. Hiding that information for any reason runs counter to that core principle of journalism.

Anonymous Coward says:

Can we replace the politicians with AI? I mean, I know that’s how a great many apocalyptic stories start, but I feel like we could at least work around an AI by asking the right questions.

POL2000: What are you doing, Dave?
Dave: I’m fixing a fence.
POL2000: You can’t do that, Dave. Building a fence requires a permit.
Dave: I’m not building a fence. I’m just fixing the existing one.
POL2000: Building a fence requires a permit. You are in violation of ordinance §34.399.
Dave: I’m placing placards saying “Reelect POL2000” on it so it’s not a fence.
POL2000: Political speech is protected by the first amendment, but you must take it down after Election Day or it will be demolished.
Dave: Sounds good. Do I have to remove the posts holding up the placards?
POL2000: Just the placards.
Dave: Perfect.

PB&J (profile) says:

They should have approached the mayor’s office quietly, and worked with them

This just shows how inept they are — they think the ONLY issue is with the bad results of the LLM (if we can classify them as just “bad”) … as if there’s no issue at all with the string of *@#$-ups, at every level of city administration, that fed into the decision being made to release this.

That One Guy (profile) says:

'How DARE you point out the emperor's lack of clothing?!'

But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system “may occasionally produce incorrect, harmful or biased content.”

An answer which perfectly showcases why the reporters were correct to go public with their information: the city is more concerned with its own image remaining untarnished than with admitting that they’ve pushed out a broken service, even when said service is handing out fraudulent legal advice. That makes it very clear that if the reporters had gone directly to the city’s offices, they would have been ignored and people would have kept using the service without knowing how broken it is.

Anonymous Coward says:

To be fair, this is how it works with humans too

Every tipline I’ve called at my city has a big disclaimer that says “our advice is not legally binding,” and an explainer that says that just because so-and-so on the other end of the line says you can do X, the letter of the law wins out in the end.

For an example higher up in government, consider the RFK campaign and their back-and-forth with the elections offices in Nevada.

Rossi also linked to an email exchange on Nov. 14 between the campaign and the secretary of state’s office in which the office erroneously said the petition did not require a named running mate.

“Does the vice presidential candidate have to be listed on the petition forms,” a Kennedy ballot access manager asked in the email. “No,” the office staffer replied, referring the campaign to the petition format on page 5 of the state’s petition guide. Rossi also linked to Jan. 9 correspondence from the secretary of state’s office approving Kennedy’s petition.

This differs from Nevada statutes, which say that in an independent candidate’s petition of candidacy, “the person must also designate a nominee for Vice President.”

[…]

The secretary of state’s office acknowledged its staff had misinformed Kennedy.

But the office also said that despite the error, it was up to Kennedy’s campaign to follow the statute.

“When a government agency communicates with a member of the public and gives an unclear or incorrect answer to a question, Nevada courts have been clear that the agency is not permitted to honor the employee’s statements if following those statement[s] would be in conflict with the law,” the office said.

*emphasis mine

https://www.cbsnews.com/news/rfk-jr-threatens-lawsuit-nevada-over-ballot-access/

I don’t support the RFK campaign, but whatever you think about that campaign, this is what happens with humans on the other side of the line. At this point, every government helpline should have a flashing red sign that you shouldn’t listen to it.
