Report Suggests Rampant Negligence In Uber Self Driving Car Fatality

from the I'm-sorry-I-can't-do-that,-Dave dept

Earlier this year you might recall that a self-driving Uber in Tempe, Arizona killed a woman who was trying to cross the street with her bike outside of a crosswalk. The driver wasn’t paying attention, and the car itself failed to stop for the jaywalking pedestrian. Initial reporting on the subject, most of it based on anonymous Uber sources who spoke to the paywalled news outlet The Information, strongly pushed the idea that the car’s sensors worked as intended and detected the woman, but bugs in the system software failed to properly identify the woman as something to avoid:

“The car’s sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber’s software decided it didn’t need to react right away. That’s a result of how the software was tuned. Like other autonomous vehicle systems, Uber’s software has the ability to ignore ‘false positives,’ or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company’s system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn’t react fast enough, one of these people said.”

Thanks to that report, a narrative emerged that the vehicle largely worked as designed, and the only real problem was a modest quirk of undercooked programming.
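The "tuning" described in that early reporting amounts to a confidence threshold: the perception system scores each detected object, and anything below the cutoff is dismissed as a false positive. A minimal, purely illustrative sketch of the idea; none of these names or numbers come from Uber's actual code:

```python
# Hypothetical sketch of threshold "tuning": a detector assigns each tracked
# object a confidence that it is a real obstacle, and anything below the
# threshold is ignored as a false positive (e.g. a windblown plastic bag).

def should_react(obstacle_confidence: float, threshold: float) -> bool:
    """React only to detections above the false-positive threshold."""
    return obstacle_confidence >= threshold

# A conservatively tuned system reacts to a marginal detection...
assert should_react(0.4, threshold=0.3)
# ...while an aggressively tuned one (fewer phantom stops) ignores the
# same detection entirely.
assert not should_react(0.4, threshold=0.6)
```

Raising the threshold trades fewer unnecessary stops for a higher chance of ignoring something real, which is the trade-off the anonymous sources were describing.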

But a new report by Bloomberg this week shatters that understanding. According to NTSB findings seen by Bloomberg, the vehicle in question wasn’t even programmed to detect jaywalkers. Like, at all:

“Uber Technologies Inc.’s self-driving test car that struck and killed a pedestrian last year wasn’t programmed to recognize and react to jaywalkers, according to documents released by U.S. safety investigators.”

Assuming Bloomberg’s read of the 400-page report (only a part of which has been made public) is accurate, that’s a far cry from a bug. The NTSB report found that Uber staff had also disabled Volvo’s auto-detection and braking software, which could have at least slowed the vehicle if not avoided the pedestrian impact altogether. Investigators also noted that despite the fact that Uber was conducting risky trials on public streets, the company had little to no real system in place for dealing with safety issues. Again, not just underwhelming public safety protocols, but none whatsoever:

“The Uber Advanced Technologies Group unit that was testing self-driving cars on public streets in Tempe didn’t have a standalone safety division, a formal safety plan, standard operating procedures or a manager focused on preventing accidents, according to NTSB.”

Again, that’s not just buggy or “poorly tuned” software, it’s total negligence. Despite the fact that the driver was distracted, that the car was never adequately programmed to detect jaywalkers, that some safety features were disabled, and that Uber had little to no safety protocols in place, prosecutors have already absolved Uber of criminal liability (though the driver still may face a lawsuit). The NTSB also hasn’t formally affixed blame for the crash (yet):

“The documents painted a picture of safety and design lapses with tragic consequences but didn’t assign a cause for the crash. The safety board is scheduled to do that at a Nov. 19 meeting in Washington.”

Self driving cars are remarkably safe, and most accidents involve autonomous vehicles getting confused when people actually follow the law (like rear ending a human-driven vehicle that stopped at a red light before turning right). But that’s only true when the people designing and conducting trials are competent. If the NTSB report is anything to go by, Uber fell well short, yet got to enjoy a lot of press suggesting the problem was random bad programming luck, not total negligence and incompetence. Later this month we’ll get to see if Uber faces anything resembling accountability for its failures.

Companies: uber


Comments on “Report Suggests Rampant Negligence In Uber Self Driving Car Fatality”

Thad (profile) says:

Earlier this year you might recall that a self-driving Uber in Tempe, Arizona killed a woman who was trying to cross the street with her bike outside of a crosswalk.

I suppose it’s technically accurate to say that you might recall it that way, but if you do, your recollection is wrong. It happened last year.

On-topic: Sadly, this is unsurprising. This is the entirely foreseeable result of Uber choosing Arizona for its lack of safety regulations.

There was a bit in last year’s gubernatorial debate, when challenger David Garcia criticized Governor Doug Ducey for allowing Uber to test its AVs without proper safety oversight. Ducey responded by protesting that as soon as it became clear that Uber was unsafe, he immediately barred them from further tests.

I was flabbergasted when Garcia let that remark pass unchallenged. He should have said "But you waited until somebody died to take any action." I think Garcia’s poor debate performance is a big part of why Ducey was reelected.

Better late than never; at least Uber’s not testing its AVs here anymore. Waymo continues to operate in the Phoenix area, but hasn’t been involved in any fatal collisions as of yet. I do think it’s a little premature to declare that "self driving cars are remarkably safe" (though I suppose that depends on what you mean by "remarkably"); there simply aren’t enough miles driven to make an accurate comparison between the safety of an AV compared to the safety of an average human driver. But so far, at least, Waymo’s done pretty well.

This comment has been deemed insightful by the community.
This comment has been deemed funny by the community.
Anonymous Coward says:

Re: Re:

I suppose it’s technically accurate to say that you might recall it that way, but if you do, your recollection is wrong. It happened last year.

No, that statement you’re contradicting would have been "You may recall that, earlier this year, a self-driving…". Or "Earlier this year, you might recall, a self-driving…" (without "that"). As Karl wrote it, it’s your recollection that will happen earlier this year. You’re not thinking fourth-dimensionally.

This comment has been deemed insightful by the community.
Anonymous Coward says:


Remember, self-driving cars do not have to be perfect to be useful and safe.

They merely have to be better than the average human.

These cars can see potential pedestrians not only just in visible light, but with infrared light and with lasers. That’s advantage one.

These cars can be looking both ways at once. A human’s field of vision is fairly limited, and even if you are very good at multitasking, a human brain simply cannot process every detail within that field of vision.

Speaking of multitasking, these cars can do that, too. Adjust the volume, adjust the windshield wipers, adjust the AC, stay in the correct lane, stay at the proper speed limit, call Mom, leave enough room for the car in front of you, and keep an eye out for deer or dogs or kids running into the road all at the same time, without having to sacrifice attention to any of those tasks in order to carry out another.

And then there’s reaction time. A human’s actual reaction time is rarely faster than a quarter of a second. Tesla’s current self-driving processors are estimated to be capable of 250,000,000,000,000 operations per second. That’s 250 trillion.

250 trillion is usually considered to be a larger number than 4.

In the same weekend that this pedestrian was killed by the Uber car, 14 other pedestrians were killed by a conventional human-driven automobile.

In the same city.

Self-driving cars don’t need to be capable of perfection. They just need to be an improvement upon our own imperfections.
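The reaction-time advantage above translates directly into distance covered before anything happens at all. A quick back-of-envelope illustration, with the speed and reaction times assumed for the sake of the example rather than taken from any report:

```python
# Distance a vehicle travels during the reaction interval alone, before any
# braking begins. Speed and reaction times are illustrative assumptions.

def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Metres covered while the driver (or computer) is still reacting."""
    return speed_kmh / 3.6 * reaction_s

human = reaction_distance_m(100, 0.25)     # ~6.9 m at highway speed
computer = reaction_distance_m(100, 0.01)  # ~0.28 m for a fast perception loop
```

At 100 km/h, a quarter-second of human reaction time costs almost seven metres of travel before the brakes even engage; a perception loop running many times per second gives most of that distance back.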

Anonymous Coward says:

Re: Re: Incapable

I wonder how "suddenly" this person entered the confines of the road.

Not suddenly at all: "Uber’s vehicle used Volvo software to detect external objects. Six seconds before striking Herzberg, the system detected her but didn’t identify her as a person. … The system determined 1.3 seconds before the crash that emergency braking would be needed to avert a collision. But the vehicle did not respond" (because Uber had disabled emergency braking).

The crash was considered avoidable because a human would have seen the person and had time to stop.
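Rough arithmetic on that quoted timeline makes the point concrete. Assuming a speed of about 40 mph (an approximation; the NTSB report gives exact figures), this sketch estimates the distances involved:

```python
# Back-of-envelope distances for the NTSB timeline quoted above,
# assuming ~40 mph. All figures are approximations, not report values.

MPH_TO_MS = 0.44704
speed = 40 * MPH_TO_MS          # ~17.9 m/s

detect_dist = speed * 6.0       # first detection: ~107 m from the victim
decide_dist = speed * 1.3       # braking decision: ~23 m from the victim

# With hard braking at roughly 7 m/s^2, stopping distance is v^2 / (2a):
stop_dist = speed ** 2 / (2 * 7.0)   # ~23 m
```

Six seconds of warning is over a hundred metres, ample room to stop. By the time the system decided braking was needed, the remaining distance was about equal to the full-braking stopping distance, which is why the NTSB framed the disabled emergency braking as able to at least slow the vehicle, if not avoid the impact.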

Scary Devil Monastery (profile) says:

Re: Incapable

"I’m skeptical that any self driving car could be programmed to drive faster than 5mph if it needed to account for anything off the road to suddenly enter the road and then stop the vehicle in time."

It obviously can’t because physics. The same applies to human drivers as well, unless Gandalf has a driving license.

The key goal for the self-driving car is basically that it should be able to take the best solution possible at all times, resulting in LESS fatalities than would happen with an erratic human behind the wheel.

What the OP describes was the result of the algorithm not deciding in time that what emerged onto the road was, in fact, an object to be avoided at all costs (a human) rather than an object of irrelevance (like a tumbleweed or windblown plastic bag).

Human drivers fail the same assessment on a daily basis as attested by traffic-related death tolls.

The main issue here is that the AV is being tested without even a cursory nod to safety, and with some existing safety measures forcibly taken offline (Volvo’s crash prevention system, for instance).

bobob says:

Perhaps a better use of AI in vehicles would be to try to detect when the driver is driving erratically (like entering the freeway going the wrong direction, crossing the median, weaving, etc.), and then take some sort of corrective action, while providing the driver the ability to disengage it under some circumstances. It might not be foolproof, but it might be of more benefit in less time than the goal of fully automated vehicles. It would also provide a real-world test bed that would yield some useful data for creating autonomous vehicles without endangering the public.
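One of the checks suggested above, wrong-way entry, is simple enough to sketch: compare the vehicle's heading against the road's legal direction of travel. The function name and tolerance below are invented for illustration:

```python
# Illustrative wrong-way check: flag the vehicle as erratic when its heading
# diverges from the road's legal direction by more than a tolerance.
# The 90-degree tolerance is an invented, illustrative threshold.

def is_wrong_way(vehicle_heading_deg: float, road_heading_deg: float,
                 tolerance_deg: float = 90.0) -> bool:
    diff = abs(vehicle_heading_deg - road_heading_deg) % 360
    diff = min(diff, 360 - diff)  # shortest angular difference
    return diff > tolerance_deg

assert is_wrong_way(180, 0)     # driving directly against traffic
assert not is_wrong_way(10, 0)  # normal variation within the lane heading
```

A real system would combine several such signals (median crossing, weaving frequency) before intervening, but each individual check is far easier to define than full autonomy.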

bobob says:

Re: Re: Re:

Obviously, what "erratically" means would need to be carefully thought out and well defined, but it’s certainly something that is more easily defined and safer than defining a "safe driving autonomous vehicle."

I don’t think it takes a lot of smarts to realize that failing to prevent a driver from doing something stupid is still the driver’s fault just as it would be if there were not such a system to help prevent it. There are already systems that warn drivers when drifting across lanes. Be serious.

Anonymous Coward says:

Re: Re: Re: Re:

"Be serious"

Let’s say the driver sees a pothole large enough to cause damage to the vehicle, and therefore at lower speed it is advisable to avoid it when possible.
Now, I doubt the AI has been acquainted with pothole avoidance and therefore may consider such maneuvers to be erratic. Not sure what the AI would do as a result of triggering the condition, nor is it clear just how fast a change in direction would be necessary to trigger same. Does the questionable behavior need to be repeated several times before it is erratic? Maybe each manufacturer would make these decisions.

Uriel-238 (profile) says:

Re: Re: Re:2 Pot holes

No, the AI doesn’t detect a pothole, but it does scan the terrain ahead of it looking for obstacles, changes of grade and road degradation (e.g. rough terrain) along multiple parallel lines ahead of the vehicle. So it detects the pothole not as a pothole but as a pothole-shaped artifact in the road ahead.

The AI most likely steers to position the tires so their travel path avoids terrain-roughness greater than the known handling envelope of the vehicle (preserving passenger safety), and greater than the known comfortable (smooth ride) handling envelope of the vehicle (following traffic laws).

If a bad bump is determined to be inevitable, the AI might signal the passengers so they don’t try to drink coffee during elevated acceleration events.
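The tire-placement idea described above can be sketched as a tiny search over lateral positions: given a roughness map sampled across the lane, pick the offset whose two tire lines see the least roughness. Everything here, the roughness values included, is invented for illustration:

```python
# Illustrative tire-path selection: choose the lateral position whose two
# tire lines cross the least road roughness. The roughness map is invented.

def best_offset(roughness: list[float], track_width: int) -> int:
    """Index of the left tire position minimizing summed tire-line roughness."""
    candidates = range(len(roughness) - track_width)
    return min(candidates, key=lambda i: roughness[i] + roughness[i + track_width])

# A pothole-shaped spike at index 2 ends up straddled between the tires
# (left tire at index 0, right tire at index 3).
lane = [0.1, 0.1, 9.0, 0.1, 0.1, 0.1]
assert best_offset(lane, track_width=3) == 0
```

A real planner would weigh this against lane-keeping and passenger comfort, as the comment suggests, but the core is just an optimization over candidate paths.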

Uriel-238 (profile) says:

Re: Re: Re:2 Manufacturers

I really hope after the right to repair messes settle down end users are allowed to choose from and install their own certified driving software packages, preferably with a handful of open-source offerings, especially if liability for vehicle collisions is going to be transferred from the driver to the principal passenger.

What I expect is that every car will come with proprietary software which cannot be changed by the end-user. And all driving software packages will be trade secrets held by the manufacturers. And the end user will still be held responsible for collisions caused by the software. The worst of all worlds.

Maybe in the 2300s this moral hazard will be addressed.

bobob says:

Re: Re: Re:2 Re:

No, you be serious. Not only did you misunderstand the point, you’ve used an example that makes it clear you didn’t understand it, in more ways than one. First, an autonomous vehicle drives itself, so the driver has no input. A vehicle that assists a driver, by contrast, stops a driver from doing stupid things, examples of which I provided, such as entering the wrong way on a freeway.

It’s not intended to stop a person from actually driving the car. Instead of solving the problem of autonomous driving in one shot, it’s possible to learn about what "stupid things" are and correct them as the technology improves while saving lives instead of risking them.

Second, in the very unlikely event that something like entering the freeway going the wrong direction was the correct thing to do (although it’s hard to conceive of such a situation), I would assume that doing so required some thinking and that the ability to disable the system would solve that problem. The driver is still responsible for driving. The whole point is that having an autonomous vehicle that accounts for every possibility is not realistic, but a system that at least prevents a few situations that account for a number of fatal accidents is not only in reach, but not difficult to implement without lots of debate about "what ifs."

Your entire bullshit about potholes is a strawman.

Anonymous Coward says:

Re: Re: Re:3 Re:

"Your entire bullshit about potholes is a strawman."

  • It is a simple example of something else the Autonomous Vehicle manufacturers have not taken into consideration.

Strawman: an intentionally misrepresented proposition that is set up because it is easier to defeat than an opponent’s real argument.

You said AI could detect erratic driving and I asked who would decide what that meant. Simple question. Now you could accuse me of asking a dumb question or a question that has an obvious answer – but you didn’t.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"Is it really a lack of intelligence when greed over rides everything?"

Yes. That is basically scaling down a mature adult to a 5 year old who wants stuff, right now, and can’t be persuaded that there will be consequences.

"For some, looking the other way is a form of self preservation – is that a lack of intelligence?"

Yes. Sticking your head in the sand only means you surrender what options factual cognition MIGHT have given you.

Either of the above behaviors can be considered "Not Too Smart".

Anonymous Coward says:

So were they planning on ever selling these cars outside the US?

I see a pretty glaring fundamental flaw in this design if this is a product they ever intend on bringing to the global market as a viable export. Since in most of the civilised world we have this whole concept of pedestrians having right of way, and whatnot.

Anonymous Coward says:

Re: Re: Re:2 So were they planning on ever selling these cars

The whole point of these AI taxis, it seems, is so the drivers can hop into the back seat with the really hot fares and not be bothered with traffic. Come on. It looks exactly like they are trying to fulfill some fantasy they have concocted after years of being lonely and deprived of the really good things in life.


DB (profile) says:

They picked that specific vehicle because Volvo has a class-leading safety system. And then they disabled it.

NTSB reports generally don’t lay blame on specific parties. Doing so would impair the board’s ability to work with companies in future incidents, and its reports are usually clear enough that you don’t have to guess. It will be interesting to see how they conclude this investigation.

Uriel-238 (profile) says:

Re: Re: Re: The conveyor metropolis

It’s a rare day when all the BART and MUNI escalators are simultaneously running. So the current design of belts on rollers would be a logistical nightmare, requiring an immense pool of tech crews to maintain them.

Though Arthur C. Clarke’s liquid model suggests he was thinking beyond mechanical systems. If ever we perfect higher-temperature superconductors to ubiquitize levitational magnetic pillars, we may be able to drastically reduce the moving parts enough to rely on them at a municipal scale.

Yes, I’ve totally thought about this too much.

Anonymous Coward says:

Re: Re:

Earlier reports deemed the collision avoidable, because most human drivers would have seen her well in advance and would have had time to slow down. It’s stupid to cross while relying on the driver to notice and slow down, while not being very visible and not having any backup plan, but not Darwin-award-level stupid.

This comment has been deemed insightful by the community.
Robert Beckman says:

Don’t understand AI

There’s an angle that NTSB might be missing, and that I’d expect Bloomberg to miss – AIs aren’t programmed to take any specific action, they’re programmed to self optimize within constraints.

I’d honestly be surprised if anyone using AI for self-driving is extensively programming scenarios (like: if (jaywalker) then (brake)), rather than simply feeding in lots and lots of scenarios and constraints to have the AI optimize the solution.

Do I know when it’s better to brake or swerve? In a specific car? With specific mass and moment of inertia that varies from load to load? Hell no, but I understand that it’s an optimization problem that can be easily solved (at least at that level of specificity; AI for healthcare fraud is my expertise).

So when Bloomberg says “they didn’t even program for it,” my reaction is so what, that’s the point of AI – you teach it the parameters under which to optimize and then use the optimums.

Now if they never fed it random external events like jaywalkers, shopping carts, pedestrians in parking lots, deer, etc., that’s on them. But I’d expect that they feed it a bunch of scenarios of the type “stuff randomly popping out” rather than any specific one.

Though at the same time, I’m curious if the resolution is good enough to detect jaywalkers by their facial cues and body posture – something humans can readily do (when we can see them, at least). Not my field though, so if someone knows, please chime in.
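The contrast drawn above between hand-written rules and learned cost optimization can be shown with a toy planner: instead of "if jaywalker then brake", the system scores each candidate action and picks the cheapest. All of the costs below are invented for illustration:

```python
# Toy cost-based planner: pick the action with the lowest expected cost,
# rather than firing a hand-written rule. Costs are invented illustrations.

def choose_action(costs: dict[str, float]) -> str:
    """Return the action whose learned cost is lowest."""
    return min(costs, key=costs.get)

# If the cost model was never trained on jaywalker-like scenarios, nothing
# makes "brake" look cheap, and the optimizer happily continues:
assert choose_action({"continue": 1.0, "brake": 5.0}) == "continue"
# Trained on such scenarios, collision risk dominates the cost of continuing:
assert choose_action({"continue": 100.0, "brake": 5.0}) == "brake"
```

That is the commenter's point in miniature: "wasn't programmed to recognize jaywalkers" may mean the training scenarios never gave the optimizer a reason to treat them as costly, rather than that a specific rule was missing.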

Anonymous Coward says:

Re: Don’t understand AI

So when Bloomberg says “they didn’t even program for it,”

They may have meant that the braking was disabled. The system had detected the person and determined emergency braking as the appropriate response. But Uber had disabled the ability for it to effect emergency braking.

This comment has been deemed insightful by the community.
Uriel-238 (profile) says:

Re: Re: Self Driving Cars will launch a new age.

How fucking lazy do we humans have to be that we need a car to drive itself?

It’s not about lazy, but about exhausted. Or drunk. Or suffering through a medical problem on the road, because society doesn’t care if its workers get migraines sometimes. Or dressing in the car for an event. Or parenting not from the driver’s seat.

Or, optimistically speaking, allowing a worker to continue to work during his (her) commute. It’ll also be far less stressful since getting home doesn’t require everyone to carefully guide heavy machinery down a lane at breakneck speed for a couple hours each day.

And once it comes down that people can work as they commute in the morning and drink themselves blotto coming back, (or, again optimistically speaking video-chat with their kids or engage in video sex with their SO or watch their soaps), there’s not going to be any argument whether life is better with humans not doing the driving.

But autonomous cars are also going to change freight. We may not need truckers at all, certainly not crews of two or three alternating driving shifts. Maybe a navigator / mechanic, but not huge numbers of Americans burning their brains out on meth.

I can’t really believe this is a genuine question. It’s been established that automated cars are going to change the global economy and potentially render obsolete a third of the world’s workers. It’s also going to make just about everything cheaper.

Scary Devil Monastery (profile) says:

Re: Re: Re: Self Driving Cars will launch a new age.

"It’s been established that automated cars are going to change the global economy and potentially render obsolete a third of the world’s workers."

At some point we will all have to face up to the question of how to deal with the concept of "Jobs" when all of mankind’s needs and demands are met while only 5% of working-age adults hold a job.

I already smell a significant social issue coming up when the job market shrinks by 30%.

Anonymous Coward says:

Re: Re: Re:2 Self Driving Cars will launch a new age.

The real reason for autonomous vehicles is so these vehicles can roam the nuclear war-torn earth using remote video to gather intel that will no longer be accessible to humans from the surface. I for one do not want to look over in the meantime and see a tandem-trailer semi barreling down the freeway without a driver. Fuck! Some of you have really gobbled up this bullshit technology. Some of us humans since the early part of the twentieth century felt good and safe about driving our own vehicles, and no new corporation barely out of its teens was pushing this idea that we need artificial intelligence to drive our vehicles for us.

Uriel-238 (profile) says:

Re: Re: Re:3 "Some of us like driving our own vehicles"

Some of us humans since the early part of the twentieth century felt good and safe about driving our own vehicles and no new corporation barely out of its teens was pushing this idea that we need artificial intelligence to drive our vehicles for us.

I’m also absolutely sure that some of us humans thought the horseless carriage was a step too far, and should return to hauling and transit by pack animals.

The advancements promised by the industrial age were too good to pass up. As it has been with the electrical age, the transistor age, the digital age and the information age. There’s just too much cool in the robot age to decide to stop here.

Besides which, if we don’t build robot soldiers, someone else will.

Anonymous Coward says:

Re: Re: Re:4 "Some of us like driving our own vehicles"

We are by now nearing a quarter of the way into the twenty-first century, well worn down by greedy corporations and butt-kissing legislation granting lobbyists every desire under the sun, and so used to losing food off the table that we sometimes don’t give a crap just what the next great thing is going to be. Most likely, it will be too fricking expensive to purchase anyway.

Uriel-238 (profile) says:

Re: Re: Re:5 The next big thing

I think the internet and smartphones were a lucky next big thing for the 21st century. If we got flying cars, then yeah, only hundred-millionaires would have them, and if we got energy weapons, they’d be used against us by the police.

Instead we got handheld personal computers that are made better by everyone having access to them, and so there’s efforts to connect everyone who isn’t connected yet (granted, often specifically through Facebook).

I’d enjoy a nice launch loop or space elevator, having grown up in and around NASA. But to be fair, that would be about as useful (and entertaining) as the moon shots. (Lots of auxiliary tech made it to the public sector, but not much space tourism.)

nasch (profile) says:

Re: Re: Re:6 The next big thing

But to be fair, that would be about as useful (and entertaining) as the moon shots.

I think it would be a game changer. A space elevator would make travel to other planets vastly cheaper and easier because it would then be relatively easy, cheap, and fast to build a large spaceship in orbit. Likewise space probes could be sent out on the cheap – like we could send swarms of thousands of probes for probably what we spend on one or a handful of missions now. Building huge space telescopes becomes much more practical. It could also open up asteroid mining. Some asteroids could be worth literally trillions of dollars. We could put giant solar power panels in orbit. Maybe we could even build a solar shade to mitigate climate change.

If I had a magic wand that could bring any technology to life, I would start with fusion power, and I think my second one might be a space elevator (although if the solar power thing would work maybe just the elevator could kill two birds with one stone). The annoying thing is we could probably build one today if we had the will.

Uriel-238 (profile) says:

Re: Re: Re:7 Ad Astra

The Moon Shots were one of the best investments we ever made, returning fourteen dollars for every dollar invested, so I would be the last person to argue. (I also grew up in the space age, so have been sorely disappointed we’re not already colonizing other worlds.)

I know there are some materials problems we have to work out regarding either the launch loop or the space elevator (I think it has to do with sustaining the integrity of the cable) but yeah, we’re getting pretty close.

Manned Mars shots are obstructed still by the risk of CMEs cooking all life outside the geomagnetic field (we just timed the moon shots during solar calm periods). So during all the Mars excitement I’ve been watching for progress in fixing that problem. Nothing so far.

Anonymous Coward says:

Re: Re: Re:3 Self Driving Cars will launch a new age.

Some of us humans since the early part of the twentieth century felt good and safe about driving our own vehicles

Can you be sure that all other drivers on the road are good drivers, sober and awake, and not distracted while driving? Your safety depends more on other drivers than on your own ability.

Anonymous Coward says:

Re: Re: Re:6 Self Driving Cars will launch a new age.

I am old and have driven for a really long time, probably a million miles total. I have made extremely long trips from Florida to Alaska several times, and across the country many times through 48 states. I drive to protect myself and make sure my vehicles are in great shape. It’s not about confidence. It’s staying alert and aware of who is around you. And I try not to be an idiot.

Uriel-238 (profile) says:

Re: Re: Re:9 "I am old and have driven for a really long time"

I’m not sure [what] the point [was, regarding the event defined by] you[,] pointing out your anecdotal single-perspective experience.

Sorry. I have a Northern Californian dialect. Some words spoken provincially can be omitted as understood. I get that they are not as easily assumed by other English-speaking parts of the world.

Feel free to clarify your intent.

Anonymous Coward says:

Re: Re: Re:10 "I am old and have driven for a really long time"

As I made clear to the other commenter, my safety relies more on my defensive driving and keeping my vehicles in top shape than on hoping others do the same. There are of course anomalies to that thinking, but my confidence has to be placed in my skill rather than in the hope that someone has similar skills. That is true especially of younger drivers who are suddenly controlling a very powerful machine and who love to feel that power at their command. I do not like seeing someone looking down while they are driving, and especially not an empty driver’s seat. Seeing that does not breed confidence in transportation safety or in the ability of legislatures to write sensible laws for the highways.

Uriel-238 (profile) says:

Re: Re: Re:2 "If you love them so much..."

Geez uriel238, maybe you want to have kids with these machines?!!

Probably more so than you want to crawl back to the stone age.

Our species chose this route when we started planting and cultivating our food rather than foraging for it.

And every step of the way someone has been telling us we’re playing god (not to be confused with playin’ God), Delving into knowledge man was not meant to know. Meddling with forces of nature best left undisturbed. (PS: We’re also trying to create life itself!)

Go on. Tell me I’m mad (Mad, I tell you!) I know you really want to. Say it!

PaulT (profile) says:

Re: Re: Elaine Herzberg is the first death by autonomous vehicle

There’s various reasons but I’ll give you one. Last night, I saw a multiple car pile-up that was caused by the asshole two cars in front of me not paying enough attention to see that police had closed a lane of the road just ahead. Then, instead of stopping immediately like the rest of the traffic, he thought he’d try slotting into a space in the other lane. He miscalculated, smashed into the car in front of him in that lane and leaving no time for the two behind to react. 2 cars at least totalled and I’d be surprised if there wasn’t at least one fatality. I probably missed being part of that by 15 seconds, if I’d have left work a moment earlier I could have been the victim.

Autonomous cars need a lot of work, but they’re not dangerous idiots by design, unlike some humans.

Anonymous Coward says:

Re: Re: Re: Elaine Herzberg is the first death by autonomous veh

He miscalculated, smashed into the car in front of him in that lane and leaving no time for the two behind to react.

If the two cars behind had no time to react, they were by definition driving dangerously close. Additionally, a good human driver will be aware that idiocy is common in merge areas (even ones that have always been there) and try to mitigate the risk.

PaulT (profile) says:

Re: Re: Re:2 Elaine Herzberg is the first death by autonomous

"If the two cars behind had no time to react, they were by definition driving dangerously close."

Hmmm, maybe I wasn’t clear. The cars were going at a reasonable speed, and actually slowing down as they approached the closing lane. In fact, possibly the reason the accident happened is that the first driver left plenty of space between him and the car in front, but dickhead decided it was enough to slot in. I doubt the first guy had time to react before the obstruction appeared in front of him. Likewise, the guy behind almost stopped in time but not quite, only a minor bump on that side.

The whole thing took less than a second or two, and there’s really nothing anyone could do about it once the idiot decided that he could slot into a space and stop within a few feet. Careful drivers are aware of their surrounding, but can still get caught out if someone decides to make a kamikaze move like that. I would certainly prefer a world where people like that are taken out of the equation.

Anonymous Coward says:

"Self driving cars are remarkably safe, and most accidents involve autonomous vehicles getting confused when people actually follow the law (like rear ending a human-driven vehicle that stopped at a red light before turning right)."

What happens when one autonomous car is following another? Neither stop before turning right on red light?

Uriel-238 (profile) says:

Re: Autonomous cars

What happens when one autonomous car is following another? Neither stop before turning right on red light?

I’m not sure what the question is.

Assume Car A is following Car B and only (for reasons we’ll guess) wants to arrive at Car B’s destination after Car B does.

Car A merely asks Car B for its destination, and then gets there using its own navigation. Then it circles around until Car B arrives.

Let’s say Car B hasn’t figured out where it’s going:

Car A treats Car B as a moving destination, obeying traffic laws while moving toward Car B and staying an appropriate distance behind it when traffic laws and traffic queuing allow.

Let’s say Car B is antagonistic and is trying to lose Car A:

Car A is not a Film Noir cabbie. It obeys traffic laws and navigates its way to Car B as best it can. If queuing circumstances or traffic laws or a sudden parade of drunken pedestrians impedes the pursuit, Car A slows or stops as is necessary to ensure safety and legality, even if it means losing Car B. Car A doesn’t mind getting there a bit late because traffic was unexpected.

Note that Car A does have instant reflexes and detects the exact distance of objects. It doesn’t need to see brake lights or turn signals to determine the intent of another car. It just watches what it’s doing and responds accordingly.
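The "moving destination" behavior described above can be sketched as a toy loop. Everything here (the one-dimensional road, the function names, the red-light rule) is invented purely for illustration; it is not anyone's actual driving software:

```python
# Toy sketch of "Car A follows Car B" on a 1-D road. All names and rules
# are invented for illustration. Car A treats Car B's position as a moving
# destination, but safety/legality (here, a red light) always wins.

def step_toward(pos, target):
    """Advance one cell toward the target, like a speed-limited car."""
    if pos < target:
        return pos + 1
    if pos > target:
        return pos - 1
    return pos

def follow(car_a, car_b_path, red_lights):
    """Chase Car B's successive positions; never enter a red-light cell."""
    trace = []
    for car_b in car_b_path:
        nxt = step_toward(car_a, car_b)
        if nxt in red_lights:   # legality wins: wait rather than run the light,
            nxt = car_a         # even if that means losing Car B
        car_a = nxt
        trace.append(car_a)
    return trace

# Car B drives through cells 1..5; a red light at cell 3 holds Car A back.
print(follow(0, [1, 2, 3, 4, 5], red_lights={3}))  # → [1, 2, 2, 2, 2]
```

The point of the sketch is the priority ordering: the pursuit goal never overrides the stop rule, which mirrors the "Car A is not a Film Noir cabbie" behavior above.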

Uriel-238 (profile) says:

Re: Re: Re: Expecting other cars to obey the law

Let’s clarify that most traffic laws exist to make driving predictable and navigable for human reflexes. Brake lights and turn signals exist because we don’t detect and respond to changes quickly enough without them; they help reduce sudden changes and telegraph those that are necessary.

But an autonomous car doesn’t need to look for brake lights and turn signals. It detects changes of behavior in microseconds and can respond almost instantaneously to sudden braking, an unexpected turn, or erratic behavior. Weaving and reckless driving that warrant a traffic citation and a blood-alcohol test are much less critical concerns for an autonomous car, since it has far less need for other cars to be predictable and make only slow changes.

So, to answer your question, I suspect the vehicle AI anticipates that other cars will generally obey the law, such as driving down designated streets in designated directions and stopping at designated stop signs (or where a stop is legally indicated, such as a railroad crossing), but also monitors other cars as obstacles (or traffic contacts) that need to be navigated around as necessary, preferably without resorting to sudden acceleration that might discomfit its passengers.

Anonymous Coward says:

Re: Re: Sharing roads with autonomous vehicles

You don’t see the need to separate humans from machines yet because machines have not yet become self-aware. I’ll remember your comment when that day comes and a vehicle with red flashing headlights comes flying up behind me to knock me off the road.

Uriel-238 (profile) says:

Re: Re: Re: Self aware robots

Oh my. I’m not reading any indications of sarcasm, Anonymous Coward, so I am going to assume you’re seriously concerned about machines becoming sentient or turning on their creators. TL;DR: Nope. Not really.

Robots and AI, whether we’re talking full-on AGI or learning systems, are a very active field of study, and I’m used to assuming people on the TechDirt forum have an understanding higher than what one gleans from decades of assumptions made in science fiction.

The concern that AGI might gain sentience, consciousness or self-awareness is largely a mythical one. For one thing, even our best experts cannot easily narrow down the threshold at which an AGI achieves these characteristics (or, for that matter, when a human being loses them). To be fair, we have a hard time defining when AGI is AGI, or Strong AI: we have a number of tests that each indicate it might be AGI (e.g. given a controllable robot chassis, instructions, and a flat-packed furniture kit, assemble the kit into furniture).

Secondly, we can’t assume that an AGI will intrinsically develop a self-preservation directive, or prioritize it over other directives such as human safety and carrying out its mission. Considering we will favor AGI we can send on suicide missions (e.g. to command a probe we drop into Neptune), we’re going to aim for making self-preservation a parameter we can choose to set.

If you are looking for scary things to worry about, consider instead that AIs will someday be able to manage massive, unstoppable robot armies, which some humans will be happy to exploit to dominate everyone else. That eventuality will come far sooner than a robot deciding on its own that it needs to rebel against its human masters.

Anonymous Coward says:

Re: Re: Re:2 Self aware robots

Yes, I was thinking exactly that, since the military complex gets first dibs on all the new tech everywhere. It is scary to note that the military itself is guided by whoever is at the top, and it is also extremely noteworthy how easy it is to change the one at the top who controls the entire force. Suddenly everything changes, except the fact that everyone below follows the orders from above. I have thought long and hard on this for more than four decades. It happens right under our noses.

Anonymous Coward says:

Re: Re: Re:2 Self aware robots

Those at the top will place themselves first and everyone else second or last, and I can imagine such a directive programmed into an army of AI robots. Human life, with the exception of those at the top, will be placed below that of the robots, so self-preservation will be a programming prerequisite. Currently the remotely controlled robots have human eyes directing them in the field. Programming for different shades of olive drab uniforms might be tricky, so I wouldn’t want to be faced off with an extremely well armed AI robot telling me to stand down while I wonder if it can distinguish the enemy. I am certain there will be plenty of friendlies drawing fire from these nightmarish fighters.

Uriel-238 (profile) says:

Re: Re: Re:3 Armed Military Robots

We activated our first autonomous military devices in South Korea on the southern side of the DMZ: sentry drones that detect and shoot at targets where there shouldn’t be anyone. But even then we’ve been cautious and haven’t let the weapons discharge on their own; rather, we have a human who pulls the trigger.

The same goes for the Aegis Combat System, a shipboard missile system with a complex sensor array to determine the identity of a target. An Aegis still targeted Iran Air Flight 655, noting that it was pinging with military radar. Ultimately a human being gave the order to fire.

Friendly fire is an epidemic problem in theaters of war. Our robots only need do better than our human counterparts in identifying friend from foe.

The greater concern is, as you put it, those at the top, who are going to be sorely tempted to just muster a giant robot army. Our elites have already established that once the impoverished masses no longer serve them, they’ll be happy to eradicate us like vermin.

Anonymous Coward says:

Re: Re: Re:4 Armed Military Robots

There are a plethora of ways the elites have already begun eradication of the masses. One of the earliest means is through fluoridation of the world’s municipal water supplies. Spraying the atmosphere with radionuclides and other weather-altering compounds, all poisonous to living beings and nature. HAARP, along with other atmospheric heaters controlled by many governments around the world, is capable of creating catastrophic storms, hurricanes, tornadoes and floods, as well as setting off tectonic releases of trapped energy and causing great earthquakes. Setting off volcanic eruptions is nothing new to these elites (a misnomer for the lowest form of human population on earth). And last, off the top of my head, are pharmaceuticals causing destruction of cellular replication in humans through statin drugs, some vaccinations, and various treatments for malignant maladies. Many of their methods are long-term eradication, but do not think for a moment that you are not on their list.

Anonymous Coward says:

Re: Re: Re:6 Armed Military Robots

Everything I stated above is happening. Don’t disbelieve just because there are those who are likely to dissuade you from making up your own minds by researching the scientific data that is too overwhelming to post here now. It may blow some of your minds to know just how dug in these people are to their end. And that is the end.

Uriel-238 (profile) says:

Re: Re: Re:5 Armed Military Robots

Yeah, this is one of those comments that makes me want to refine the Poe rating scale.

Maybe it should be logarithmic, say the log (base 10) of the reciprocal of the certainty of sincerity, where 1 is convincingly sincere and -1 is convincingly parodical and 0.0 would show perfect indeterminacy between sincerity and parody.

So a statement that was 0.000075 (possibly sincere but very, very uncertain) would have a Poe rating of log(1/0.000075) ≈ 4.1.

But that doesn’t figure in when I really hope a statement is parodical but appears it might not be. An imaginary factor maybe?
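For what it’s worth, that rating is a one-liner. A minimal sketch (the function name is mine), which also shows the example value landing nearer 4.1 than an even 4:

```python
import math

def poe_rating(certainty_of_sincerity):
    # Poe rating = log10 of the reciprocal of the certainty that a statement
    # is sincere: bigger numbers mean it's harder to tell from parody.
    return math.log10(1.0 / certainty_of_sincerity)

print(round(poe_rating(0.000075), 2))  # → 4.12
```

Handling the "I really hope this is parody" case would indeed need a second axis, imaginary or otherwise.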

Anonymous Coward says:

Re: Re: Re:6 Armed Military Robots

You have not studied the science that the new world order is using against us. It has been made available to people who want to know why they have Morgellons disease. Statin-induced diabetes has been verified by more than a thousand scientific studies. The ability of extremely low frequency radio waves to set off tectonic plates is very well known. So well known, in fact, that your nonsensical refutation is an absolute assault on intelligence, and so outrageous it is as if you are attempting to cover up the facts just as those who commit the atrocities do. Go pat yourselves on your backs! You are laughable. I would love to meet those persons who load the aerosols onto the tanker jets, and confront the pilots who are spraying the fibreglass nanoparticles from their flights. I would like to ask NASA about their hologram programs in which they intend to fool us all in the near future. Why are there 60,000 guillotines now in America? Why are the Walmarts turning into processing centers for implementation of chip implants? They know why bee populations are dying in the world. It’s all the aluminum oxides coming down in the rain. Steering and seeding violent storms are not new. Creating hurricanes from scratch is now a deadly art form. Don’t try to debunk what you are very illiterate of, or maybe you are hiding behind fear of it being known that you are part of it. Because those who are so hellbent on this world’s destruction can KISS MY ASS.

Uriel-238 (profile) says:

Re: Re: Re:7 "You have not studied"

Actually, I have studied them. I had clients who followed a lot of the fringe science, the weird government projects in the US, and the alleged conspiracies that are popular among enthusiasts. I wasn’t up on (for instance) the HAARP project and took time to study it. My favorite remains the Stargate project, the CIA effort to utilize psychic skills for military or intelligence purposes. The CIA picked up rumors that the USSR had such a project and was making headway, so they invested in a large program. The intel was bad; the Soviets didn’t have a psionics project, but when they saw the CIA was suddenly creating a big super-secret project, they figured there must be something to it. There never was.

My point in studying these was to understand my clients’ concerns, express empathy regarding their valid worries, debunk those that weren’t valid, and redirect their fears toward real conspiracies (Enron, the LIBOR scandal, the rising police and surveillance states, etc.) that remain a concern today.

It worked sometimes. One of the reasons we adopt conspiracy theories is because real plots against public interests are happening all the time. Other times, my clients just wanted to believe what they wanted to believe.

Sustaining a fact-based grasp on reality takes work. Even more work in this age of disinformation. Religion is just a lot easier.

nasch (profile) says:

Re: Re: Re:8 "You have not studied"

My point in studying these was to understand my clients’ concerns, express empathy regarding their valid worries, debunk those that weren’t valid, and redirect their fears toward real conspiracies (Enron, the LIBOR scandal, the rising police and surveillance states, etc.) that remain a concern today.

Seriously, there are so many actual conspiracies, there is no need to invent new ones.

nasch (profile) says:

Re: Re: Re:7 Armed Military Robots

I would love to meet those persons who load the aerosols onto the tanker jets, and confront the pilots who are spraying the fibreglass nanoparticles from their flights. I would like to ask NASA about their hologram programs in which they intend to fool us all in the near future.

I don’t know about any of that, but you are in luck in one regard:

"HAARP will host an annual open house in August, allowing visitors to tour the complex."

Go check it out!

Anonymous Coward says:

Re: Re: Re:8 Armed Military Robots

You want to study what the pulsing and bouncing off other arrays in the atmosphere is actually capable of before you foolishly call it conspiracy theory? You will never get the truth out of those people putting on the tour shows. I was in Alaska in the nineties when the Gakona station first started experimenting with HAARP. They shot rockets containing red dye into the ionosphere. For three days they left everyone in suspense as to what that massive red cloud was. They directed patterns of pulsed ELF radio waves at the cloud, observing some of what HAARP was capable of doing to the dye cloud. You have to have your head buried in the earth if you aren’t aware that it is a weapon many nations have in their arsenals. Have they used it for peaceful purposes? They have, or will tell you they experimented using it to observe its interaction with the ionosphere. Will they tell you they have sprayed aerosol compounds into the ionosphere, heated the mixtures, watched the condensation of plasma particles multiply by 50,000%, and watched massive storms fall sideways out of the ionosphere/atmosphere causing extreme destruction to our very country, destroying lands and property costing billions in economic loss and death to our citizens? I’ll have to go to the tour of that monster facility to answer that question.

nasch (profile) says:

Re: Re: Re:9 Armed Military Robots

watched massive storms fall sideways out of the ionosphere/atmosphere causing extreme destruction to our very country, destroying lands and property costing billions in economic loss and death to our citizens?

You have evidence of HAARP causing these storms that you can share? Or has it been "covered up"?

Anonymous Coward says:

Re: Re: Re:12 Armed Military Robots

This has a factual section on HAARP, and notes that the US has been invited to share information with the EU but has declined repeatedly. It is very well known that HAARP is a devastating and extremely powerful weapon.

Anonymous Coward says:

Re: Re: Re:10 Armed Military Robots

I would appreciate it if you back off from treating me like an idiot with a tin foil cap. If I started naming names, my life wouldn’t be worth a hill of beans. I have spent years reading and researching the government’s science, and I know that there are more than 20 of the diode arrays (heaters) around the world. The frequencies they operate at are around 3-7 hertz, similar to our brain waves. Of course they can raise the frequencies as well. They bounce these radio waves off other sources in the atmosphere and can aim them anywhere on the planet. There are a number of experiments they are running. The pulsing of a smaller unit initially triggered the San Andreas fault; some team of geologists surveying subterranean density in Northern California or Oregon, maybe working for a power company. I forget.

Uriel-238 (profile) says:

Re: Re: Re:11 Tin foil caps and all that.

I’m pretty sure everyone here is a tin-foil-hat case. TechDirt is notorious for calling out the police state well before the public outcry (Ferguson and BLM), for noting the ruthless prosecution of whistleblowers (when George W. Bush and Obama were burying people), for flagging the overclassification of documents and resistance to FOIA laws, and for watching our secret courts that adjudicate in secret trials using secret interpretations of secret law.

They also get into the unholy creep of copyright maximalism which is turning into an engine for mass censorship.

We’re all mad here, and we regret that we didn’t listen well enough when the tin-foil hats were warning about this stuff in the 1960s, and it frustrates us that there has to be some giant public outcry before the rest of society catches up to us and Johnny Carson / Jay Leno / Jimmy Fallon cracks jokes about it. We were there when the SOPA blackout happened and when Kim Dotcom’s house was raided (in New Zealand) by ICE. Yeah. That happened too.

We can’t go by the word of unnamed authorities. You may have read something convincing, but it’s not convincing to us merely that you read it and it convinced you. People are convinced by the Bible every day, despite all its inconsistencies, its conflicts with science and its blatant advocacy for atrocious policy. That can’t be enough.

We’re also aware that human cognitive biases compel us to accept some notions as truth without evidence, and reject others even with evidence. For instance, it’s easier to believe in an antagonist with agency (such as a tiger or a secret government institution) than it is to believe in a natural phenomenon that happens due to natural mechanics (such as tsunamis). It’s easier to imagine that a secret institution is fucking with the weather than to accept climate change caused by a global average temperature increase, caused in turn by anthropogenic greenhouse gas emissions. It’s also easier to imagine God Did It than to understand the complexity of biological evolution.

If HAARP were active, those who control it would be thinking about how to use it to stop Venice and Florida from flooding or fixing the cold snap that has most of the US in lockdown. If the secret societies actually had sufficient power to implement change, they wouldn’t be letting Trump or Johnson or Putin or any of the robber-kings disrupt the status quo that keeps those secret societies in power. Instead we have billionaires trying (badly) to run for President to stop wealth taxes from being implemented.

I get it. I’m still outraged by the massacres implemented via drone-strikes to kill more people than all the small arms in the United States. That’s still going on, only not just in Afghanistan and Pakistan but wherever Trump has decided we need to murder more brown people.

I’m still embittered by the US CIA torture program for which we have not seen the full report which may have been destroyed, since Trump ordered all the copies burned. Our grandkids may get to see it when the Obama Presidential library is unlocked and declassified.

That’s gone and done, and now Ellen DeGeneres thinks it’s okay to just pretend that that torture and unjust wars didn’t happen, and to enjoy football games with a war criminal who gets pleasure out of torturing alleged terrorists without due process. And lying about it while he was in office.

So as far as I’ve looked (and I looked pretty hard) HAARP is a closed project. Done. It didn’t work. It doesn’t surprise me: hurricanes are more powerful (by magnitudes) than strategic nuclear weapons (as the NWS has had to explain why we don’t use nukes to disperse them). In the meantime, while you’re freaking out about HAARP, you’re not freaking out about the things I freak out about, and think you should be freaking out about. Thirty billion dollars could feed the entire world for a year. And yet we have plenty of billionaires who don’t do that. Why don’t they? Let’s freak about that.

There are so many better things to freak about than a retired military experiment.

Anonymous Coward says:

Re: Re: Re:12 Tin foil caps and all that.

HAARP itself may be shut down in Gakona, Alaska, but the technology has been implemented into smaller weapons and is now aboard vessels around the globe. There are many diode antenna arrays in many countries. It is very much alive, though it is an abomination to humankind, illegal and banned. The devastating effects on the environment, on the ionosphere and beyond are long-lasting. The experimentation continues. The secrecy continues.

Uriel-238 (profile) says:

Re: Re: Re:13 Tin foil caps and all that.

Feel free to contradict what is commonly known by providing evidence from other sources. In the meantime, I find it alarming that this is what you fixate on, when humanity is threatened both by climate change and the massive populist movements that are fueled by wealth disparity and intrinsic institutionalized corruption.

I can think of a half dozen situations that are more critical even if HAARP was an operational secret weapon. At the moment, I kinda wish it was, since an effective weather-control device might buy the human species time while we find a way to return the earth back to habitability.

In the meantime, no, a tin-foil cap won’t work as a Faraday cage; rather, its edge will serve as an antenna, making your brain even more vulnerable to manipulation by electromagnetic radiation. You’d be better off building an actual Faraday cage into your house. Of course, it will block cell phone signals as well.

Anonymous Coward says:

Re: Re: Re:14 Tin foil caps and all that.

I am not fixated on HAARP or anything else for that matter. I only shared some of that freely, to enlighten as to what we are facing from a government that has been usurped. It was only one of the many ways we are being killed off by the new world order. I don’t have time to gather all the links to everything I’ve read. Many don’t give a shit. That’s OK. Some already know. But to be put down by some ignorant bastard pisses me off, because I don’t actually know whether that person isn’t part of the government, trying to mock the truth under a pseudo guise by declaring conspiracy theory. But mostly, I don’t come here to lie about what I find happening in the world. Prove me wrong.

Uriel-238 (profile) says:

Re: Re: Re:15 Lies and beliefs

I didn’t say you were lying. But many people do lie to me, or come to me with outrageous claims. Extraordinary claims require extraordinary evidence, and this is something I could not easily find on my own, nor in what you have provided. Surely I am not the first person you’ve encountered who was skeptical. We live in a society in which major institutions play us for fools and capitalize on forcing us to make decisions with incomplete information. Even our police are allowed by law to lie to us to secure a conviction. Under those circumstances I’d think you’d understand that skepticism is a necessary survival mechanism.

So yes, call me ignorant. I’ll admit there is a lot I don’t know. But weighing your claims against the evidence I have, it’s highly likely they’re untrue, and until I have sufficient evidence to change that probability, I’m going to assume it is safer to doubt.

This is the era of fake news. I am lied to by my elected officials every day. We are lied to by teachers, religious ministers, law enforcement officers, salespeople, bureaucrats and state agents. When you take offense at my skepticism, it only increases the likelihood that you intend malice, to play me for a fool in some scheme I have not yet determined. How do I know you do not intend to prey on my meager assets?

Why should I believe you Anonymous Coward?

Anonymous Coward says:

Re: Re: Re:16 Lies and beliefs

Why should I care if you believe me, Uriel238? I am not going to go out of my way to prove to you I am not lying. The information is still out there. If anything sparks an interest, go search as I did. You make a riveting case for not believing anything anyone tells you. That is the sorry state of mankind.

Anonymous Coward says:

Re: Re: Re:16 Lies and beliefs

I guess in a way it’s better to stay willfully ignorant of matters you could do little to change even if you wanted to. Maybe I ruin your bliss. I’m not the one doing these nefarious atrocities. You will believe what you want to in the end, as will the rest of the world. That is why it is so gut-wrenching to know they are completely depending on that.

Anonymous Coward says:

Re: Re: Re:17 Lies and beliefs

This article states that officially the Gakona, Alaska HAARP is abolished, but if so, why don’t they tear it down? There is a statement from the government on its capabilities.

Anonymous Coward says:

Re: Re: Re:13 Tin foil caps and all that.

Many leaders around the world have built this system cheaply, in relative terms, for $10M. Who said I was freaking out about it? They have reversed the magnetosphere on us several times. If there had been a strong coronal mass ejection headed our way from the sun during those experiments, I wouldn’t be poking at my cellphone’s keyboard now or ever again, and you couldn’t be here reading.

nasch (profile) says:

Re: Re: Re:11 Armed Military Robots

I would appreciate it if you back off from treating me like an idiot with a tin foil cap.

I don’t think you’re an idiot. You are clearly a conspiracy theorist though, and you are putting forth quite remarkable claims with no evidence whatsoever. You know the saying: extraordinary claims require extraordinary evidence. If you don’t have any evidence to show, don’t expect to be taken seriously outside your conspiracy circles.

Uriel-238 (profile) says:

Re: Re: Re:13 Conspiracy theories.

Actually, we prefer the term fringe hypothesis. Theory gives them too much credit, and the "only a theory" argument regarding biological evolution is one of the reasons we can’t have nice things anymore. And government, corporate or syndicate conspiracies are only a subsection of fringe hypotheses.

And facts are a good thing, if you have any, such as an event in which recorded observations might lend evidence to a fringe hypothesis. With a dearth of facts, we have to assume that fringe hypotheses are not factual.

Mad science isn’t what it used to be.

Anonymous Coward says:

Re: Re: Bring It!

I’m with you. My 4’5" mother-in-law used to look THROUGH the steering wheel to see. If you were next to her in a different car, you couldn’t see her. Now imagine every other car around you with no driver, and potentially no passengers either. I think the people running these companies have never actually driven a car.

Rekrul says:

Even if the car wasn’t programmed to stop and the driver wasn’t paying attention, I’m still curious how this happened.

I mean, when I cross a road outside of a crosswalk (sometimes a virtual necessity in the suburbs, where crosswalks can be scarce), I always assume that any cars on the road aren’t going to stop or even slow down for me. Ideally, I wait until there is no visible traffic in either direction for at least a few hundred feet. Double-ideally, a traffic light is red, stopping the traffic in one direction.

Not to be a victim-blamer, but did she try to rush across the road in front of the car, assuming that it would slow down to let her pass?

Uriel-238 (profile) says:

Tailgating autonomous vehicles

That’s a very good question.

I would guess not in the short term. Early on, they are emulating the behaviors of human drivers, particularly so as not to alarm human drivers.

Once society is used to robotic cars, they may shorten following distances while still staying within a safe following distance.

At some point, when driving software is jailbroken, renegade versions may be tweaked to allow for actual tailgating for slipstreaming benefits (at the expense of the lead car). Driving software will then be adjusted to deter the practice.

Like much of our other technology, there may be an ongoing cycle of odious behavior and countermeasures (see robocalls, email spam) that endures until it stabilizes or turns into yet another government regulatory department.

Anonymous Coward says:

Two Corrections

There are two errors in the article that need fixing please:

Self driving cars are remarkably safe, and most accidents involve autonomous vehicles getting confused when people actually follow the law (like rear ending a human-driven vehicle that stopped at a red light before turning right).

That article is about humans getting confused and rear ending self-driving cars, not the other way around.

The NTSB report found that Uber staff had also disabled Volvo auto-detection and breaking software…

That should be braking software (as in "brake pads"), not breaking.

Thank you.

Coyne Tibbets (profile) says:

Re: Two Corrections

I have to second this, because the misuse is driving me bonkers.

A brake is a device that slows a mechanism by friction. The device that stops a vehicle is a brake. Using the device to stop the vehicle is to brake, braking or braked, depending upon tense.

A break is either an interruption (I went on break to have some coffee; I will break into their conversation) or a destructive failure of something (bones break, bottles break, your vacuum cleaner will break if you try to use it to clean your lawn). In the past tense, we say these things are broken.

You might say the car braked to a stop, but it is not the same thing at all to say the car has broken. Likewise, saying you will break my car will get you invited to stay away from it, because it is not at all the same as you saying you will brake the car.
