Cruise Suspends All Driverless Operations Nationwide

Either you’re having a stroke or this is something ChatGPT generated.
See? Another perfect example of technology run amok. We can’t even get autospell right. This “autonomous” stuff might work on rail and sea, but not on our bumper-car highways.
It really is apropos to this story. You rum everything through spill chuck, and because the words look similar enough at a glance you don’t catch the errors.
Did you mean ‘catch the terrors’? I agree that this is pretty frightening.
Autocorrect is AI-assist, and when you apply it without checking, you display the mental acuity of a stroke victim.
I think people are finally realizing the whole “move fast and break things” ethos isn’t an acceptable model for a large range of endeavors (autonomous vehicles being an obvious one; medical treatments being another). A startup company’s profits shouldn’t be built on an idea that some number of human injuries or deaths is acceptable.
Why, just because it is a startup?
Massive construction projects budget for deaths… the Panama Canal killed 30,000.
Auto recalls are usually balanced on payout vs. cost.
Here’s a lovely item at NIH: “Development of Framework for Estimating Fatality-Related Losses in the Korean Construction Industry”
https://www.ncbi.nlm.nih.gov/p… [nih.gov]
There are many such calculations… why are startups to be forbidden such ‘luxuries’?
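(As an aside, the “payout vs. cost” balancing described above is just expected-value arithmetic. A minimal sketch in Python, with every number invented for illustration:)

```python
# Hypothetical recall-vs-payout arithmetic of the kind described above.
# All numbers are invented; they resemble no particular real case.
units_on_road = 1_500_000       # vehicles affected by the defect
recall_cost_per_unit = 11.0     # dollars to fix each one
expected_deaths = 180           # projected fatalities if not recalled
settlement_per_death = 200_000  # expected payout per fatality

recall_cost = units_on_road * recall_cost_per_unit
payout_cost = expected_deaths * settlement_per_death

print(f"recall:  ${recall_cost:,.0f}")
print(f"payouts: ${payout_cost:,.0f}")
# The grim logic: whichever number is smaller wins the spreadsheet.
print("recall" if recall_cost < payout_cost else "pay out")
```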
The US built the Panama Canal, so it wasn’t purely a third-world project. However, most (an estimated 22,000) of the cited deaths occurred during an earlier French attempt to build the canal that ultimately failed. Those deaths were overwhelmingly due to tropical diseases, particularly mosquito-borne diseases that were not yet recognized as such, so people had no means to protect against them. The US attempt had a much lower death rate, but there were still many deaths from disease. The total number of deaths from workplace accidents was high, but nowhere near 30,000.
The self-driving startups are killing people who aren’t even associated with them.
Note that Ford took quite a beating in court when it came out that they were weighing payouts for deaths against recall costs. Also note that willing employees are a bit different from unwilling 3rd parties, and that the Panama Canal was built in a different time.
“Move fast and break things” (people) is incompatible with modern standards for individuals and businesses where fatalities could be expected.
To be fair, driving itself is based on the idea that some risk is acceptable for mobility.
Mobility is also a type of profit.

If anyone remembers the McDonald’s hot coffee lawsuit: the reason the payout was so large was that McDonald’s was already aware of the risk of severe burns prior to the incident that was sued over. Penalties increase drastically if the business in question is aware of the risks.
Some grifts are timeless.

https://www.cnn.com/2023/10/25… [cnn.com]
Yeah, it was expected.
It seems every time self driving cars have been in the news for problems, it was a Cruise car. Read about traffic jams, cars piling up, intersections blocked, and Cruise was the cause.
The other companies seem to have solved those issues and, based on videos that occasionally make the rounds, are far better than human drivers, not just at reacting to an unexpected swerve but at detecting and avoiding hazards before swift reactions are even needed.

> It seems every time self driving cars have been in the news for problems, it was a Cruise car…The other companies seem to have solved the issues,
100% false. You just haven’t been paying attention.
Cruise was definitely less skilled than Waymo and dragged down the image of both companies, but neither company provides a straightforward way for the public to provide feedback or report active problems (e.g., akin to “how’s my driving?”).
They do provide a phone # to first responders, which apparently isn’t good enough, since they still resort to breaking the windows of these cars in emergency situations. Cruise has a unique name on each car, but Waymo doesn’t even bother with that. In the case of Waymo, the Waymonauts I’ve spoken with think it’s quite sufficient to have a passive web form on their website. Neither company’s safety drivers seem to report when their vehicles do something in violation of the law (I doubt the drivers have any training in or familiarity with the state’s vehicle code).
Of course waymo has nothing more than a web form.
Have you ever tried to contact a real human with authority at google to get customer service or report a real problem?
Lmao, so you’re in the middle of a google caused IT crisis and… you sent a letter? And this is ok support for a paid business level service?
And I’m the stupid one?

Cruise has been operating taxis in SF for a while now. There must be enough data on total damage / injury per passenger mile to make a fair comparison with human drivers. It’s not surprising that automated vehicles will make different mistakes than humans; the safety issue is based on the overall rates.
They put out this study [getcruise.com] a month ago that claims drastically better safety (94% fewer crashes than human where the AV was the primary contributor). I’m sure they influenced the metrics/design of the study, but I don’t doubt the AVs cause fewer accidents. I think the halt is more to do with their over-caution causing them to stop and block streets (and emergency vehicles).
I just got a few of the local youngsters who drive stolen cars round town for kicks to do a study on their own driving record and they, too, reckoned they were 94% better at driving than the ordinary driver.
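Snark aside, the per-passenger-mile comparison asked for above is simple arithmetic once you trust the inputs, and the inputs are the hard part. A minimal sketch, with every number made up:

```python
# Toy per-million-mile crash-rate comparison. Every number here is
# made up; real AV/human baselines are self-reported, disputed, and
# sensitive to road type, city, and what even counts as a "crash".
av_miles   = 5_000_000   # hypothetical AV fleet miles
av_crashes = 12          # hypothetical at-fault crashes in those miles

human_crashes_per_million = 4.2   # hypothetical human baseline

av_rate = av_crashes / (av_miles / 1_000_000)
print(f"AV:    {av_rate:.2f} crashes per million miles")
print(f"Human: {human_crashes_per_million:.2f} crashes per million miles")
print(f"Reduction: {100 * (1 - av_rate / human_crashes_per_million):.0f}%")
```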

> I think the halt is more to do with their over-caution causing them to stop and block streets (and emergency vehicles).
You’re wrong. The halt is because they lied to the DMV. They got kicked off the streets in San Francisco, and removed them in other cities before getting kicked off there, too.

It’s fine to test self-driving cars on the street, but they need a safety driver until they are ready.
> I think the halt is more to do with their over-caution causing them to stop and block streets (and emergency vehicles).
I think it’s more because of the accident. That’s why California pulled their license. I think it’s definitely out of caution in that they want to know what exactly happened to cause it – and in the meantime, halting operations just in case it’s a fleetwide fault is generally a good idea.
This happens in a lot of fields – in aviation, any aircraft crash has the potential to ground the entire fleet…
I agree with you that there is a need to properly motivate the designers and owners of the AI appliances.
If I run someone over I can do real jail time for vehicular manslaughter.
If a robo car runs over the same person in the same circumstances, who goes to jail? No one.
Once someone at the company has the same risk and responsibility as I do, then I have no problem having robo cars on public roads. There is zero chance any of these cars would be out there if a company exec were at risk of jail and a felony conviction.
With companies, the risk isn’t going to jail, it’s losing a lawsuit and having to pay out huge amounts of damages.
Not that either outcome is particularly relevant as a motivation; these companies aren’t trying to be reckless, it’s just that autonomous driving is a difficult problem to solve with 100% reliability. Holding onerous consequences over their heads isn’t likely to make them perform better than they would have otherwise.
It is not likely to make them better, but it is likely to make them stop overstating the capabilities. The roads, the self-driving technology, and the laws of today are not ready for this experiment with public lives. That is what will stop if execs are threatened with jail time in the case of human injury, death, etc.
Fines, company shutdowns, etc. are no deterrent: fines come from investor money, and the execs can always find another job.
So what happens when a driverless car gets into a collision and you are injured?
Do you get to sue the manufacturer, the owner of the vehicle, or the other victim that had the temerity to collide with your robot-driven tank?
In the near future, when you are run over by a self-driving Uber, they can argue that their 2-person legal defense team is too busy being sued by 10 other victims and that your case would have to be scheduled sometime in 2050. Do you think a judge can force Uber to hire more lawyers? Or…
Fortunately court scheduling doesn’t work that way.
Anyway, the real problem isn’t who to sue. The problem is when they kill someone no one goes to jail but when a human does the exact same thing they get hit with a felony vehicular manslaughter charge and do time.

> The problem is when they kill someone no one goes to jail but when a human does the exact same thing they get hit with a felony vehicular manslaughter charge and do time.
Do they, though? Maybe if it can be shown that they did it on purpose, or if they were driving drunk, or they fled the scene of the accident.
If (outside of hitting someone) they “do the right thing” (i.e. pull over immediately, call 911, aid the victim as best they can, are honest with the police about what happened), they likely won’t be prosecuted; at worst their insurance rates will go up or they’ll lose their license.
I happen to believe that Tesla is on the right track with regard to their driver-less technology road map. Instead of writing procedural code for all the cases that will arise (impossible) they are attempting to train their neural net with as much real-world data as they can grab. And they have more than anyone — a half billion miles of beta-FSD logged.
Yet then I look at a video like this [youtube.com], where it fails. The maker of the video is (a) not a Tesla hater, and (b) designs and executes a really good test case for a very basic driver-less software requirement: not hitting a kid in the road.
It fails. The latest software version does better than most, but it still hits (oh so gently) the simulated kid and dog and once they are on the ground then proceeds to run over them and continue the trip! Obviously, once the figures are knocked down and under the front bumper the camera-only AI can no longer see them. It isn’t aware an accident happened. It sees a clear road so tries to proceed.
Watch the video and make your own judgement.
I just don’t see how any more neural net training would solve this if it has not already. As human drivers, we have all sorts of cues that an accident has happened or we hit something. A bump. People screaming. Honking horns. A scraping sound from under the car. Both Tesla and Cruise seem to have none of this, and their driver-less car will proceed to try to complete the trip if it can, because that is what it is programmed to do, because that is what pays. It will attempt to do so with blood on the fender because it can’t see it.
The fact that Tesla and the others are failing very basic tests like this at this late stage — years and years after robotaxis were promised — does not give me a lot of confidence that true self-driving cars are shipping soon.
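To make the “cues” point concrete, here is a toy sketch of the kind of cue fusion a human does automatically after a possible impact. Every signal, threshold, and score below is invented for illustration; nothing here reflects Tesla’s or Cruise’s actual stack:

```python
# Illustrative-only sketch of fusing cheap post-impact cues into a
# "stop and check" decision. All signals and thresholds are invented.
def accident_suspected(bump_g: float,
                       scrape_db: float,
                       horn_or_screams: bool,
                       obstacle_vanished_under_bumper: bool) -> bool:
    """Fuse cheap cues into a decision to stop and investigate."""
    score = 0
    if bump_g > 0.3:                    # unexplained jolt (assumed threshold)
        score += 2
    if scrape_db > 60:                  # dragging/scraping noise from below
        score += 2
    if horn_or_screams:                 # bystanders reacting
        score += 1
    if obstacle_vanished_under_bumper:  # tracked object lost at the bumper line
        score += 3
    return score >= 3                   # err on the side of stopping

# The safe default once suspicion trips: stop in place, don't "pull over".
if accident_suspected(0.4, 20, False, True):
    print("STOP; do not attempt to move; alert remote operator")
```

The parent’s point, of course, is that a camera-only system may not have most of these signals to fuse in the first place.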
> The latest software version does better than most, but it still hits (oh so gently) the simulated kid and dog and once they are on the ground then proceeds to run over them and continue the trip!
I suspect that the driver monitoring system noticed that the driver was both (1) fully attentive and (2) not shocked when the car drove over the simulated kid, and correctly deduced from these clues that the kid wasn’t real.
A wiser car would have played along with the humans trying to fool it.
Then what’s a better test? Asking a real kid to run across the road?
No, just a driver who thinks he ran over a real kid.
So you think it’s ok for the car to splatter a kid and dog because it “detects” the mood of the driver?
Wut?
When it’s an AI driving the car, it needs to be a double blind test.
When it’s an AI driving the car it needs to not run over Timmy and his dog.
Did you even spend 30 seconds on the OP’s link? Try it. Watch Timmy get flattened a few times and you’ll have a different perspective on AI driving safety.
https://www.youtube.com/watch?… [youtube.com]
> Did you even spend 30 seconds on the OP’s link? Try it. Watch Timmy get flattened a few times
Timmy was a doll.

> I suspect that the driver monitoring system noticed that the driver was both (1) fully attentive and (2) not shocked when the car drove over the simulated kid
1 I believe, 2 I don’t.

> The latest software version does better than most, but it still hits (oh so gently) the simulated kid and dog and once they are on the ground then proceeds to run over them and continue the trip!
> I suspect
Wrongly, oh so wrongly.
> that the driver monitoring system noticed that the driver was both (1) fully attentive and (2) not shocked when the car drove over the simulated kid, and correctly deduced from these clues that the kid wasn’t real.
Ok, let’s ignore the insanity of #1, where you think it’s fine to hit an unexpected object because the driver was seemingly paying attention.
You think the car is deciding “oh, the driver doesn’t look too concerned about the kid-looking object I just hit, so I’ll drive right over it”.
I’ll tell you exactly what happened in that test.
The Tesla weirdly underreacted to the unexpected obstacle in the road, which a human driver would also likely do, but it’s a big disappointment since instant…
Seems like the limiting factor, the final difficult push to making these viable, is emulating not just the vast amount of visual and auditory information but the mass of context around that information that the human brain is able to process in short order, letting us act on information that even our eyes and ears don’t necessarily perceive.
Since we all inherently operate that way, it’s sometimes easy to forget just what a massively complex instrument we are working with, irreplaceable…

> their driver-less car will proceed to try to complete the trip if it can, because that is what it is programmed to do, because that is what pays. It will attempt to do so with blood on the fender because it can’t see it.
It occurs to me there’s a similarity between the above and what chatbots do that’s labeled “hallucinating”, which is that the chatbots spew lies/bullshit just as confidently as they “recite facts”… similar to the car just deciding to go forward. And the reasons are basically the same: neither system has a true understanding of what’s transpiring.

> The fact that Tesla and the others are failing very basic tests like this at this late stage — years and years after robotaxis were promised — does not give me a lot of confidence that true self-driving cars are shipping soon.
Who cares? As long as the numbers add up in the AI’s driving favor, that is all that matters. Kids get run over by humans all the time. What does it matter if we let an AI do it too?
(this is not MY argument, it was forced on me)
I genuinely want to know how many accidents per passenger mile Cruise was encountering compared to the average human being. From what I’ve been hearing, it’s orders of magnitude better.
So a human got dragged underneath a car in some terrible accident. You wanna talk about accidents? When I was six years old, a friend’s mom got pinned between two vehicles and suffocated to death. Accidents happen. What matters most is objectively observing the rate at which accidents happen. And if Cruise has a lower accident rate, then let’s encourage the growth of this business, not throw the baby out with the bathwater. Let’s just make sure Cruise has a good insurance policy that takes care of these situations.
If we really want to save lives, we’d require everybody to drive smaller, lighter, and slower cars. If we’re not willing to make that sacrifice, then teaching computers how to respond faster than humans to threats to human safety is the best compromise available.
> Let’s just make sure Cruise has a good insurance policy
That’s what Nicole said.
If I ran over your mom I’d likely go to jail on a vehicular manslaughter charge.
If a robot runs over your mom, nothing happens to anyone. I guess the family could sue. Whatever.
A world where people can put 3-ton automated machines in public and not be held responsible when they kill people is an ugly, horrible world.
The number of accidents/mile is irrelevant and a very nerdy way of looking at things. Your mother is still dead from a robot and you should just suck it up.

> I genuinely want to know how many accidents per passenger mile Cruise was encountering compared to the average human being. From what I’ve been hearing, it’s orders of magnitude better.
You won’t be able to find out because Cruise just got kicked off the road for lying to the DMV. You can’t trust their numbers.

Self-driving car research is great, but it should be done with a safety driver.
There are two organizational lobbying arms that will make sure that there will NEVER EVER be driverless vehicles on public roadways in the US.
They are the insurance lobby, and the law enforcement lobby.
Together those two will make sure that SOMEONE can always be held accountable. It affects the insurance industry’s bottom line, and it affects LEOs’ ability to harass and arrest and ask stupid questions of motorists. Argue all you want, but cops don’t want a car they can’t pull a driver out of, or if they feel…
Arguably there already are [partially] self-driving cars on the road.
Sure, you have safety drivers and systems that attempt to make sure the driver is attentive, but ignoring that is ignoring the reality.
Systems will keep improving to a point where it will make financial sense to mandate self-driving, like mandating emergency braking and rear-view cameras. Legal frameworks and infrastructure (like V2V) are already being worked on.
Sure, it’s not yet full self-driving, and things like Cruise will be setbacks… but give it…
> Systems will keep improving to a point it will make financial sense to mandate self-driving
You base this future improvement on what? How do you know we haven’t already peaked out on self drive performance?
Or that the sufficient level of performance would be so expensive no one can afford it?
Driving is easy. Until it’s not. There are a zillion random events and edge cases every day. The current methods simply cannot account for all those cases. Humans can, because we can think and know context and recogn…

Anecdote: I live in an area with lots of wildlife. I ran over a snake last week. I saw it, but it was low to the ground and hard to see, and looked like a big stick until it started moving. I was going 50 with cars behind me, oncoming traffic, and one lane each way. What should the robot do, and why?
How did you make your decision about what to do? Did you mentally review the relevant traffic laws, identify the species of snake that was in your path, check its status wrt the Endangered Species Act, run Monte Carlo simulations of each option, etc.? Or did you just go with your gut instinct that it would be better to run over the snake than to swerve or panic-brake and risk causing an accident?
Because if it was the latter (and let’s face it, it was; you only had a few hundred milliseconds available to decide…
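For what it’s worth, the Monte Carlo simulation jokingly invoked above is easy enough to sketch. Every probability and harm weight below is invented; the point is only that the gut call and the simulated expected costs can land in the same place:

```python
import random

# Toy Monte Carlo of the snake dilemma above. Probabilities and harm
# weights are pure guesses, chosen only to illustrate the comparison.
OPTIONS = {
    # option: (P(harm to humans), P(harm to snake)) per trial
    "proceed":     (0.001, 0.95),
    "panic_brake": (0.05,  0.30),   # rear-end risk from the cars behind
    "swerve":      (0.10,  0.05),   # head-on risk with oncoming traffic
}
HUMAN_COST, SNAKE_COST = 1_000_000, 100   # arbitrary harm weights

def expected_cost(p_human, p_snake, trials=100_000):
    total = 0
    for _ in range(trials):
        if random.random() < p_human:
            total += HUMAN_COST
        if random.random() < p_snake:
            total += SNAKE_COST
    return total / trials

for name, (ph, ps) in OPTIONS.items():
    print(f"{name:12s} expected cost: {expected_cost(ph, ps):>10.1f}")
# With these weights "proceed" wins -- matching the gut call, in ~1 ms
# of wetware rather than 100k simulated trials.
```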
>There are two organizational lobbying arms that will make sure that there will NEVER EVER be driverless vehicles on public roadways in the US.
>They are the insurance lobby, and the law enforcement lobby.
They don’t have the power to hold it back, if it is at all viable. There’s TOO MUCH MONEY other places.
>Together those two will make sure that SOMEONE can always be held accountable.
Of course someone will always have to be accountable. You say that like it’s a bad thing.
But if your car is “in the…
Note the new info that the victim was dragged because the robot thought it would be a good idea to pull over after it hit her.
And the obstacles in the way are not a matter of engineering and incremental increases in processor power. The obstacles are on the conceptual end. There’s no technology available today that it is reasonable to believe will ever be sufficient to the task.
> Especially considering the root cause of the incident was in fact a human driver.
Maybe that’s how it started. But what happened after that seems pretty awful:
“after the traffic light turned green, giving the Cruise car and other car — which had been waiting side-by-side for the light — the right to enter the intersection where a woman was walking … in the crosswalk … The other car struck the woman and she rolled off its side and into the path of the driverless taxi, which was carrying…

But what actually happened sounds like it could be pretty awful behaviour even for a human driver that’s way over the alcohol limit: after being hit by the preceding vehicle the woman “rolled off its side and into the path of the driverless taxi”, the taxi then hits her, rolls over her, and proceeds to drag her for 20 more feet “attempting to move off the road”, all of this happening, it seems, at very low speed.
No, it really doesn’t. The information provided says they were at a stop, the light turned green so they started moving, and the other vehicle plowed her over into the taxi’s lane; it’s unlikely there was much warning. Then it stopped with the tire on top of her, hence she was pinned under it. Then it began moving to pull off to the side of the road, not knowing that it was dragging her along.
This sounds very mild compared to other incidents I’ve heard of before. 20 feet? Try getting dragged for miles.
And if a cop was there he would have arrested her for hit n run.
If a robo car does that, no one gets arrested. Or even loses their job.
Huge difference between being run over by a human who might be held responsible and a robot that has zero responsibility anywhere in the chain from development through hitting someone or after.

> If a robo car does that, no one gets arrested. Or even loses their job.
> Huge difference between being run over by a human who might be held responsible and a robot that has zero responsibility anywhere in the chain from development through hitting someone or after.
No, in this case the vehicle is treated like any other dangerous appliance, and it is subject to a recall. If a company builds and sells a dangerous crib, baby swing, trampoline, etc., the item is recalled and pulled from the shelves. There is no operator in the self-driving cars, so they must be certified for autonomous use, and if they fail that purpose then their certification to operate is revoked.
There are already laws for faulty appliances and just as California did, methods to punish companies that cause injuries.
And that’s exactly the problem. They can kill a person on a public street with no risk to themselves.
But the crib or other product is something I have to choose to buy.
I can be killed by a product I had absolutely nothing to do with, and they get away with it. I was never given a paycheck to be a live, on-the-street beta tester.
Nor was little Timmy or his dog in this video from only 3 months ago: https://www.youtube.com/watch?… [youtube.com]
You think it’s ok to have that shit on the streets? Why? It’s completely not…
There are tons of examples of product malfunctions that killed people who did not buy the malfunctioning product. Cars too, for example tire and brake malfunctions, or even the car just spontaneously catching fire — all of those have killed people who didn’t choose to buy the product (God knows that makes someone more deserving of dying). Seriously, the “who has liability?” question is a stupid reason to keep allowing humans to cause over 40,000 deaths per year, most of whom are pedestrians, other drivers…
> Anyway, the question should be how many deaths will be reduced by having autonomous vehicles on the road.
I don’t know. Maybe we could ask that, but I really don’t think we can now.
And maybe the data for the cars should be public; it seems too easy for the companies to hide their safety performance.
The 40,000 deaths question is just a stupid reason to advocate for self driving cars. The real issue at hand is the loss of freedoms that is sure to ensue once self driving cars take hold. Cars will not become smarter, but the world will change to accommodate them, reducing human freedoms. The world in which those 40,000 deaths will be avoided will not be comparable to the world now.
Remember, the concept (and the word too) of jaywalking was invented by motor car companies, because they knew accidents mean l

Shit, I’ve had a car that should have seen me from no less than 100 feet away in broad daylight hit me while I was in a crosswalk, after she ignored a stop sign. I put my palms on the hood, kicked my feet in the air, and landed softly about 5 feet back, so I didn’t receive any kind of injury, and right after I got out of the way she just took off.
And then everyone stood up and clapped.
> then it stopped with the tire on top of her, hence she was pinned under it. Then it begins moving to pull off to the side of the road, unbeknownst that it was somehow dragging her along.
Ok, that might be right: it stopped, then started rolling again. I assumed it didn’t stop twice; that didn’t make much sense to me, and we’re still trying to piece things together, like how or why the taxi had “attempted to move off the road”.
But is that worse: the taxi is stopped at the light, then crosses the intersection…
I love the idea of autonomous vehicles, but let’s take the blinders off. The car didn’t just run over the pedestrian, it dragged her and then parked on her leg.
Unlike a human driver, it had no ability to understand if someone pounded on the window and yelled “Get off of her leg, dumbass!”.
That is a real corner case, but it’s important to be able to handle those when maneuvering a several pound vehicle in public.

> That is a real corner case, but it’s important to be able to handle those when maneuvering a several pound vehicle in public.
Yes, even Cozy Coupe drivers can generally handle that kind of situation.
(We know what you meant.)

Based on the incident involving the pedestrian who was run over, it sounds a bit like people are expecting autonomous vehicles to operate at a much higher standard than they expect human drivers to operate at.
Cruise lied to the DMV. They hid information about the accident. That is why they got kicked off the road.

If you haven’t read about the story in a couple days, then they lied to you, too. You should probably look at the updated account of what happened.
“although the new Corvette E-Ray is innovative”
You mean the Chevrolet NSX?
The “someday” was 30 years ago. This technology was worked out by MIT. Want to know both the secret to getting the technology to work and also the reason it was never implemented? Infrastructure. The researchers found that by embedding a steel spike in the road every 6 feet and putting magnetic-moment sensors under the front and rear bumpers, the car always knew when and where it was on the road and how to traverse it. You can even embed traffic signage in the roadway for the cars. This only leaves obstacle…
Lol
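Whatever the provenance of that claim, the scheme described is at least easy to sketch: a lateral-offset reading at each bumper gives both cross-track and heading error, which is enough for a simple proportional-derivative steering loop. All gains, signs, and the sensor interface below are invented:

```python
# Rough sketch of magnetic-marker lane keeping as described above:
# markers every few feet, one lateral-offset sensor per bumper, and a
# PD steering correction. All constants are invented for illustration.
KP, KD = 0.8, 0.3          # controller gains (assumed)

def steering_cmd(front_offset_m: float, rear_offset_m: float,
                 prev_error: float, dt: float = 0.05):
    """Return (steering_angle_rad, error) from the two marker offsets."""
    # Cross-track error: where the car sits relative to the magnet line.
    error = (front_offset_m + rear_offset_m) / 2.0
    # Heading error shows up as front/rear sensor disagreement.
    heading = front_offset_m - rear_offset_m
    d_error = (error - prev_error) / dt
    return -(KP * error + KD * d_error + 0.5 * heading), error

angle, err = steering_cmd(0.12, 0.08, 0.10)
print(f"steer {angle:+.3f} rad to re-center")
```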

tesla is 99% autonomous already
I have a Tesla and when using FSD in town, it seems to be as autonomous as a very drunk driver.
It drives itself fine on a freeway, which is nice on long trips.
Get rid of buses and put in automated streetcars. Make everything else illegal.