Tesla recall, robotaxi crash cause self-driving car crisis
It seemed like the ultimate tech dream.
Want to go out to dinner? Order a robotaxi. Road trip? Hop in your Tesla; don’t worry, it will drive for you.
But now the mixture of science fiction and marketing boosterism has collided with reality — and some in the car industry believe the dream will be stuck in the garage for a long time to come.
Last week virtually every Tesla on the road was recalled over regulators’ concerns that its “Autopilot” system is unsafe, part of a years-long National Highway Traffic Safety Administration investigation into Elon Musk’s cars.
Musk had been the biggest public booster of the idea of the car doing the driving, first promising immediate “full self-driving” in 2016.
The Autopilot system allows Teslas to self-steer, accelerate and brake, but requires an attentive driver behind the wheel. Tesla is updating the software but says the system is safe.
Tesla has not gone as far as others, which have launched live trials of cars with empty driver’s seats on city streets, among them GM’s subsidiary Cruise, Google’s Waymo unit and Volkswagen’s ADMT division.
But a hair-raising crash in San Francisco on Oct. 2 involving a Cruise AV called Panini — a driverless Chevy Bolt — has thrown the “autonomous vehicle” project into crisis.
It started when a woman was hit by a human-driven car on the city’s Market St. and thrown into Panini’s path, ending up sandwiched under it.
Panini stopped, but then, incredibly, restarted and dragged her 20 feet toward the curb at 7 mph. Firefighters had to use the Jaws of Life to free her.
On its own, it might have been dismissed as an error.
But it was just the latest in a long run of crashes, injuries and deaths in states that have allowed driverless trials.
This year alone, self-driving cars crashed 128 times in California. One Cruise AV hit a firetruck; another hit a bus. A Waymo car delayed a firefighter rushing to a 911 call by seven minutes. A Cruise Origin embedded itself in a building in Austin, Tex., then couldn’t be moved because it has no steering wheel. And in San Francisco, a Waymo car killed a dog while a Cruise AV got stuck in wet concrete.
Now industry analysts warn there’s no way AV manufacturers can quickly convince the public that driverless transportation is safe.
James Meigs, a senior fellow at the Manhattan Institute and former editor-in-chief of Popular Mechanics, said Panini’s San Francisco crash shows autonomous vehicles — “AVs” — aren’t “ready for the wild.”
“It’s kind of like everyone’s nightmare — you know, the robot doesn’t stop,” Meigs told The Post.
The issue isn’t that autonomous cars are objectively less safe than regular ones. They can’t be impaired by alcohol or drugs, which a federal study found in up to half of drivers in serious or fatal crashes, and they have claimed nothing close to the 42,795 lives lost to car crashes in the US in 2022.
In fact, said Jason Stein, former publisher of industry bible Automotive News and host of podcast “Cars & Culture with Jason Stein,” they’re far safer than putting a human behind the wheel.
“Humans might not like to hear it, but the technology that’s come out in the last decade is far superior to anything humankind could ever accomplish with driver training,” Stein said.
“That doesn’t mean today’s drivers trust it any more, but one day we may look back and wonder why we ever let anyone on the road in the first place.”
The real crisis is in making humans trust the tech — something Cruise appears to have failed to do.
After Panini’s crash, California’s DMV alleged, Cruise executives showed officials video of only the first part of the accident, in which the car stopped, and not the segment in which it restarted and dragged the woman.
The DMV suspended Cruise’s licenses to operate in California on Oct. 24 and is now considering a $1.5 million fine and further sanctions.
As a result, GM put the 950-strong Cruise robotaxi fleet, which also operated in Austin and Houston, in park. It called off preparatory work in Atlanta, Los Angeles, Seattle and San Diego, and forced out CEO Kyle Vogt in November.
Last week it fired nine top leaders and 900 workers, a quarter of its staff.
Cruise was a cross between Silicon Valley’s “move fast and break things” culture and Detroit’s marketing genius: GM bought the West Coast start-up in 2016 for more than $1 billion, and GM CEO Mary Barra called her first driverless ride through San Francisco in 2022 “surreal.”
GM, Honda, Microsoft, SoftBank and Walmart pumped in $10 billion, and Cruise pumped out good news, even when Panini crashed.
It said Panini’s reaction time of 460 milliseconds was “faster than most human drivers,” but then had to admit that the car dragged the crash victim because its human programmers had told it to pull over to the curb after any impact.
Prof. Krzysztof Czarnecki, who leads the Intelligent Systems Engineering Lab at the University of Waterloo in Ontario, Canada, told The Post that programming a car to move after hitting a pedestrian was a basic error.
“The need to be extra cautious after hitting a pedestrian – such as considering the possibility of them being under the car – should come up during even a simple brainstorming exercise of a safety engineering team,” he said.
“Even my students pointed this out to me. Failure to do so indicates that not enough attention was given to safety engineering.”
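For illustration only, here is a minimal Python sketch of the kind of post-collision rule Czarnecki is describing; every name and field in it is a hypothetical assumption made for this example, not Cruise’s or any company’s actual software.

    # Purely illustrative, hypothetical post-collision decision rule.
    # Not Cruise's (or anyone's) real code; names and fields are invented.
    from dataclasses import dataclass
    from enum import Enum, auto

    class PostCrashAction(Enum):
        STAY_STOPPED = auto()   # remain immobilized and summon human help
        PULL_TO_CURB = auto()   # move only when no person could be under the car

    @dataclass
    class CrashContext:
        struck_pedestrian: bool     # did the impact involve a person?
        blocking_active_lane: bool  # is the stopped car obstructing traffic?

    def decide_post_crash_action(ctx: CrashContext) -> PostCrashAction:
        # Any impact involving a person: stay put until responders confirm
        # the person is clear of the vehicle, even if that blocks traffic.
        if ctx.struck_pedestrian:
            return PostCrashAction.STAY_STOPPED
        # Otherwise, pulling over is justified only to clear an active lane.
        if ctx.blocking_active_lane:
            return PostCrashAction.PULL_TO_CURB
        return PostCrashAction.STAY_STOPPED

The point is simply that “pull to the curb after any impact” should never be the default when a person may be under the vehicle.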
The lack of a “strong safety culture” within some automated driving companies like Cruise is the primary issue plaguing the burgeoning industry, Czarnecki alleged.
Uber abandoned its attempt at robotaxis entirely in 2020 after its autonomous Volvo killed a woman in Arizona in 2018, prompting a damning National Transportation Safety Board finding of an “inadequate safety culture.”
The National Highway Traffic Safety Administration is now probing Cruise, not just over Panini, but three other incidents — one of which involved another pedestrian being injured.
“To save themselves, Cruise needs to radically rethink its approach to safety,” Czarnecki said, adding the company could publish all its internal data.
Google’s Waymo subsidiary, in contrast to Cruise, has a “strong safety record,” and its researchers publish their automated-driving data, unlike most other AV companies, according to Czarnecki.
Undeterred by Cruise’s troubles, Waymo is still offering driverless taxi rides in San Francisco, Phoenix and Los Angeles County, with Austin coming soon.
The company is testing operations in Buffalo, New York, to improve performance in the snow — although it does not plan robotaxis there, spokeswoman Katherine Barna said.
Waymo’s fleet of electric Jaguar I-PACEs operates with “fully autonomous technology”: 29 cameras, and detailed custom maps combined with real-time sensor data to determine each car’s exact location.
On the roof sit the “lidar” (light detection and ranging) system and the computing unit, whose size hints at the sheer amount of processing power needed to drive on city streets.
But Waymo, which began as Google’s self-driving car project in 2009, has no current plans to tackle the US’s toughest streets, those of New York City, Barna said.
Experts said the ability of machines to react safely to the countless variables on crowded roads is improving rapidly, in part due to advances in artificial intelligence.
But pedestrians darting into crosswalks, abrupt traffic changes or even wayward animals pose unique hurdles for the programmers, said Jonathan Hill, dean of Pace University’s Seidenberg School of Computer Science and Information Systems.
“You know, you have millions and millions and millions of exceptions – and that gets really tough to test for,” Hill said. “A decade into testing these vehicles and there’s still problems. They’re still cars, they’re not perfect machines.”
Hill predicted that the growing power of artificial intelligence will lead to relatively accident-free self-driving cars becoming available in some US cities within 12 to 18 months, but cautioned that the industry is not yet trusted by the very people it wants to use its cars.
“Science being up to the task is one thing, but human acceptance is another,” Hill said.
The problem perplexing the driverless-car movement most of all is how to get Americans to believe in the cars, when the idea itself is basically un-American, said Stein.
“What happens in the case of autonomous vehicles is you’re turning freedom or the decision-making ability over to something else that you don’t quite trust yet,” Stein told The Post.
“And it’s not on your terms; it’s on the terms of the microchip in the vehicle. I think that runs against the grain of the American philosophy and American freedom.
“Americans want to control freedom on their own terms.”