A woman walks her bicycle across a highway and sees an approaching car. She waves at the driver, thinking the car will brake to allow her to pass safely.
Instead, the car plows into her, knocking her off her bicycle. The driver of the car never even bothered to slow down.
A classic hit-and-run accident? Not exactly. The car didn’t have a “live” driver behind the wheel. It was a modern “self-driving” vehicle of the kind that pioneering companies like Uber and Tesla claim is a safer and more affordable alternative to today’s human-driven cars.
There’s no doubt that human error plays a major role in the tens of thousands of driving accidents that occur on the nation’s roads and highways. Drivers may be drunk, driving too aggressively or simply distracted, and in a split second disaster occurs.
But will self-driving cars make things better or worse? Worse, say critics, because self-driving technology can’t possibly account for the full range of contingencies that can lead to traffic accidents. They want the introduction of self-driving cars delayed indefinitely, and maybe even scrapped.
But supporters of self-driving cars say the technology is improving and that the prospective advantages of self-driving cars are too great to be ignored. Who’s right?
Bugs and more bugs
In fact, the hypothetical scenario cited above is not that far off from reality.
In early tests of self-driving cars, real accidents, including fatalities, have occurred.
All told, there have been five major fatal accidents involving self-driving vehicles. In most of these cases, it was the human driver, not a passenger or pedestrian, who perished.
Supporters of “autonomous vehicles,” or AVs, say that humans, not the self-driving technology, are largely to blame for these accidents. Some official data supports their claim.
For example, an Axios analysis of incident reports filed with the California Department of Motor Vehicles between 2014 and 2018 found that people were responsible for 81 of the 88 accidents involving AVs.
But other evidence points the finger squarely at the AVs. In one highly publicized incident in Tempe, AZ, a woman walking a bicycle was killed when a self-driving Volvo with a human driver at the wheel failed to see her in time to stop or swerve.
The National Transportation Safety Board (NTSB) investigated the incident and concluded in a report released last week that basic flaws in the AV technology were to blame.
According to the report, the vehicle’s self-driving technology could not recognize a jaywalker the way it recognized a pedestrian moving in a designated crosswalk. In effect, the car couldn’t “see” the bicyclist, who was crossing an open stretch of highway, until it was too late.
The NTSB also blamed flawed AV technology for a 2018 fatal accident involving a Tesla self-driving car that ran into a stationary fire truck.
Manufacturers say such incidents are unfortunate but claim that they have fixed the problems in the AV technology to prevent a recurrence. But have they?
“We have the technology”
The basic technology behind self-driving cars is fairly simple – too simple, critics say.
Tesla uses a radar-based system known as Autopilot, which allows for self-driving on the open road but with human drivers still on hand. The technology can also “see” beyond the car directly ahead, helping drivers anticipate possible problems.
Some Tesla drivers have credited Auto-Pilot with allowing them to avoid accidents.
Google’s self-driving technology relies on LIDAR, which uses light waves rather than radio waves. LIDAR is superior to radar in its ability to detect objects in high resolution, but dust, rain or fog can reduce its effectiveness.
The industry recognizes six different levels of driving automation, from 0 to 5, depending on how much human intervention is still involved. Only the top two levels, 4 and 5, envision completely autonomous cars. The lower levels require humans to be on hand to intervene when problems arise.
Currently, no level 4 or 5 cars are operating full-time on public roadways, but they could be within a decade.
Many of today’s cars already involve a degree of automation, including cruise control and self-braking and self-parking features that use sensors to measure distances and guide vehicle navigation.
But critics say these partial self-driving features can’t compare with the sophistication, and risk, involved with a driverless car operating on the open road, especially without the benefit of a stand-by driver to intervene as needed.
The public still has doubts
What do American consumers think about AVs? They’re still deeply skeptical.
In one poll of 2,586 people, 65% of respondents said they were not likely to purchase a self-driving car, and only 6% said they were “extremely likely” to do so.
The main reason? Safety. Even as prospective passengers, just 40% of survey respondents said they would feel “safe” in a self-driving car.
Some of these polling responses are likely based on the negative publicity that has surrounded the Volvo and Tesla crashes.
In fact, other surveys suggest that if AV technology improves, consumers would likely welcome the introduction of self-driving cars.
For example, a survey of 5,500 consumers and 280 automotive executives conducted by the consulting group Capgemini found that consumers were attracted to the idea of saving an average of 6.5 hours of time otherwise spent driving by relying on self-driving cars.
“It’s not just safety and the technical aspects of autonomous cars that will determine their adoption rate — it’s also the consumer experience,” Markus Winkler, vice president, global head of automotive at Capgemini, said in an interview with Automotive News.
Testing the precautionary principle
Ultimately, the debate over self-driving cars comes down to the level of acceptable risk.
There are two schools of thought about how safe new technology must be before it is formally introduced and blessed by government regulators.
Free market fundamentalists believe consumers should be left to decide these matters for themselves, based on available product information and their own predilections.
Supporters of vaping, for example, have long maintained that governments have no business trying to regulate vaping products as long as there is a consumer demand and no blatant false advertising is involved.
But under the classic “precautionary” principle, governments have a responsibility to ensure that new products are “safe” prior to their introduction into the marketplace.
Vaping, which enjoyed relative freedom in the marketplace until recently, has come under fire after a string of deaths and hospitalizations among teen vapers appeared to confirm both the fears of vaping’s critics and the results of scientific studies suggesting that vaping mist can damage users’ lungs.
One reason self-driving cars may have gotten introduced so quickly without prior testing is that the same kind of technology has already proven its worth in other industries – for example, merchant shipping.
But critics say that cars are not ships. Many “robot” ships are still piloted remotely offshore, and their routes and trajectories are calculated precisely in relation to all other shipping traffic. Course corrections can be made fairly easily, and potential collisions averted.
But on busy streets, there are far too many vehicles – and variables — in play, critics say. Sudden changes in plans and movements may occur and there is no way of predicting – or adjusting to — the wide range of contingencies that might arise.
Right now, the jury is still out on self-driving cars. Economic imperatives – and lobbying from Tesla, Uber and a host of other prospective AV manufacturers — are creating pressure for their introduction even in the face of consumer doubts and the acknowledged risk to human life.
Overall, there’s a general perception among car industry stakeholders that the prospective advantages of driverless cars will eventually outweigh the perceived risks.
But given a host of obstacles – including the right of each state to impose its own “rules of the road” on the vehicles operating in its jurisdiction — mass production and full-scale public acceptance of AVs may still be years away.