Self-driving cars are often touted as the way of the future. Researchers, designers and some safety advocates say they will make roads safer by eliminating human error. While it is true that most motor vehicle accidents happen as a result of human error, a number of issues still need to be ironed out regarding how self-driving cars operate. If an accident happens and it is the fault of the self-driving car, the people injured in the accident might be able to file a product liability lawsuit.
Already, people have died in accidents involving self-driving cars. In March 2018, a pedestrian in Tempe, Arizona, was the first to be killed in an accident involving a self-driving car. Uber, the company that owned the car, quickly settled a lawsuit with the family out of court. One issue that arose in this case was that the car detected the obstacle in the road only in the final seconds and classified it in three different ways, each of which called for a different course of action. Furthermore, the car's emergency braking system was not enabled, and the human in the car was not alerted.
Another fatality involving a self-driving vehicle occurred in 2016, when the driver of a Tesla died after it crashed into a tractor-trailer. In that incident, however, the driver was alerted to take back control from the autopilot and did not respond.
One of the biggest challenges facing designers of self-driving cars is how to handle situations in which the car must take action but any course of action may lead to injuries or fatalities. For example, a car might need to turn or brake in order to avoid a person or vehicle just ahead, but doing so might endanger other vehicles or the driver and passengers in the self-driving car itself.
These dangers assume that the car's artificial intelligence is working perfectly. Unfortunately, anyone who has ever worked with a computer can confirm that even the best-designed software has bugs or simply fails to operate as it should some of the time. Self-driving cars are likely to have multiple layers of fail-safes built in, including a way to alert a human operator that action must be taken, but nothing performs perfectly 100 percent of the time. There is even the possibility that a self-driving vehicle could be controlled remotely by a hacker.
Before self-driving cars can take to the road in any significant numbers, legislators will also have to determine who will be at fault if there is an accident. The trend is already leaning toward holding the manufacturer responsible.
The idea behind product liability laws is that companies have an obligation to design products that are safe for consumers. One element of a product liability lawsuit is whether the company was negligent in some way in ensuring the safety of the product. However, lawsuits can generate significant negative publicity for a company, and, as Uber did in the pedestrian case, some manufacturers may prefer to settle quickly rather than draw the case out in court. By design, self-driving cars are complex mechanisms, and greater complexity means a greater likelihood of failure in some part of the system. Product liability lawsuits may therefore increase along with the widespread adoption of self-driving cars.