Who’s To Blame When Driverless Cars Crash?
In 2015, there were more than 186,000 casualties in reported road traffic accidents. Of these, 22,137 people were seriously injured and 1,732 died. With human error blamed for the majority of car accidents, the rise of driverless vehicles is predicted to change our roads for the better and save lives in the process.
However, although self-driving cars are set to reduce the number of accidents on the roads, concerns have been raised about their ethics and about the impact they’ll have on the insurance and legal sectors.
Can We Trust Machines to Get Us From A to B Safely?
Although experts predict that driverless cars will significantly reduce the number of casualties on our roads, self-driving vehicles can certainly make mistakes.
On 7 May 2016, Joshua Brown died when his Tesla crashed into a lorry while in ‘Autopilot’ mode. An investigation concluded that the car failed to apply the brakes because it could not distinguish the white side of the lorry from the bright sky.
Although Tesla noted that this was the first fatality in 130 million miles of Autopilot driving, it certainly wasn’t the first accident. There have been numerous crashes involving self-driving cars, including a collision between a Google vehicle and a bus in February 2016.
A Google spokesperson said: “On February 14, our vehicle was driving autonomously and had pulled toward the right-hand curb to prepare for a right turn. It then detected sandbags near a storm drain blocking its path, so it needed to come to a stop. After waiting for some other vehicles to pass, our vehicle, still in autonomous mode, began angling back toward the center of the lane at around 2 mph and made contact with the side of a passing bus traveling at 15 mph.
“Our car had detected the approaching bus, but predicted that it would yield to us because we were ahead of it. Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time.”
Who’s to Blame When Driverless Cars Crash?
The uncertainty demonstrated in the example above highlights just how difficult it could be to determine who is responsible following a driverless car collision, particularly if a self-driving car collides with a vehicle operated by a human driver.
In the wake of an accident involving a self-driving car, police officers, lawyers and insurers will need to assess how much control the people inside had over the vehicle. At present, Google’s cars are intended to be truly self-driving, needing no human intervention; Tesla’s vehicles, however, still require the driver to supervise and be ready to take over.
When the owner of a Tesla filmed the moment his car collided with a van parked in the fast lane of a highway, he blamed both the driver of the parked van and his self-driving car for failing to react. However, as one comment on his YouTube video pointed out, the car’s manual warns drivers that the vehicle might not brake or decelerate for stationary vehicles, meaning some intervention is still required.
The Tesla owner said: “Yes, I could have reacted sooner, but when the car slows down correctly 1,000 times, you trust it to do it the next time too.”
This highlights the importance of consistency and clarity. Drivers need to know whether they can trust their vehicle to react in the face of danger or whether they need to take control themselves. After all, if a driver doesn’t trust their car to move out of another vehicle’s way, they may intervene and make the situation even worse, swerving into oncoming traffic mere seconds before the car would have reacted in a much safer way.
In an essay about pilots and automation, William Langewiesche writes: “Automation has made it more and more unlikely that ordinary airline pilots will ever have to face a raw crisis in flight – but also more and more unlikely that they will be able to cope with such a crisis if one arises.”
With different manufacturers programming their vehicles based on their own research, there’s also a risk that two self-driving vehicles may collide as a result of conflicting programming.
Why Are Self-Driving Cars an Ethical Minefield?
When humans are faced with a crisis, it’s only natural for panic to set in and impulse to take over. In contrast, machines aren’t going to react on an emotional level and should, in theory, respond in the way they’ve been programmed to.
However, although the mass-adoption of driverless cars should lead to calmer roads free from stress, fear and aggression, it could result in accidents that humans deem unethical.
For example, one ethical dilemma manufacturers face is whether to prioritise passengers or pedestrians in a crisis. If a car were to round a sharp bend on a single-lane road and a child stepped out in front of it, leaving no time to brake, should the car continue forward and hit the child, or should it swerve into oncoming traffic, putting its own passengers and other motorists in danger?
Of course, this scenario would be difficult even if a human were in control of the vehicle. But when a machine has been programmed, months in advance, to make a particular choice, fingers are likely to point towards the manufacturer in the event of a severe or fatal crash.
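To see what “programmed months in advance” means in practice, here is a deliberately simplified sketch, not taken from any manufacturer’s actual software: the manoeuvre options, risk estimates and weights below are all hypothetical, but they show how an ethical trade-off can end up baked into a pair of numbers fixed at the factory.

```python
from dataclasses import dataclass

# Hypothetical manoeuvre options an emergency planner might choose between.
@dataclass
class Manoeuvre:
    name: str
    pedestrian_risk: float  # estimated probability of harming a pedestrian (0-1)
    occupant_risk: float    # estimated probability of harming the car's occupants (0-1)

# Weights decided long before any real emergency occurs.
# Changing these numbers changes who the car is biased to protect.
PEDESTRIAN_WEIGHT = 1.0
OCCUPANT_WEIGHT = 1.0

def choose_manoeuvre(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick the option with the lowest weighted expected harm."""
    return min(
        options,
        key=lambda m: PEDESTRIAN_WEIGHT * m.pedestrian_risk
        + OCCUPANT_WEIGHT * m.occupant_risk,
    )

if __name__ == "__main__":
    options = [
        Manoeuvre("brake in lane", pedestrian_risk=0.7, occupant_risk=0.1),
        Manoeuvre("swerve into oncoming traffic", pedestrian_risk=0.05, occupant_risk=0.6),
    ]
    print(choose_manoeuvre(options).name)
```

Even in a toy example like this, the moral judgement lives in a handful of numbers chosen long before the crash, which is precisely why questions of liability reach back to the manufacturer.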
Although experts are confident that self-driving cars will make the roads safer, accidents will undoubtedly still occur, whether as the result of poor programming, a malfunction or conflicting decisions from the machine and the driver. These collisions are sure to drum up concerns and make people question the safety of driverless vehicles. After all, statistics show that flying is by far the safest way to travel, yet that doesn’t stop people worrying about their next flight in the wake of a high-profile, yet rare, plane crash.