Ethical Issues of Autonomous Cars
October 31st, 2016
Whether we like it or not, we are witnessing the birth of a new automotive era. With concern over global warming and environmental awareness at an all-time high, the automotive landscape is changing rapidly.
First, there was the electric car boom. You may think that it has something to do with recent advances in electric motors and batteries, and you'd be right to a certain extent, but the truth of the matter is that electric vehicles have been around for as long as internal-combustion engine cars, if not slightly longer. Why didn't we see any significant improvements in electric vehicles until now? Some would speculate it is because the technology didn't benefit oil companies, and indirectly, the powerful countries exporting oil.
With electric vehicles on the brink of replacing conventional internal-combustion engine cars, significant improvements are expected, but every change inevitably has both positive and negative aspects.
There will eventually come a point where cars become so sophisticated that they'll be able to drive themselves with no human intervention. For example, Tesla's "Autopilot" mode (which is technically still in "beta testing") is nearly able to drive the car itself; it just needs the driver to keep an eye on it and be ready to take over if necessary. At what point exactly does a self-driving car stop being just that, a car, and instead become an artificial intelligence that needs to make life-and-death decisions?
Putting aside the joy driving brings to many people, and the number of enthusiasts who will be disappointed once driving your own car is relegated to race tracks, let's look at it from a different angle.
Currently, the driver is the primary party held responsible for any actions he or she undertakes with their car. For all intents and purposes, it's under their control and ultimately, their responsibility. If they crash or take an innocent life away due to carelessness or substance abuse, it's their fault.
Let's imagine the following situation: a lethal threat appears in your lane, and you must either crash into it, swerve right into a motorcyclist, or swerve left into a large SUV. If there's a person operating the vehicle, what happens really depends on the circumstances. It's a natural reaction over which you have no control; your instincts simply take over and what happens, happens. But what if the car in question is autonomous and there are occupants inside it? How does it choose what's right and what's wrong?
The simple fact of the matter is that it can't, not without assistance at least. It will have been programmed to act one way or another: either protecting its occupants, or swerving into someone else to improve their chances of survival. It's no longer a reaction, but rather a decision. The car calculates the probabilities and outcomes of each possible scenario and does what it's been programmed to do. When you think about it that way, you might say that the outcome of that particular situation was decided months or even years beforehand, when the car was being programmed. It's kind of creepy to think of it that way, isn't it?
But who's to blame in a situation like that? Is it you, the owner? Highly unlikely; you weren't operating the vehicle. Is it the programmer? Not really; the car did exactly what it was programmed to do. That leaves the manufacturer, but we all know they won't take the fall for it. And if a manufacturer shifts responsibility over to the owners and makes them sign a contract stating that they assume liability for everything, is anyone really going to buy a vehicle from them?
Then, how do we decide who lives and who dies? If the car is programmed to protect its occupants, it may deliberately crash into another person or persons to protect its owner. If it's designed to save as many lives as possible at all costs, even at the expense of the occupants, is anyone really going to purchase the vehicle? Competition would rise between manufacturers, with the majority of sales going to those who offer cars that protect their occupants.
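To make the contrast concrete, here is a minimal sketch of the two competing policies described above. Everything in it is hypothetical: the scenario options, survival probabilities, and expected-death figures are invented for illustration and do not reflect any manufacturer's actual logic.

```python
def occupant_first(options):
    # Occupant-protecting policy: pick the option with the best survival
    # chance for the people inside the car, ignoring everyone else.
    return max(options, key=lambda o: o["occupant_survival"])

def minimize_harm(options):
    # Harm-minimizing policy: pick the option with the fewest expected
    # total deaths, even if that sacrifices the occupants.
    return min(options, key=lambda o: o["expected_deaths"])

# Hypothetical numbers for the three-way dilemma from the text.
options = [
    {"name": "brake and hit the obstacle", "occupant_survival": 0.3, "expected_deaths": 0.7},
    {"name": "swerve into motorcyclist",   "occupant_survival": 0.9, "expected_deaths": 0.9},
    {"name": "swerve into SUV",            "occupant_survival": 0.6, "expected_deaths": 0.5},
]

print(occupant_first(options)["name"])  # → swerve into motorcyclist
print(minimize_harm(options)["name"])   # → swerve into SUV
```

The point is that with these made-up numbers, two cars in the identical situation take different lives depending on which policy was compiled into them long before the crash.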
Perhaps some manufacturers will choose to target consumers who believe in the principle of doing the least harm, even if it means that their own lives would be sacrificed for the greater good. On the other hand, I can imagine luxury automotive manufacturers going the opposite route and pledging to save the lives of their well-heeled owners at all costs.
The only real solution to this issue is to make all autonomous vehicles save as many lives as possible, even if it means risking the occupants inside. If we look at the statistics, more than 90% of all crashes are caused by human error, effectively making them avoidable. Therefore, the more autonomous vehicles we have on the roads, the safer it gets. If only self-driving cars are driving on the streets, the number of accidents will plummet. The only unpredictable objects on the road will be pedestrians, bicyclists, children and animals.
However, a small risk will always remain, and the vehicle gets to decide what happens. Robots may never be capable of genuine cognitive thinking, which makes them dependent on information. Self-driving cars rely on road-condition, weather, distance and speed data collected from radar, cameras, LiDAR and other sensors. A robot may hold more data than a human being and solve problems faster, but that's all it can do. Given a new problem for which it doesn't have enough data (or hasn't been programmed to solve), it will freeze, not knowing what to compute. By contrast, a person can react in any situation, whether the decision turns out right or wrong.
Imagine the following situation: a sudden obstacle appears in the middle of the road, blocking off your lane. For a human, the solution is simple. If there are cars behind, they will swerve around the obstacle and avoid it, even if it means crossing a double yellow line and breaking the law. For the computer, it might not be so easy. If we have law-abiding autonomous cars, crossing the line and swerving may not be among its options (or in its memory). Its only other option will be to perform a sudden stop, possibly causing the cars behind to crash into it. This is just an example, but you get the idea. Because a computer can't think "outside the box," so to speak, it will never be able to make all the decisions a human can.
One possible solution is to implement moral principles into the computer's algorithm, rather than purely logical ones, to make it behave more like a human would. Easier said than done, we know. But what if it has to choose between swerving into a biker wearing a helmet and one without? Does it go for the one with the helmet because he has better chances of surviving? In that case, we're effectively penalizing the law-abiding rider unfairly. Likewise, what if it swerves into the one not wearing a helmet? The car would then be dealing out street justice, becoming something it isn't. In both cases, it has to "target" someone, and that's really the biggest issue in all of this.
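The helmet dilemma can be sketched in the same way. This is again purely illustrative: the fatality probabilities are made-up assumptions, and the point is only to show how a naive "minimize expected harm" rule ends up penalizing the helmeted rider.

```python
def expected_fatality(wearing_helmet):
    # Assumed fatality probabilities, for illustration only:
    # a helmeted rider is more likely to survive an impact.
    return 0.4 if wearing_helmet else 0.8

riders = [
    {"id": "rider with helmet", "helmet": True},
    {"id": "rider without helmet", "helmet": False},
]

# A harm-minimizing car "targets" whichever rider is likelier to survive,
# which here means the law-abiding rider who wore a helmet.
target = min(riders, key=lambda r: expected_fatality(r["helmet"]))
print(target["id"])  # → rider with helmet
```

Flipping the rule to spare the helmeted rider just reproduces the "street justice" problem from the other direction; no choice of `key` function makes the targeting go away.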
How do we make autonomous cars behave as if human drivers were operating them in dangerous, reaction-based situations? Perhaps we aren't supposed to. If accidents really do decrease by a massive 90% in a fully autonomous world, maybe that's all it takes. With far fewer casualties, we may come to accept that it is the cars, not us, deciding who lives.
Ask yourself this question: would you ever get into your self-driving vehicle, fully entrusting it with your life, knowing full well that it's designed to save as many lives as it can rather than just protect you at all costs?
As a person living with multiple sclerosis, which might take me out of the driver's seat much earlier in life than my parents or grandparents, I'm probably much more willing to take that risk than the average person.
I'm looking forward to discussing these issues with the automotive manufacturers and other self-driving car enthusiasts at CES 2017, January 5th to 8th in Las Vegas, Nevada.
Copyright © PaulsTravelPictures.com