Responsibility in the Age of Driverless Automation

By Justin Schaefer

Self-driving cars are coming, and they will almost certainly become the predominant mode of transportation on our roads within the next few decades. The first driverless cars hit the streets in 2015, and they are already gaining traction in controlled areas. For instance, NuTonomy plans to introduce driverless taxis in Singapore in 2018, and in 10 other cities around the world by 2020.

Industry Giants Taking on the Driverless Innovation Challenge

Major players in the car market such as Volvo, Ford, Tesla, GM, and even Google are all making great strides in driverless innovation. Google and Ford, for instance, agree that self-driving cars need to be fully autonomous rather than a hybrid of human and robot control, because drivers begin to trust the technology and stop paying attention. As we literally hand the wheel over to robots, it becomes clear that new conversations, and new laws, about safety and responsibility will be needed.

The US Energy Information Administration predicts that electric vehicles will make up 8 percent of the US market by 2025, and we can expect those models to offer integrated self-driving systems as standard options relatively soon.

Even before then, however, aftermarket self-driving systems will be available for purchase, and the combination of newer technology, price differences, and the varying efficacy of these systems will raise standardization and even legal issues (consider, for instance, the legal questions raised by the fatal Tesla accident in May 2016).

Can Man and Machine ‘Coexist’ Safely on the Same Road?

During this period, when manually driven and automatically driven cars of many different types and grades share the same streets, interactions between the two groups are bound to be less than cordial. Early in their development, driverless cars had higher accident rates than conventional cars, despite being touted as the ultimate in safe driving. The programming of these vehicles simply cannot handle the limitless, sometimes chaotic variables seen on the road. Problems are bound to arise when an algorithmically driven car encounters the decidedly un-algorithmic driving style of humans.

For a portent of things to come, one need only look to the local freeway on-ramp and the long-haul trucker trying to merge into traffic. Today, truckers in heavily congested areas routinely have to force their way into a lane by easing over slowly and leaving it to the ‘4-wheelers’ to stop in time or get out of the way. This happens because drivers choose not to let the truck in. It is safe to say that an automatically driven truck would not be programmed to merge so forcefully. It is also safe to say that for every one person kind enough to slow down and let the truck merge, there are thirty with less charity in their hearts. So our lamentable truck sits and waits, traffic backing up behind it, until someone comes along to relieve the situation.

How Will Driverless Cars Make Decisions?

This is just one example, and we can all bet there will be many more instances of uncharted robot-human interaction. The kinks in these systems will inevitably be worked out, and as machine learning and artificial intelligence come into their own, the problems will become less common.

Before that happens, though, I believe driving algorithms should be made public, and the public should be trained on how driverless cars will react in any given situation. For example, the ethical dilemma of whether a self-driving car should prioritize the safety of a pedestrian or of its passenger remains unresolved. But once that decision is made in the courts, people need to be aware of it so they can plan accordingly.

Will an out-of-control car on an icy road drift and hope to regain traction, or will it work its way to the side of the road as quickly as possible to arrest its momentum? And what of the man walking his dog on the sidewalk, who is struck when the car finally mounts the curb? If the vehicle is programmed to swerve immediately and forcefully, that dog walker should know it is a real possibility. If the car is programmed to stop correcting and simply drift until traction is restored, then the cars ahead of it should be aware of that scenario. Either way, knowing what will happen in any given situation might not prevent accidents, but it could save lives by making other motorists, and non-motorists, more prepared.
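If such policies were ever published, even a simplified summary could be expressed as an inspectable decision rule that anyone can read and reason about. The sketch below is purely hypothetical: the function name, inputs, and thresholds are all invented for illustration and do not reflect any manufacturer's actual algorithm.

```python
# Hypothetical illustration only: a published, human-readable policy for
# responding to a loss of traction. Every name and threshold here is
# invented; no real vehicle's logic is this simple.

def traction_loss_response(pedestrians_on_curbside: bool,
                           speed_mph: float) -> str:
    """Return the maneuver a (hypothetical) vehicle commits to on ice."""
    if pedestrians_on_curbside:
        # Never steer toward an occupied sidewalk.
        return "drift_until_traction"
    if speed_mph > 45:
        # At high speed, added steering input tends to worsen the skid.
        return "drift_until_traction"
    # Otherwise, ease toward the shoulder to shed momentum.
    return "swerve_to_curb"

print(traction_loss_response(False, 30))  # swerve_to_curb
print(traction_loss_response(True, 30))   # drift_until_traction
```

The point is not the rule itself but its transparency: a dog walker reading this could see exactly when the car would head for the curb, which is the kind of awareness the article argues for.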

Driverless cars and trucks are here, and as they are being tested, it is up to us to know how to react in situations that might be out of the car’s control. It is up to the programmers to give us that information.
