The EU needs independent oversight of self-driving technologies, argues Antonio Avenoso.
Antonio Avenoso is executive director of the European Transport Safety Council (ETSC), a Brussels-based non-profit organisation dedicated to reducing the number of deaths and injuries in transport.
A few weeks ago my laptop decided, of its own accord, to run an automated update just minutes before I was set to give a presentation. Two hours later, after I had survived the ordeal with a borrowed machine, a colleague put things into perspective with that favourite aphorism of modern office life: ‘nobody died’.
Quite so. It is a rare thing for a computer malfunction to be fatal.
But our societies are now on the verge of putting computers in charge of cars, vans and lorries that drive in our cities – among cyclists, pedestrians and other road users – and take life-or-death decisions on our behalf.
And carmakers, in the absence of regulatory guidance, are already making fundamental choices that will decide what happens next.
In a recent interview for the US magazine and website Car and Driver, a Mercedes-Benz executive in charge of automated systems said a Mercedes automated vehicle’s ‘first priority’ would be to save its occupants.
Whether one agrees or disagrees with this ethical choice, it simply shouldn’t be up to carmakers to decide.
A fatal crash last summer in Florida is another case in point. A Tesla owner, reportedly driving his vehicle in semi-automated ‘Autopilot’ mode, crashed into the underside of a large truck that was crossing the highway in front of him.
Several facts of the case are disturbing. First, the car was reportedly exceeding the speed limit at the time. Automated systems should not be able to break laws that are in place for safety reasons.
Second, the car’s automated system was activated even though the car was not driving on a suitable road. Tesla’s system was apparently unable to cope with a large white truck crossing a highway on a sunny day in Florida – surely a rather common occurrence. As with the ‘occupant first’ rule set by Mercedes, Tesla is currently deciding for itself what rules apply in every conceivable situation.
The risk is of a kind of lawless Wild West for the early years of automated cars, not unlike the early years of motoring itself – before speed limits, traffic lights and driving licence tests started to set the rules of the road. This could be a disaster, and not least for the nascent industry.
A likely outcome is that in a few years, if independent regulation and step-by-step approval of automated systems are not in place, a number of high-profile deaths caused by automated vehicles will so horrify and appal the public that the vehicles will be withdrawn from use. Rebuilding trust could be a huge challenge.
Regardless of the overall likelihood that deaths could eventually go down as computers gradually remove human error and recklessness from driving, a small number of so-called ‘false positives’, where the vehicle makes an error and causes a fatal collision, could devastate the entire industry overnight.
Fears of automotive killing machines would be felt in much the same way as terrorism: something to be stopped at any cost.
What’s needed is a step-by-step approach, starting with approvals for systems that have been proven to work in specific scenarios, such as motorways without cross junctions or roadworks.
In Europe, it should be national governments, together with the European Union, that set the rules, oversee testing and independently investigate collisions. The current regulatory environment is not set up for any of these tasks in the vastly more complex world of automated cars. It’s time Europe woke up to the risks, as well as the opportunities, of automation.