AI-driven cars (also known as driverless cars or self-driving cars) will revolutionize transportation in the 2020s. They will be far safer and faster than human-driven cars. They will ease urban congestion and save humans billions of hours spent steering.
Google, Apple and a host of other companies – including traditional auto manufacturers seeing the writing on the wall – are already experimenting with driverless cars. Cameras and radar enhance cars’ existing arrays of electronic sensors. The core challenge now is software development. How should a self-driving car interpret and act on its sensory inputs?
One way to interpret and act is to write a lot of “if-then” rules. If the car receives inputs A, B and C, then it does X, Y and Z. If the car receives inputs A, B and D, then it does X, W and Y. Continue until done. Only that won’t work, not by a long shot.
To appreciate why not, let’s do some simple math. Suppose we manage to filter all essential inputs down to a 100 x 100 screen, with each cell reading either on or off. The number of distinct combinations is 2^10,000, or roughly a 1 followed by more than 3,000 zeroes. For comparison, the observable universe is thought to contain fewer than 10^85 fundamental particles.
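As a quick sanity check on that arithmetic, here is a short Python sketch. The 100 x 100 binary grid is the hypothetical input screen from the paragraph above, not any real sensor layout:

```python
# Hypothetical 100 x 100 binary input grid from the text: each of the
# 10,000 cells reads on or off, so the number of distinct input states
# is 2**10000.
GRID_CELLS = 100 * 100

combinations = 2 ** GRID_CELLS

# Count the decimal digits of 2**10000 -- i.e., roughly a 1 followed by
# more than 3,000 zeroes, dwarfing the ~10**85 particles thought to be
# in the observable universe.
digits = len(str(combinations))
print(digits)  # 3011
```

No rule table could ever enumerate that many cases, which is the point of the paragraph above.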
Instead, a self-driving car needs to recognize patterns, and to link those patterns to measures of safe and efficient response – more precisely, to some “this seems safe or efficient with X% probability” conjectures. Both recognition and linkage involve learning. In early AI, humans would feed computers useful patterns and let the computers work out core linkages. Thanks to a new technique called Convolutional Neural Networks (CNNs), computers are starting to identify the patterns for themselves.
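To make the idea of pattern recognition concrete, here is a minimal pure-Python sketch of the convolution step at the heart of a CNN: slide a small filter over a grid of readings and record how strongly each patch matches the filter’s pattern. The grid and filter below are toy data invented for illustration, not anything from a real driving system:

```python
# Minimal sketch of a convolution: slide a small filter over a grid and
# record how strongly each patch matches the filter's pattern.

def convolve2d(grid, kernel):
    """Valid-mode 2D convolution (cross-correlation) in pure Python."""
    gh, gw = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(gh - kh + 1):
        row = []
        for j in range(gw - kw + 1):
            s = sum(grid[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Toy "camera" frame: dark on the left, bright on the right (a vertical edge).
frame = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# A filter that responds to a dark-to-bright transition.
edge_filter = [[-1, 1]]

response = convolve2d(frame, edge_filter)
print(response)  # each row reads [0, 1, 0]: the filter fires only at the edge
```

A real CNN stacks many such filters in layers and, crucially, learns the filter values from data rather than having a human specify them, which is the shift the paragraph above describes.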
What our future AI drivers most need now is millions of hours of real-world driving experience, to work out which patterns to recognize and which responses are probabilistically best. They especially need unexpected, unplanned experience: the passing truck spraying water in a storm, the little kids chasing balls into the street, the approaching motorist racing through a red light, the policeman waving a detour.
Naturally, people are fearful of others’ tests. That’s not a novel problem with cars. Horseless carriages encountered similar resistance when they first entered streets full of horse-drawn carriages. Despite many tragic accidents, the tests continued, as human drivers liked their cars and asserted their rights to drive them. Despite their negatives, horseless carriages were far more efficient and produced far less waste than the horses they replaced.
Compared to early horseless carriages, early AI-driven cars have far fewer defects in either hardware or guidance. Yet they’ve had a much harder time getting approved for large-scale testing. Why? The main difference is that AI beings aren’t treated as citizens; there are no rights they can assert. Most humans still regard that as a good thing, and I won’t argue the case here. Rather, I want to highlight two regulatory consequences. First, AI-driven cars are often required to carry backup human drivers. Second, AI-driven cars are often required to prove they’re safe, either as an explicit precautionary principle or implicitly through exorbitant liabilities for accidents.
Requiring AI-driven cars to carry backup human drivers is like requiring horseless carriages to cart along a horse. An AI-driven car will leave its human backup bored stiff most of the time, unlikely to snap to attention when needed, and unlikely to quickly realize what to do. What AI-driven cars need instead is to have their own AI backups, who constantly monitor for systemic failures and implement overrides immediately. If human backup is needed, let it be a disciplined, well-trained team at a central control center.
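The AI-backup idea above amounts to a watchdog pattern. Here is a hedged Python sketch under invented assumptions; the class names, the heartbeat mechanism, and the timeout are all illustrative, not any vendor’s actual API:

```python
# Hypothetical watchdog sketch of an "AI backup": monitor the primary
# driving system's heartbeat and engage a safe fallback (e.g., a
# controlled stop) the moment the primary goes silent.
import time

class PrimaryDriver:
    """Stand-in for the main driving system; reports a heartbeat."""
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

class Watchdog:
    """Backup monitor: overrides if the primary misses its heartbeat."""
    def __init__(self, primary, timeout=0.5):
        self.primary = primary
        self.timeout = timeout
        self.override_engaged = False

    def check(self):
        # If the primary hasn't reported within `timeout` seconds,
        # engage the override immediately -- no bored human required.
        stale = time.monotonic() - self.primary.last_heartbeat > self.timeout
        if stale:
            self.override_engaged = True
        return self.override_engaged

primary = PrimaryDriver()
dog = Watchdog(primary, timeout=0.05)
assert dog.check() is False   # fresh heartbeat: no override
time.sleep(0.1)               # primary goes silent
assert dog.check() is True    # stale heartbeat: backup takes over
```

Unlike a human backup, such a monitor never gets bored, and escalation to a trained central team can be its fallback rather than its front line.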
As for the precautionary principle, AI-driven cars aren’t yet fully safe, may never be fully safe, and can’t prove their degree of safety for years to come. Demanding a nonexistent proof won’t make them safer. All it accomplishes is to delay technological progress, which in turn condemns even more people to injury and death in human-driven accidents.
In most of Europe, the biggest single obstacle to testing AI-driven cars is the Vienna Convention on Road Traffic, as it requires cars to carry human drivers. In the US, with its fetish for tort trials, the main obstacle is financial liability. Nevertheless, testing is making headway. The US Department of Transportation recently announced seven finalists for a “Smart City” challenge. It will award USD 40 million to support the integration of AI-driven or AI-assisted cars into local transportation networks. In the UK, which never signed the Vienna Convention, the cities of London, Coventry, Bristol and Milton Keynes are experimenting with AI-driven cars. Singapore has partnered with MIT to test AI cars.
Since Bulgaria is a long Vienna Convention drive from the centers of AI-driven innovation, inertia will leave it a laggard. To leap ahead, Bulgaria needs to remake itself quickly into the best public AI-driven testing ground in the world. Useful steps include:
1. Declare the Vienna Convention clause on human drivers inapplicable in Bulgaria, and legally shield and insure companies against associated tort exposure.
2. Form a blue-ribbon commission of domestic and foreign experts in AI, transportation and urban planning to set clear standards for allowing a company to test AI-driven cars in Bulgaria and for monitoring safety.
3. Establish a clear protocol for investigating accidents, for publicizing discoveries, for assigning and disputing fault, and for requiring specific payouts for property damage, injuries and deaths from accidents.
4. Limit criminal liability to obfuscation of known risks or failure to implement agreed measures, not to “should have known this might happen” risk.
5. Invite every major AI-driving venture in the world to visit Bulgaria, to share its wish list of testing needs, and to connect with authorities who might assist.
6. Host international competitions of AI-driven cars in Sofia, demonstrating flexibility and safety as well as speed, and make them holiday spectacles.
One thing I haven’t mentioned is showering AI-driven car makers with money. Governments aren’t good at picking winners, and showers of free money attract the wrong crowds. However, I don’t think Bulgaria should impose big entrance fees either. Its big payoffs will come from the local jobs and training that the testers provide, the improvements they bring to Bulgarian traffic networks, and the inspiration to other AI-related ventures in Bulgaria.
By Kent Osband