Where driverless cars still have some catching up to do
Autonomous driving can work well in precisely defined areas of use. Yet when driverless cars are let loose in the free-for-all of everyday road traffic, difficulties crop up that these vehicles are not prepared to cope with. Three examples.
Most people don’t realize just how much experience and intuition motorists use when driving on the public roads and highways every day. Only when autonomous systems are behind the steering wheel does it gradually become clear what a driverless car driven with the help of artificial intelligence (AI) needs to do to catch up with human drivers.
This is especially true if the car isn’t used in a narrowly defined area of use but is instead required to drive in an open-world scenario in which anything can happen. Here we encounter the long-tail distribution problem of autonomous driving: no matter how many situations the vehicle’s developers have thought of, there remain many scenarios and events that can simply overwhelm the vehicle.
Traffic signs can confuse driverless cars
On the freeway, for example. Road signs are frequently positioned between the actual freeway and the exit lane to indicate a maximum speed of 50 mph. To human drivers, it is clear from the context that the speed limit applies only to the exit lane. For us, judgments of this kind are intuitive. But how does an autonomous vehicle react at this point? The same applies to matching traffic lights to driving lanes.
Roadworks and temporary barriers offer another example, one that occasionally brings even human drivers to the brink of despair. How is a driverless car supposed to decide on the right route through them? This is another task that human drivers ultimately solve intuitively.
And finally, »the classic« event that comes up in discussions about autonomous driving: an unidentifiable object on the road. How is a driverless car supposed to know how to deal with random objects at a moment’s notice? Is it just an empty plastic bag that can be driven over without any issues? Or is it a heavy package that could cause considerable damage to both the vehicle and its surroundings in a collision? In the worst case, it could even be a person, perhaps carrying a large box or wearing a fancy dress costume, who isn’t even recognized as a person.
Two approaches for one problem
These scenarios present problems for self-driving cars that follow the evolutionary approach to autonomous driving, i.e. an approach that builds on level 2 advanced driver-assistance systems (ADAS) and continuously develops the cars through levels 3 and 4 until they reach level 5, namely unrestricted autonomous driving. In this evolutionary approach, the vehicles are, as mentioned above, driving in the »open world«.
The disruptive approach, by contrast, assumes that autonomous driving is fundamentally something other than »simply« an ADAS that needs to be developed further. At this point, the findings from decades of research into artificial intelligence (AI) and robotics come into play. AI applications are realized in »narrow domains«, namely for very precisely and narrowly defined areas of application. Problems with road signs, traffic lights and roadworks are solved by recording information about them on the map for the defined road network in which the vehicles are moving. Changes, such as temporary roadworks and barriers, are coordinated between the fleet operator and the local city or county. In the case of an unknown object on the road, the disruptive approach relies on what is known as »remote guidance«, in which a remote operator views a camera image and possibly other sensor data, and instructs the vehicle accordingly on how it should respond.
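The remote-guidance fallback described above can be illustrated with a minimal sketch. Everything here is hypothetical: the names (`Detection`, `plan_response`, `ask_operator`), the confidence threshold and the maneuver set are illustrative assumptions, not part of any real vehicle stack. The point is only the decision structure: act autonomously when the on-board classifier is confident, otherwise escalate to a human operator.

```python
# Illustrative sketch of a remote-guidance fallback; all names and
# thresholds are hypothetical, not from a real autonomous-driving stack.
from dataclasses import dataclass
from enum import Enum

class Maneuver(Enum):
    CONTINUE = "continue"        # e.g. drive over an empty plastic bag
    STOP = "stop"                # e.g. a person or a heavy package ahead
    CHANGE_LANE = "change_lane"  # steer around the object

@dataclass
class Detection:
    label: str         # classifier's best guess, e.g. "plastic_bag"
    confidence: float  # 0.0 .. 1.0

def plan_response(detection: Detection,
                  threshold: float = 0.95,
                  ask_operator=None) -> Maneuver:
    """Decide how to react to an object on the road.

    If the on-board classifier is confident enough, act autonomously;
    otherwise fall back to remote guidance: a human operator views the
    camera image and returns the maneuver the vehicle should perform.
    """
    if detection.confidence >= threshold:
        if detection.label == "plastic_bag":
            return Maneuver.CONTINUE
        return Maneuver.STOP
    # Low confidence: the safe default is to stop; if a remote operator
    # is reachable, their instruction decides instead.
    if ask_operator is not None:
        return ask_operator(detection)
    return Maneuver.STOP

# A confidently recognized plastic bag is simply driven over; an
# uncertain detection is escalated to the (simulated) operator.
print(plan_response(Detection("plastic_bag", 0.99)))
print(plan_response(Detection("unknown", 0.40),
                    ask_operator=lambda d: Maneuver.CHANGE_LANE))
```

In a real fleet, the operator’s reply would of course arrive over a network link with latency and dropout, which is one reason remote guidance only works inside well-connected, narrowly defined operating areas.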
Both approaches still have a lot of work ahead of them. In other words, self-driving cars have a lot of practicing to do before they can fully match the performance of human drivers. Research at the Fraunhofer Institute for Cognitive Systems IKS supports both approaches to bringing safe autonomous vehicles onto the roads.