We are encouraging questions from readers about electric vehicles, charging, and whatever else you want to learn. So please send them through, and we will get our experts to respond and invite other readers to contribute through the comments section.
Hi Bryce – To future-proof an EV purchase, which models available in Australia have an autonomous mode that can be switched on when the law of the land allows it?
Hi John – you ask an interesting question, although I think I’ll reframe it slightly to ask ‘what is autonomous driving, and is it safe?’
At the end of that explanation, I am hoping you will be able to answer your own question without my help!
To the general public (or for that matter, anyone outside of the auto industry), autonomous driving is likely to be viewed as hopping into a car, telling it where to go and arriving at one’s destination having travelled much more safely than on a road populated with human drivers.
Some people will even go so far as to imagine a car without a steering wheel or pedals, with an interior more reminiscent of a lounge-room than a set of forward-facing car seats.
However, the engineers working on the issue know that currently, no self-driving system is capable of doing this safely in all road and driving conditions.
Consequently, a set of internationally accepted levels has been defined as the stepping stones towards the development of safe, ‘driverless’ travel. These levels are outlined in figure 1.
Note to figure 1.
For the purposes of this article I am using the international six-level SAE model, rather than the US-centric five-level model developed by the NHTSA (National Highway Traffic Safety Administration).
A quick look at figure 1 shows the complexity of the issue – given that cars currently cannot reliably drive themselves, a level of driver vigilance is needed to maintain vehicle safety until the systems are well enough developed to reach at least Level 4.
It is only at Level 4 that drivers can truly ‘tune out’ of the driving process, and even then only in defined use cases. Outside of those (for example, outside geofenced areas, where dirt roads or unmarked country roads may exist) the driver must retake control.
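For readers who think in code, the six levels described above can be summarised as a simple lookup. This is a paraphrased, illustrative sketch for this article only – not official SAE J3016 wording:

```python
# Illustrative summary of the six SAE driving-automation levels.
# The descriptions are this article's paraphrase, not official SAE text.
SAE_LEVELS = {
    0: ("No automation", "human drives and monitors everything"),
    1: ("Driver assistance", "car assists with steering OR speed; human drives"),
    2: ("Partial automation", "car steers AND controls speed; human must stay vigilant"),
    3: ("Conditional automation", "car drives, but human must retake control when alerted"),
    4: ("High automation", "car handles everything, but only within defined use cases"),
    5: ("Full automation", "car handles everything, everywhere, any time"),
}

def driver_can_tune_out(level: int) -> bool:
    """Per the article: only at Level 4 and above can the driver truly 'tune out'."""
    return level >= 4

for lvl, (name, summary) in SAE_LEVELS.items():
    print(f"Level {lvl} ({name}): {summary}")
```

The key break point is between Levels 3 and 4: below it, the human remains the fallback; at or above it, the system is.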
In fact, according to Navigant Research (now Guidehouse Insights) “…with the distinct exception of Tesla, 2019 was the year that most of the automated driving (AD) sector acknowledged that the self-driving problem was turning out to be significantly more difficult than previously believed.”
So where are current driver assistance systems at according to these definitions?
Level 0 is the standard car, unchanged (until recently) in terms of driver assistance for a century or more!
Level 1 includes such features as lane keeping assistance, auto emergency braking and self-parking systems. Most new cars come with at least one of these fitted as standard, and as safety star rating systems evolve, more and more of them are becoming mandatory.
Level 2 is where Tesla’s Autopilot system fits. Autopilot wraps together lane centering, adaptive cruise control, self-parking, automatic lane changes, semi-autonomous navigation (in some freeway situations), plus the ability to summon the car from a garage or parking spot.
Despite the hype, Autopilot is not (yet) beyond Level 2. Also, as will be noted below, Tesla’s design choices may be painting it into a corner when it comes to moving beyond Level 2.
The Audi A8 ‘Traffic Jam Pilot’ system was intended to be the first Level 3 system offered in a production vehicle.
Operating at the push of a button, it was to manage starting, steering, throttle and braking in slow-moving traffic at up to 60km/h on major roads – provided that a physical barrier separated the two carriageways.
At the limits of the system, the driver would be alerted to take back control.
However, due to the regulatory and liability issues it raised, Audi has now scrapped Traffic Jam Pilot, preferring to skip Level 3 entirely and jump to Level 4 when it becomes possible to do so.
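The handover behaviour described for Traffic Jam Pilot can be sketched as a tiny decision function. This is hypothetical logic based only on the article’s description (the 60km/h threshold and divided-carriageway condition), not Audi’s actual implementation:

```python
# Hypothetical sketch of Level 3 'traffic jam pilot' handover logic,
# based on this article's description of the Audi A8 system - not Audi code.

SPEED_LIMIT_KMH = 60  # per the article: engages only in slow-moving traffic

def pilot_state(speed_kmh: float, divided_carriageway: bool,
                driver_responded: bool = True) -> str:
    """Return what the system should be doing given current conditions."""
    if not divided_carriageway or speed_kmh > SPEED_LIMIT_KMH:
        # Outside the defined use case: the driver is alerted to take back control.
        # At Level 3, a driver who fails to respond is the unresolved problem.
        return "driver-in-control" if driver_responded else "handover-requested"
    return "pilot-engaged"
```

Note the weakness this sketch exposes: at Level 3 there is no defined safe outcome when the driver never responds to the handover request – which is exactly the liability gap that reportedly pushed Audi to skip to Level 4.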
Level 4 cars will be the first truly self-driving cars (albeit still with a steering wheel and pedals). At Level 4, even if something goes wrong and the driver fails to intervene, the system is expected to continue manoeuvring safely out of the problem. Level 4, however, only applies to well-defined use cases.
Outside of these, the car will not allow autonomous mode to be selected. The prototype Google/Waymo self-driving cars operate at this level – and Waymo has even just started trialling a small number of Level 4 fully driverless taxis in a geofenced 130 square kilometre area around Phoenix, Arizona (USA).
Level 5 is what most people would see as a true ‘driverless’ car. At this level, the steering wheel and pedals are absent and the vehicle can go anywhere, anytime, with or without a human passenger aboard.
Whilst there are several concept Level 5 cars (such as the VW Group SeDriC or Self-Driving Car), the industry consensus is that they are a long way off becoming commercially available.
This is because autonomous system developers are beginning to recognise that, in reality, humans make driving plans in response to things happening up to several kilometres ahead – well beyond the current range of radar, camera and LIDAR systems.
So where are we at in the path towards Level 5 autonomy?
A closer look at figure 1 shows the proportion of driver involvement versus the car’s rising at Level 2 and effectively swapping over at around Level 3 – i.e. as autonomous systems improve, driver vigilance can be reduced. But how much reduction is too much?
A couple of recent incidents highlight the difficulty of this grey area, where control is shared between an autonomous system and the driver.
The first is the 2016 report of a Tesla Model S on Autopilot failing to distinguish a large white 18-wheel truck and trailer crossing the highway against a bright spring sky.
In that case, the driver who was killed had previously posted on YouTube, after removing his hands from the wheel: “You get to your destination slightly slower but at least now you don’t have to worry about anything. Just let it go.”
In other words, he was treating Autopilot as a Level 4 or 5 system when it clearly wasn’t – and sadly wore the consequences of that misunderstanding.
The second is the 2018 death of Elaine Herzberg, the first pedestrian to be hit and killed by a self-driving car.
Whilst it is difficult to determine the exact autonomy level at which the experimental self-driving Uber was operating, Uber’s long-term plan is to remove drivers from its cars, so there is at least pressure in the tests to operate at Level 4 or even 5.
Assuming Level 4, the Uber system, in being unable to detect a pedestrian crossing the road in the dark, was either operating outside its defined use case or simply not up to the task it was expected to cope with. Either way, Uber has suspended all further self-driving tests, with no plans yet to resume.
Autonomous driving is now generally considered to be a harder nut to crack than originally expected. Due to technical, regulatory and legal issues, no production vehicle (including Tesla) offers anything above Level 2.
Many industry experts also suggest that all three of radar, cameras and LIDAR are needed to make autonomous vehicles reliably safe in all conditions (and some even suggest these may not be enough).
This latter point is interesting, as Tesla may be backing itself into a corner through its insistence that autonomous vehicles need only radar and camera systems. (As a result of this design decision, Tesla cars do not have LIDAR.)
Elon Musk has even gone on the record to say that other manufacturers are on a fool’s errand in adding the expense of LIDAR systems.
In summary: no car currently on the market has anything close to true self-driving capability, and Level 5 autonomy may in fact be a very long way off.
In addition, the sensing and control systems currently enabling Level 2 (and early glimpses of Level 3) autonomy are likely to need a lot more work before they can support truly ‘self-driving’ cars.