RMIT University has been expanding its resources for researching autonomous cars. The issue is, the researchers never got anywhere near an actual autonomous vehicle, despite many being available globally.
Research led by RMIT University looked at what happens if a driver is suddenly required to take control of an automated vehicle, such as in an emergency.
In a press release today, they say early data suggests three types of distraction (work, social media and rest) impacted drivers’ ability to respond.
Study lead author Dr Neng Zhang, from the School of Engineering, takes this data and concludes that authorities need to begin drafting policies to regulate the responsible use of automated vehicles before Level 3 and Level 4 automated vehicles appear on Australian roads.
In early 2022, transport and infrastructure ministers approved the national in-service safety framework for automated vehicles in Australia.
Commonly, the levels of vehicle automation come from the Society of Automotive Engineers, whose J3016 standard identifies six levels, as follows:
- Level 0: No Driving Automation
- Level 1: Driver Assistance
- Level 2: Partial Driving Automation
- Level 3: Conditional Driving Automation
- Level 4: High Driving Automation
- Level 5: Full Driving Automation
The standard is a fairly lengthy and dry document, so a graphic was developed (and updated in 2021) to help more people understand the definitions.
As it turns out, even this was complicated, and in February 2022 Australia’s National Transport Commission released a regulatory framework for automated vehicles in Australia which included these definitions of the higher levels of autonomy:
- Level 3 vehicles: the ADS undertakes the entire DDT within its operational design domain (ODD, defined below). When the ADS is driving, the human operator does not have to monitor the driving environment or the driving task but must be receptive to ADS requests to intervene and any system failures.
- Level 4 vehicles: the ADS undertakes the entire DDT within its ODD. When the ADS is driving, the human operator is not required to monitor the driving environment or the driving task, nor are they required to intervene because the ADS can bring the vehicle to a safe stop unassisted.
- Level 5 vehicles: the ADS undertakes all aspects of the DDT and monitoring of the driving environment. The ADS can operate on all roads at all times. No human operator is required.
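The key distinction in these definitions can be captured in a few lines of code. This is my own illustrative sketch of the framework above (the class and field names are mine, not the NTC’s): only Level 3 hands control back to a human mid-drive.

```python
# Illustrative encoding of the NTC/SAE distinctions quoted above.
# The flags mirror the quoted definitions; the structure itself is my assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    human_must_monitor: bool  # must the human watch the road while the ADS drives?
    human_is_fallback: bool   # can the ADS issue a takeover request mid-drive?

SAE_LEVELS = [
    AutomationLevel(3, "Conditional Driving Automation", False, True),
    AutomationLevel(4, "High Driving Automation", False, False),
    AutomationLevel(5, "Full Driving Automation", False, False),
]

def takeover_possible(level: int) -> bool:
    """Only Level 3 relies on the human as a fallback."""
    return next(l.human_is_fallback for l in SAE_LEVELS if l.level == level)

print(takeover_possible(3))  # True  — the scenario the RMIT study simulates
print(takeover_possible(4))  # False — the ADS must bring itself to a safe stop
```

That `human_is_fallback` flag is exactly why the takeover-request scenario only makes sense at Level 3.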
With Level 5 vehicles not requiring any human intervention, the RMIT research focused on Levels 3 and 4, where a vehicle could reach the end of its capabilities and require the human to take over.
While the SAE levels of autonomy were good as originally designed back in 2014, autonomy is playing out very differently in practice.
In the US, robotaxi services are already in operation, with Waymo and Cruise offering driverless rides to passengers. Both use cars that still contain a steering wheel and pedals, but riders are prevented from entering the driver’s seat, so they physically couldn’t take over even if they had to. If a car gets into trouble, it calls for remote assistance, which means either a remote operator resolves the situation or support staff attend the site, get into the driver’s seat and manually drive the car.
Tesla’s Full Self Driving Beta software is also very capable, but as it stands today it requires a driver to supervise the entire journey, with most of the drive handled by the vision-only system. Because it relies on a human as a backup at all times, it is officially classified as Level 2, but with a software update it could leap to a Level 3, 4 or even 5 system.
When it comes to China, there’s Baidu’s Apollo, Pony.ai, WeRide, AutoX, DeepRoute.ai and many more with autonomous offerings, while OEMs like Xpeng and NIO are also climbing the AV ladder, and nothing falls neatly into the levels mentioned above.
This means we should not rely on these level systems to create legislation, beyond the simplest requirement: when handing control back to the driver, do so as early as possible and use every technique available to get the driver’s attention back on the road.
If an autonomous system can’t monitor the driver and their reaction to ensure it is adequate to respond in time, then the car is setting them up for failure.
A study of distraction
Now for the actual testing, which I think rests on really flawed assumptions. At no time during this study did anyone involved actually get into an autonomous or semi-autonomous vehicle. Instead, the study was conducted in a virtual environment, where responses can be repeatedly tested and measured, but what you don’t get in the lab are real responses from humans in potentially dangerous situations.
This is effectively a video game where the consequence of getting it wrong isn’t writing off your car, a car you’ve spent tens of thousands of dollars on, but rather a digital crash that is easily reset. Humans inherently know this, which I think calls the results into question.
Using a Level 3 automated vehicle simulation, the researchers tested participants’ speed and effectiveness in taking over the vehicle in the event of an emergency.
The cross-disciplinary research team brought together RMIT expertise in human body vibration, automotive engineering and cognitive psychology from the School of Engineering, School of Health and Biomedical Sciences and School of Science.
Biomedical researcher and author of the papers, Professor Stephen Robinson, warned that emergencies require a high level of cognition.
Young drivers struggle with emergency takeovers
In addition to distractions, the study looked at the experience of drivers with a focus on young people.
For me, this sounds like typical distracted driving issues, not at all isolated to autonomous vehicles or responses to realistic warning signals from autonomous vehicles requesting the driver’s attention.
Last week, I had the chance to review the Audi RS e-tron GT, and when you didn’t pay attention for more than ~10 seconds, you’d get a visual warning (an orange hands-on-the-wheel icon) and an audible ding to get your attention. Pushing past this, if you still didn’t take control (put torque into the wheel), the car then showed a red graphic, played a louder sound and tugged repeatedly on the seatbelt to get your attention. Where was this kind of system represented in their study? It wasn’t.
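The escalation described above is essentially a tiered state machine. Here’s a minimal sketch of how such a system could be structured; the thresholds, stage names and function are my assumptions for illustration, not Audi’s actual implementation.

```python
# Hypothetical sketch of a tiered driver-attention escalation system.
# All thresholds and names are assumptions, not Audi's real logic.
from enum import Enum

class Stage(Enum):
    NONE = 0     # driver is attentive
    VISUAL = 1   # orange hands-on-wheel icon
    AUDIBLE = 2  # red graphic plus louder sound
    HAPTIC = 3   # repeated seatbelt tugs

def escalation_stage(seconds_hands_off: float) -> Stage:
    """Map time without steering torque to a warning stage (assumed thresholds)."""
    if seconds_hands_off < 10:
        return Stage.NONE
    if seconds_hands_off < 15:
        return Stage.VISUAL
    if seconds_hands_off < 20:
        return Stage.AUDIBLE
    return Stage.HAPTIC

print(escalation_stage(12))  # Stage.VISUAL — first, gentle nudge
print(escalation_stage(25))  # Stage.HAPTIC — last resort before disengaging
```

The point is that production cars already layer visual, audible and haptic cues before giving up on the driver, and none of that layering appears in the study.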
The paper, “Is driving experience all that matters? Drivers’ takeover performance in conditionally automated driving” (DOI 10.1016/j.jsr.2023.08.003), with lead author Neng Zhang, was published in the Journal of Safety Research this month.
The test was done using the York Driving Simulator Software (terrible graphics by the way). This was used to simulate a two-lane straight highway with only the simulated car driving on it.
The latest version of the York driving simulator software can simulate autonomous driving, including a takeover request scenario; however, that function did not exist when this study was conducted, so it wasn’t used.
Instead, the autopilot mode of York driving simulator was programmed to simulate autonomous driving. This function allowed the software to simulate cruising on a straight highway at a constant speed. This condition was not changed unless there was interference from the driver.
The simulated car was in auto-pilot mode, cruising in the highway’s left lane at 110 km/h before the takeover request occurred. The takeover involved the avoidance of a stationary vehicle. A stationary vehicle suddenly appeared in front of the simulated car with the time to collision being four seconds.
The results show, particularly in figure 5d, that the average reaction time was around 2.5 seconds, followed by up to another second to perform an emergency lane change to avoid the collision, for a total time of around 3.5 seconds.
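A quick back-of-envelope check puts those numbers in perspective. This is my own arithmetic from the figures quoted above (110 km/h cruise speed, 4-second time to collision, 2.5 s reaction plus 1 s manoeuvre), not a calculation from the paper.

```python
# Back-of-envelope check of the takeover scenario (my arithmetic, not the paper's).
speed_kmh = 110
speed_ms = speed_kmh / 3.6             # ≈ 30.6 m/s
ttc = 4.0                              # seconds to collision at the takeover request
gap = speed_ms * ttc                   # ≈ 122 m to the stationary vehicle

reaction = 2.5                         # average reaction time (figure 5d)
maneuver = 1.0                         # up to a second for the lane change
margin = ttc - (reaction + maneuver)   # ≈ 0.5 s to spare

print(f"gap ≈ {gap:.0f} m, margin ≈ {margin:.1f} s")
```

In other words, participants covered roughly 107 of the 122 available metres before completing the manoeuvre, with only about half a second of margin, which supports the view that these results aren’t too bad given how distracted they were asked to be.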
Given the participants were asked to fully engage in working on a laptop, watching entertainment on a laptop or effectively sleeping (aka a resting condition), these results don’t seem too bad.
Today, we don’t have any vehicle in Australia that allows you to take your eyes off the road, lets the car drive, then hands control back to you, so these use cases don’t yet arise here. We have seen Mercedes-Benz offer a Level 3 driving experience in other markets, but it was compromised, limited to a top speed of 40 mph (about 64 km/h). At that speed you have more time to react, and given the system only worked on highways, there’s also a lot less that can go wrong within those guard rails.
With a road toll that continues to hover around similar numbers each year, Australia should run towards autonomy and make sure the right checks and balances are there to make it safe. But let’s not put artificial constraints on automakers to solve this: we need safer roads, and that should be something we can all agree on.