Head-up display of traffic information and weather, as seen by the driver

Researchers have developed an adaptable algorithm that could improve road safety by predicting when drivers are able to safely interact with in-vehicle systems or receive messages, such as traffic alerts, incoming calls or driving directions.

The researchers, from the University of Cambridge working in partnership with Jaguar Land Rover (JLR), used a combination of on-road experiments, machine learning and Bayesian filtering techniques to reliably and continuously measure driver ‘workload’. Driving in an unfamiliar area may translate to a high workload, while a daily commute may mean a lower workload.

The resulting algorithm is highly adaptable and can respond in near real-time to changes in the driver’s behaviour and status, road conditions, road type, or driver characteristics.

This information could then be incorporated into in-vehicle systems such as infotainment, navigation, displays and advanced driver assistance systems (ADAS). Any driver-vehicle interaction could then be customised to prioritise safety and enhance the user experience, delivering adaptive human-machine interactions. For example, a driver would only be alerted at times of low workload, keeping their full concentration on the road in more stressful driving scenarios. The results are reported in the journal IEEE Transactions on Intelligent Vehicles.
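As a rough illustration of this kind of intelligent scheduling (a minimal sketch with assumed names and thresholds, not the system described in the paper), non-critical messages can be held in a queue and released only when the estimated workload drops below a cut-off:

```python
# Minimal sketch: gate non-critical notifications on an estimated
# workload level in [0, 1]. The threshold and class are illustrative
# assumptions, not JLR's production logic.
from collections import deque

LOW_WORKLOAD_THRESHOLD = 0.3  # assumed cut-off for "safe to interrupt"

class NotificationScheduler:
    def __init__(self) -> None:
        self.pending: deque[str] = deque()  # non-urgent messages

    def submit(self, message: str) -> None:
        """Queue a non-critical message (traffic alert, call, directions)."""
        self.pending.append(message)

    def tick(self, estimated_workload: float) -> list[str]:
        """Release queued messages only when workload is low."""
        if estimated_workload <= LOW_WORKLOAD_THRESHOLD and self.pending:
            released = list(self.pending)
            self.pending.clear()
            return released
        return []  # hold everything back during demanding driving
```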

“More and more data is made available to drivers all the time. However, with increasing levels of driver demand, this can be a major risk factor for road safety,” said co-first author Dr Bashar Ahmad from Cambridge’s Department of Engineering. “There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver.”

A driver’s status – or workload – can change frequently. Driving in a new area, in heavy traffic or poor road conditions, for example, is usually more demanding than a daily commute.

“If you’re in a demanding driving situation, that would be a bad time for a message to pop up on a screen or a heads-up display,” said Ahmad. “The issue for car manufacturers is how to measure how occupied the driver is, and instigate interactions or issue messages or prompts only when the driver is happy to receive them.”

There are algorithms for measuring levels of driver demand using eye gaze trackers and biometric data from heart rate monitors, but the Cambridge researchers wanted an approach that could do the same thing using information available in any car: driving performance signals such as steering, acceleration and braking data. It should also be able to consume and fuse unsynchronised data streams with different update rates, including from biometric sensors if available.
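A simple way to fuse such unsynchronised streams is to resample them onto a common clock, holding each signal’s most recent value between updates. The sketch below uses invented signal names and rates purely for illustration; the paper’s actual fusion approach is more sophisticated:

```python
# Hedged sketch: zero-order-hold alignment of unsynchronised signals
# (e.g. steering at 50 Hz, braking at 20 Hz, heart rate at 1 Hz if
# available) onto a shared time base. Names and rates are assumptions.
import numpy as np

def fuse_streams(streams: dict[str, tuple[np.ndarray, np.ndarray]],
                 rate_hz: float) -> tuple[np.ndarray, dict[str, np.ndarray]]:
    """streams maps name -> (timestamps_s, values); returns a common
    time base and each stream zero-order-held onto it."""
    t_start = max(t[0] for t, _ in streams.values())
    t_end = min(t[-1] for t, _ in streams.values())
    t_common = np.arange(t_start, t_end, 1.0 / rate_hz)
    fused = {}
    for name, (t, v) in streams.items():
        # index of the most recent sample at or before each common tick
        idx = np.searchsorted(t, t_common, side="right") - 1
        fused[name] = v[idx]
    return t_common, fused
```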

To measure driver workload, the researchers first developed a modified version of the Peripheral Detection Task to collect subjective workload information during driving in an automated way. For the experiment, a phone showing a route on a navigation app was mounted to the car’s central air vent, next to a small LED ring light that would blink at regular intervals. Participants all followed the same route through a mix of rural, urban and main roads, and were asked to push a finger-worn button whenever the LED lit up red and they felt their workload was low enough to respond.
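A minimal sketch of how such stimulus-response data might be logged and labelled is below; the response window and function names are assumptions for illustration, not the study’s actual parameters:

```python
# Pair each LED stimulus with any button press inside a response
# window. A press suggests the driver judged their workload low; a
# missed stimulus marks a likely high-workload moment.
RESPONSE_WINDOW_S = 4.0  # assumed window, not the study's value

def label_stimuli(led_times: list[float],
                  press_times: list[float]) -> list[tuple[float, bool]]:
    """Return (stimulus_time, responded) pairs."""
    presses = sorted(press_times)
    return [(t_led,
             any(t_led <= p <= t_led + RESPONSE_WINDOW_S for p in presses))
            for t_led in led_times]
```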

Video analysis of the experiment, paired with the data from the buttons, allowed the researchers to identify high workload situations, such as busy junctions or unusual behaviour from a vehicle in front of or behind the driver.

The on-road data was then used to develop and validate a supervised machine learning framework for profiling drivers based on the average workload they experience, together with an adaptable Bayesian filtering approach for sequentially estimating the driver’s instantaneous workload in real time from several driving performance signals, including steering and braking. The framework combines macro and micro measures of workload: the former is the driver’s average workload profile, the latter the instantaneous estimate.
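To make the macro/micro idea concrete, the sketch below implements a generic two-state discrete Bayes filter of the kind the description suggests: a per-driver average profile seeds the prior, and each driving-performance observation updates the instantaneous estimate. All numbers are placeholders; the paper’s transition and observation models are learned from the on-road data:

```python
# Generic discrete Bayes filter over workload states (low, high).
import numpy as np

TRANSITION = np.array([[0.95, 0.05],   # P(next | current = low)
                       [0.10, 0.90]])  # P(next | current = high)

def bayes_filter_step(belief: np.ndarray,
                      likelihood: np.ndarray) -> np.ndarray:
    """One predict-update cycle over (low, high) workload states."""
    predicted = TRANSITION.T @ belief    # predict via state dynamics
    posterior = predicted * likelihood   # weight by the observation
    return posterior / posterior.sum()   # normalise to a distribution

# Macro: the driver's average workload profile supplies the prior.
belief = np.array([0.7, 0.3])  # e.g. a typically relaxed driver

# Micro: each steering/braking observation yields a likelihood vector
# from the learned observation model (placeholder values here).
for likelihood in [np.array([0.8, 0.2]), np.array([0.3, 0.7])]:
    belief = bayes_filter_step(belief, likelihood)
print(belief)  # instantaneous workload estimate after two updates
```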

“For most machine learning applications like this, you would have to train it on a particular driver, but we’ve been able to adapt the models on the go using simple Bayesian filtering techniques,” said Ahmad. “It can easily adapt to different road types and conditions, or different drivers using the same car.”

The research was conducted in collaboration with JLR, which designed the experiments and collected the data, as part of a project sponsored by JLR under the CAPE agreement with the University of Cambridge.

“This research is vital in understanding the impact of our design from a user perspective, so that we can continually improve safety and curate exceptional driving experiences for our clients,” said JLR’s Senior Technical Specialist of Human Machine Interface Dr Lee Skrypchuk. “These findings will help define how we use intelligent scheduling within our vehicles to ensure drivers receive the right notifications at the most appropriate time, allowing for seamless and effortless journeys.”

The research at Cambridge was carried out by a team from the Signal Processing and Communications Laboratory (SigProC) in the Department of Engineering, under the supervision of Professor Simon Godsill. It was led by Dr Bashar Ahmad and included Nermin Caber (a PhD student at the time) and Dr Jiaming Liang.

Reference:
Nermin Caber et al. ‘Driver Profiling and Bayesian Workload Estimation Using Naturalistic Peripheral Detection Study Data.’ IEEE Transactions on Intelligent Vehicles (2023). DOI: 10.1109/TIV.2023.3313419

