Community Research and Development Information Service - CORDIS

Final Report Summary - HFAUTO (Human Factors of Automated Driving)

Road transport is an essential part of society, but the burden of crashes, congestion, and pollution is enormous. Automated driving has the potential to resolve these problems. However, before automated driving can be deployed, we have to address pressing human factors questions regarding safety, human-machine interfaces (HMI), driver state assessment, and traffic flow efficiency.
In the HFAuto project, experiments in driving simulators and on the road were performed to investigate human interaction with various levels of automation, including full-range adaptive cruise control (ACC), SAE level 2 automation with drivers monitoring the automation, and SAE level 3 automation allowing drivers to take their eyes off the road. Studies focussed on passenger cars but also included truck platooning.
HFAuto bridged the gap between engineers and psychologists through a multidisciplinary research and training programme. HFAuto trained 13 Early Stage Researchers (ESRs) and 1 Experienced Researcher (ER), clustered in five synergistic work packages (WPs), learning and collaborating by means of secondments in the automotive industry, road safety institutes, and academia. Three HFAuto ESRs obtained their PhD degrees in 2017, and the other ESRs are expected to complete their PhDs in 2018.

Results show that human capabilities during platooning and during transient manoeuvres are intricately linked: when transported in a platoon, drivers gradually exhibit reduced workload, task engagement, and situation awareness. During a transition from automated to manual driving, drivers must regain alertness, task engagement, and situation awareness as quickly as possible. One of our animation-based experiments suggests that participants need about 10 seconds to judge how many cars are in the vicinity, but require more time to estimate the relative speeds of these surrounding road users.

Visual support systems were designed using augmented reality, supporting users of automation in take-over requests (TORs). These interfaces improved task completion in terms of braking and lane changes after receiving a TOR. However, even with a substantial time budget of 12 s, the best guiding interface still resulted in unsafe lane changes in ~16% of cases in the condition where braking was the desired response.
Blindfolded driving was used as an ‘ultimate’ test for auditory interfaces guiding the steering task. Experiments showed that auditory directional support was effective in a lane-keeping task.
A vibrotactile seat was developed that could present spatiotemporal patterns using 48 motors in the seat bottom and seat back. It was found that complex patterns could not be reliably recognized in take-over situations. However, simple seat vibration TORs elicited fast and robust reaction times, similar to responses to auditory TORs, and faster than responses to visual-only TORs. Bimodal vibrotactile and auditory TORs elicited slightly quicker reaction times than their unimodal constituents. Auditory and vibrotactile TORs were rated as useful, whereas visual-only TORs were not. In short, auditory and vibrotactile stimuli can be used to convey warnings, and visual interfaces can be used to convey more complex messages, like trajectory advice.

Literature reviews on driver state monitoring (DSM) and psychological vigilance in driving made clear that results of laboratory-based vigilance studies (the ‘Mackworth Clock’) do not directly generalize to the open road, because driving is a complex task. DSM systems on the market for partially/conditionally automated driving were found to be limited: they are reactive, non-situated, and only beginning to make use of cameras.
Eye tracking, despite inherent complications and challenges, is promising for DSM in automated vehicles. In laboratory experiments, eye tracking measures related to specific driving scene properties were correlated with subjective driving difficulty. In a quasi-naturalistic on-road experiment, the eye movements of in-the-loop active drivers could be discriminated from those of passive passengers. From naturalistic data with adaptive cruise control (ACC), it was found that drivers were already looking at the road at the onset of a critical situation: they anticipated the lead-vehicle conflict before the ACC issued a forward-collision warning (FCW). Visual cues (e.g., brake lights) and deceleration cues were found to be relevant for capturing driver attention to the forward path in anticipation of the threat.
Based on empirical data, the last step was to progress towards the simulation of drivers’ visual scanning, decision making and behaviours with the cognitive driver model COSMODRIVE.

WP4 focussed on automation use, monitoring of SAE level 2 automation, and transitions of control. Whereas other studies often focus on “average” responses, WP4 also analysed the variance within and between drivers, and explained this variance as a function of task and driving condition.
Based on an on-road experiment, a driver behaviour model was developed describing the use of full-range ACC, including the probability of ACC deactivation, ACC overruling via the gas pedal, and ACC target speed regulation as a function of perceived risk and task difficulty, characteristics of the freeway segment, and driver characteristics. This continuous-discrete choice driver model can be implemented in microscopic traffic simulations to forecast the effects of automation on traffic flow.
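As a minimal illustration of how such a continuous-discrete choice model can be structured, the sketch below combines a multinomial logit over the driver's discrete ACC actions (keep, overrule, deactivate) with a continuous choice of target speed. All function names and coefficients are hypothetical placeholders, not the HFAuto model estimates.

```python
import math

# Hypothetical sketch of a continuous-discrete choice driver model for
# ACC use. The discrete part is a multinomial logit over the driver's
# possible actions; the continuous part sets the ACC target speed.
# All coefficients are illustrative placeholders, not HFAuto estimates.

def choice_probabilities(perceived_risk, task_difficulty):
    """Logit probabilities of the driver's discrete ACC actions."""
    utilities = {
        "keep_acc":   0.0,                         # reference alternative
        "overrule":   0.8 * perceived_risk - 0.5,  # override via gas pedal
        "deactivate": 1.2 * perceived_risk + 0.6 * task_difficulty - 2.0,
    }
    denom = sum(math.exp(u) for u in utilities.values())
    return {action: math.exp(u) / denom for action, u in utilities.items()}

def target_speed(speed_limit, task_difficulty):
    """Continuous part: chosen ACC set speed (km/h) drops as the
    driving task gets harder, with an assumed 60 km/h floor."""
    return max(60.0, speed_limit - 10.0 * task_difficulty)

probs = choice_probabilities(perceived_risk=0.9, task_difficulty=0.5)
print(probs)
print(target_speed(speed_limit=120.0, task_difficulty=0.5))
```

In a microscopic traffic simulation, such a model would be evaluated each time step per simulated driver, with perceived risk and task difficulty derived from the surrounding traffic state.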
An on-road study with a level 2 automated consumer vehicle showed that an additional task interfered more with level 2 automated than with manual driving.
Methods have been developed to assess attention and workload using event-related potentials (ERPs) in brain activity. Such techniques, along with other DSM methods such as eye tracking, can support the further development and validation of driver models including inferred driver states such as driver attention and workload. These can result in models and driver state monitoring systems capturing “driver readiness” in transitions of control.
A literature review and meta-analysis showed that many aspects of the driver, the vehicle, the situation, and the experimental methodology can influence driver take-over time (TOT). Participants were slower in taking over control when the urgency of the situation was lower, or when they were holding a device in their hands. In addition, anticipation of and familiarity with the scenarios were associated with shorter TOTs, while the mean age of the participants did not appear to influence TOTs significantly.
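The predictors above can be thought of as terms in a simple linear model of mean TOT. The sketch below is purely illustrative: the coefficients are made-up placeholders chosen only to match the reported directions of effect (lower urgency slower, handheld device slower, anticipation faster), not the meta-analysis estimates.

```python
# Hypothetical linear model of mean take-over time (TOT). Coefficient
# values are invented for illustration; only their signs reflect the
# directions of effect reported in the meta-analysis.

def predicted_tot_seconds(time_budget_s, handheld_device, anticipated):
    tot = 2.0                    # assumed baseline TOT, seconds
    tot += 0.15 * time_budget_s  # longer time budget (lower urgency) -> slower
    tot += 1.0 if handheld_device else 0.0  # device in hands -> slower
    tot -= 0.5 if anticipated else 0.0      # anticipated scenario -> faster
    return tot

print(predicted_tot_seconds(7.0, handheld_device=True, anticipated=False))
```

Note that mean age is deliberately absent from the predictors, matching the finding that it had no significant influence on TOTs.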

HFAuto provided new insights and models of human-automation interaction as well as new interface concepts. Many results are positive: interfaces and human interaction were positively evaluated by non-expert participants, and transitions of control generally resulted in timely and adequate human reactions. However, safety concerns remain for take-over requests (TORs), where the automation asks the driver to take back control. We therefore tend to conclude that even with substantial time budgets, advanced interfaces, and DSM, human drivers will not always regain control of their vehicles safely; lane changes involving other road users are a particular concern. When drivers do not react appropriately to a TOR, the automation should ideally detect this and transition to a minimal-risk condition. If such minimal-risk solutions are implemented for all possible events, the result is SAE level 4 automation. However, as long as drivers can resume manual control, human factors remain an essential aspect of the safety design and evaluation of automation, where new dilemmas emerge regarding the authority of drivers to overrule the automation.
Given recent legal developments, we expect that vehicles up to SAE level 4 with a human driver in the vehicle will be legal in the near future, followed by legal acceptance of driverless vehicles with remote human supervision.

Results include (1) a comprehensive understanding of human capabilities and side effects of automated driving, both in monotonous and transient situations, (2) HMI that interact with the driver of a highly automated car, for situations of different criticality, (3) an ‘ecological’ driver monitor that estimates the operator’s vigilance level and hazard awareness, (4) driver models that predict the use of ACC, (5) a roadmap for market introduction of highly automated driving, and (6) trained researchers having the multidisciplinary and generalizable knowledge, skills, and vision required to address human factors challenges of automated driving. We expect our findings to be instrumental in research and development regarding automated vehicles.
