ICADAR

Improving autonomous vehicle drivers’ confidence via Augmented Reality visualization of artificial intelligence decision-making context

Thesis project

Trust in AI-driven systems is a key issue for the acceptability of autonomous vehicles (Wintersberger et al., 2019; von Sawitzky et al., 2019). Although automation should, in theory, allow a smoother and safer flow of traffic, the apparently unpredictable behavior of an automated vehicle is also likely to make its occupants uncomfortable. Another key issue is that no car is currently fully autonomous: in an unknown and potentially dangerous situation, the autonomous system gives up and hands control back to the driver, who must be ready to drive again. A major way to improve this trust is to improve the occupants' situational awareness through appropriate visualization techniques; adapted techniques can also ease the transition from autonomous to manual driving. Augmented reality can be a useful tool for both, as suggested by Bark et al. (2014). In both cases, the challenge is to visualize and explain what the autonomous vehicle is doing so that the occupant can understand it, accept it, and interact with it. Solving these issues is of crucial interest to automotive players such as manufacturers and OEMs, and would be an important step towards bringing level 4 and level 5 autonomous cars to market.

Scientific goal

The main scientific goal of this thesis is to improve trust in autonomous driving by improving the driver's situation awareness. The main research hypothesis is that augmented reality is a useful tool to explain to the driver what the autonomous system is doing and, therefore, when it is likely to hand back control. While augmented reality has already demonstrated its potential for improving user acceptance of autonomous driving (Wintersberger et al., 2019), many research questions remain open. We assume that a step forward can be made by providing the driver with a better perception of the environment, achieved by showing them potential hazards along the route and/or tactical choices (such as lane changes and crossing intersections). For example, if a sensor detects a child on the sidewalk who is likely to cross the road, the car will most probably slow down and drive a bit further away from the sidewalk. The challenge is to make the driver aware of both the potential hazard and the car's reaction. However, there is no guarantee that this will increase trust in the autonomous system, so another key challenge of the project will be to measure trust in the system, if possible in real time. This is why access to a motion platform, which enables a much more accurate perception of the car's reaction to a potential hazard, is likely to be an important benefit. Augmented reality provides the additional visual information that supports this enhanced understanding; it could, for instance, highlight the child on the side of the road and inform the driver of the car's otherwise unexpected maneuver. With a high level of autonomy, drivers may no longer be required to focus on operating the vehicle and will turn their attention to other tasks such as watching videos, working, or talking on the phone. Timing is also challenging: in the example above, the child may be visible at one moment and hidden a few seconds later, when the driver looks for the danger. This raises the research question of visualizing both visible and invisible dangers while taking into account the uncertainty of such hazards. This research will therefore investigate appropriate augmented reality cues for the driver and assess them in a driving simulator. Working with cognitive psychologists in the Crossings IRL, we will develop real-time assessment tools for trust.
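To make the hazard-to-cue mapping of the scenario above concrete, here is a minimal sketch, assuming a hypothetical `Hazard` record produced by the perception stack and an `ARCue` record consumed by a HUD renderer; all names and thresholds are illustrative assumptions, not a description of the project's actual system.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """Hypothetical output of the perception stack (illustrative only)."""
    label: str            # e.g. "pedestrian"
    crossing_prob: float  # estimated probability the hazard enters the car's path
    visible: bool         # currently in the driver's line of sight?

@dataclass
class ARCue:
    """Hypothetical cue description handed to a HUD renderer (illustrative only)."""
    target: str
    style: str     # "outline" for visible hazards, "ghost" for occluded ones
    opacity: float # salience encodes the AI's estimated risk

def hazard_to_cue(h: Hazard, risk_threshold: float = 0.3) -> ARCue | None:
    """Map a detected hazard to an AR cue; suppress low-risk detections to limit clutter."""
    if h.crossing_prob < risk_threshold:
        return None  # below threshold: displaying it would only add visual noise
    style = "outline" if h.visible else "ghost"  # occluded hazards get a distinct encoding
    return ARCue(target=h.label, style=style,
                 opacity=min(1.0, round(h.crossing_prob + 0.2, 2)))

# Example: a child detected on the sidewalk, partially hidden by a parked van.
cue = hazard_to_cue(Hazard(label="pedestrian", crossing_prob=0.7, visible=False))
print(cue)  # ARCue(target='pedestrian', style='ghost', opacity=0.9)
```

The threshold reflects a design choice already implicit in the scenario: surfacing every detection would clutter the display, so only hazards the AI actually reacts to are shown to the driver.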

Our project is characterized by its focus on the visualization of the AI decision process and, as a consequence, of uncertainty. This investigation aims at increasing situational awareness via augmented reality representations of co-located elements that are visible or invisible to the occupant (e.g. traffic conditions several kilometers away, pedestrians hidden by a vehicle and about to cross, abnormal behavior from other cars). The augmented reality visualizations will represent "what could be", that is to say the potential outcomes forecast by the AI when making its choices. Representing these processes requires dealing with original problems, such as the real-time presentation of uncertainty and the visualization of risk. In addition, the use of AR allows this display to be co-located with the driver's other activities and with the physical world. On the visualization side, several scientific obstacles will have to be addressed:

• Risk and potential-hazard visualizations already exist in the scientific visualization community, see (Cheong et al., 2016; Goda and Song, 2016) for example, but current approaches barely take real-time issues into account: hazard zones are generally static, whereas they are fully dynamic in our situation (a minimal sketch of such a dynamic encoding is given after this list). Emergency situations also have to be taken into account and, of course, the driver should not be made to panic.

• In aviation, alarms and attention tunneling have been studied extensively (Karar and Ghosh, 2014; Wickens and Alexander, 2009), yet they remain an open issue: there is no definitive solution for handling multiple alarms on the dashboard, using AR in flight raises several issues of its own (Yeh and Wickens, 2001), and studies with ecological validity are still missing (Bayle, 2021). Co-located events are even more complex to handle, as the few studies of AR use in flight tend to show. A new challenge may also arise from the introduction of level 4 and 5 autonomous vehicles, where the driver may be doing something other than driving, a situation that has not been studied with aircraft pilots. In this case, the system should (perhaps progressively) produce a fast sitrep (situation representation) containing only the information required for immediate action; selecting this information is, of course, crucial.

• The last challenge is more abstract and raises the question of the visualization of decisions: the AI system will make tactical decisions that are invisible by nature and yet need to be shown to the driver. What is the best way to convey such information?
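The real-time uncertainty obstacle above can be illustrated with a minimal sketch, assuming a simple exponential-decay model: once a hazard becomes occluded, confidence in its last observed position decays, and the displayed "ghost" zone grows while fading, so the driver sees a region of possible danger rather than a stale point. The half-life and the radius/opacity mapping are illustrative assumptions, not design decisions of the project.

```python
def occlusion_decay(seconds_since_seen: float, half_life_s: float = 2.0) -> float:
    """Confidence in the last observed position, halving every `half_life_s` seconds."""
    return 0.5 ** (seconds_since_seen / half_life_s)

def uncertainty_cue(seconds_since_seen: float,
                    base_radius_m: float = 0.5,
                    max_radius_m: float = 4.0) -> tuple[float, float]:
    """Return (radius_m, opacity) for a 'ghost' zone around the last known position.

    As confidence decays, the zone grows (the hazard may have moved anywhere
    nearby) while its opacity drops (the estimate is less and less reliable).
    """
    c = occlusion_decay(seconds_since_seen)
    radius = base_radius_m + (1.0 - c) * (max_radius_m - base_radius_m)
    return radius, c

# A pedestrian disappears behind a parked van; the ghost zone evolves over 6 s.
for t in (0.0, 2.0, 4.0, 6.0):
    radius, opacity = uncertainty_cue(t)
    print(f"t={t:.0f}s  radius={radius:.1f} m  opacity={opacity:.2f}")
```

In an actual HUD pipeline, such a mapping would presumably be driven by the perception stack's own uncertainty estimates rather than a fixed half-life; the sketch only shows the visual principle of growing, fading zones for invisible hazards.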

References

• Bark, K., Tran, C., Fujimura, K., Ng-Thow-Hing, V., 2014. Personal Navi: Benefits of an Augmented Reality Navigational Aid Using a See-Thru 3D Volumetric HUD, in: Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '14). ACM, Seattle, WA, USA, pp. 1–8. https://doi.org/10.1145/2667317.2667329

• Bayle, E., 2021. Entre fusion et rivalité binoculaire : impact des caractéristiques des stimuli visuels lors de l'utilisation d'un système de réalité semi-transparent monoculaire (Thèse de doctorat). Université Paris-Saclay.

• Cheong, L., Bleisch, S., Kealy, A., Tolhurst, K., Wilkening, T., Duckham, M., 2016. Evaluating the impact of visualization of wildfire hazard upon decision-making under uncertainty. Int. J. Geogr. Inf. Sci. 30, 1377–1404. https://doi.org/10.1080/13658816.2015.1131829

• Goda, K., Song, J., 2016. Uncertainty modeling and visualization for tsunami hazard and risk mapping: a case study for the 2011 Tohoku earthquake. Stoch. Environ. Res. Risk Assess. 30, 2271–2285. https://doi.org/10.1007/s00477-015-1146-x

• Karar, V., Ghosh, S., 2014. Attention Tunneling: Effects of Limiting Field of View Due to Beam Combiner Frame of Head-Up Display. J. Disp. Technol. 10, 582–589. https://doi.org/10.1109/JDT.2014.2311159

• von Sawitzky, T., Wintersberger, P., Riener, A., Gabbard, J.L., 2019. Increasing trust in fully automated driving: route indication on an augmented reality head-up display, in: Proceedings of the 8th ACM International Symposium on Pervasive Displays (PerDis '19). ACM, Palermo, Italy, pp. 1–7. https://doi.org/10.1145/3321335.3324947

• Wickens, C.D., Alexander, A.L., 2009. Attentional Tunneling and Task Management in Synthetic Vision Displays. Int. J. Aviat. Psychol. 19, 182–199. https://doi.org/10.1080/10508410902766549

• Wintersberger, P., Frison, A.-K., Riener, A., von Sawitzky, T., 2019. Fostering User Acceptance and Trust in Fully Automated Vehicles: Evaluating the Potential of Augmented Reality. PRESENCE: Virtual and Augmented Reality 27, 46–62. https://doi.org/10.1162/pres_a_00320

• Yeh, M., Wickens, C.D., 2001. Display Signaling in Augmented Reality: Effects of Cue Reliability and Image Realism on Attention Allocation and Trust Calibration. Hum. Factors 43, 355–365. https://doi.org/10.1518/001872001775898269

Etienne Peillard
Associate Professor

My research interests include human perception issues in Virtual and Augmented Reality, spatial perception in virtual and augmented environments, and more generally, the effect of perceptual biases in mixed environments.