Visual Guides for Learning in Augmented Reality

Augmented reality (AR) superimposes virtual elements onto real ones, making it possible to add virtual content to reality or to modify real elements in a co-located way. In such an application, the user can be surrounded by virtual elements linked to their environment. However, only the elements placed in front of them, within their field of view, are visible; all other virtual elements remain invisible as long as they are not looked at, so the information they carry is simply not perceived. Representing information outside the field of view is therefore a major challenge for AR.
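To make this visibility constraint concrete, the test underlying most out-of-view cue techniques can be sketched as follows. This is a minimal 2D simplification (horizontal angles only, hypothetical function name): the object is either inside the horizontal field of view, or a signed angular offset is returned that an application could use to orient an arrow or halo cue at the display edge.

```python
def out_of_view_cue(cam_yaw_deg, obj_angle_deg, fov_deg=90.0):
    """Return None if the object lies inside the horizontal field of view,
    otherwise the signed angle (in degrees) toward it, usable to orient an
    off-screen indicator at the display edge."""
    # Signed angular offset between camera heading and object, wrapped to (-180, 180].
    offset = (obj_angle_deg - cam_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(offset) <= fov_deg / 2.0:
        return None  # object is visible: no cue needed
    return offset    # negative = cue points left, positive = cue points right
```

For example, with a 90° field of view, an object 30° off the camera heading is visible (`None`), while one 120° off returns `120.0`, i.e. a cue pointing to the right. Real AR devices further constrain `fov_deg`, which is precisely why the problem is amplified on current headsets.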

This problem is in fact twofold. On the one hand, it is a question of representing objects that are outside the user's field of view in order to inform them of their presence (Petford et al., 2019). On the other hand, techniques that aim not to represent objects but to guide the user's gaze toward them can also be used (Biocca et al., 2007).

Several works have already addressed the representation of out-of-view objects in both virtual and augmented reality, and the limited field of view of current AR devices further amplifies the problem (Kishishita et al., 2014). The subject nevertheless remains underexplored, and several technological and perceptual obstacles remain: these elements must be represented on a limited display without multiplying the information presented, which would overload the user.

AR environments also make it possible to offer a learner multiple stimuli: verbal, visual, haptic, etc. Because the sensory system and working memory have limited capacities, processing this information can be cognitively expensive. When too much data must be encoded and processed simultaneously, the sensory and memory systems can become overloaded (Roda and Thomas, 2006).

These difficulties in processing visual (or multimedia) information can be mitigated. Doing so requires focusing on the user's interaction with AR in a learning situation: visual cues could guide the learner through the successive actions of a procedure, while the virtual environment maintains and directs their attention through appropriate information-display methods (Rapp, 2006).

In this context, the objective of this project is to propose visual guides and methods for representing out-of-view objects in a specific situation: learning procedures in augmented reality. We will first build on existing AR representation techniques, evaluating them against our application case to highlight their shortcomings and possible improvements. We will then conduct an evaluation grounded in cognitive psychology and ergonomics in order to propose representation and guidance techniques adapted to a learning task. These results will finally allow us to propose a set of techniques and best practices for developing learning applications in AR. These recommendations will be particularly suited to industrial uses of AR, where complex procedures must be learned (Bottani and Vignali, 2019).


References:

  • Biocca, F., Owen, C., Tang, A., & Bohil, C. (2007). Attention issues in spatial information systems: Directing mobile users’ visual attention using augmented reality. Journal of Management Information Systems, 23(4), 163–184.
  • Bottani, E., & Vignali, G. (2019). Augmented reality technology in the manufacturing industry: A review of the last decade. IISE Transactions, 51(3), 284–310.
  • Kishishita, N., Kiyokawa, K., Orlosky, J., Mashita, T., Takemura, H., & Kruijff, E. (2014). Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks. Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2014), 177–186.
  • Petford, J., Carson, I., Nacenta, M. A., & Gutwin, C. (2019). A comparison of guiding techniques for out-of-view objects in full-coverage displays. Proceedings of the CHI Conference on Human Factors in Computing Systems.
  • Rapp, D. N. (2006). The value of attention aware systems in educational settings. Computers in Human Behavior, 22(4), 603–614.
  • Roda, C., & Thomas, J. (2006). Attention aware systems: Theories, applications, and research agenda. Computers in Human Behavior, 22(4), 557–587.
Etienne Peillard
Associate Professor

My research interests include human perception issues in Virtual and Augmented Reality, spatial perception in virtual and augmented environments and, more generally, the effect of perceptual biases in mixed environments.