Perceptual Characteristics of Visualizations for Occluded Objects in Handheld Augmented Reality
Handheld augmented reality (AR) systems have become increasingly popular in recent years. In mobile information browsing, the ability to see hidden objects can improve the user's situation awareness and navigation capabilities. AR visualizations for occluded objects, such as Edge-overlay X-ray, Saliency-based X-ray, and Melt, enable users to see hidden objects. This thesis investigates how such visualizations, presented on handheld devices in outdoor locations, affect human performance on multiple perceptual and cognitive tasks, including depth perception, target selection, and pedestrian navigation.
I conducted six user studies. The first study compared Edge-overlay X-ray and Melt visualizations. Melt was advantageous for egocentric depth perception (the distance from the observer to the target object) when a synthetic depth cue was present, and egocentric depth was underestimated with both visualizations. The second study investigated the effects of egocentric distance to the target object and of the rendering techniques used in X-ray visualizations. Depth was underestimated at medium- to far-field distances, and the different rendering techniques did not influence depth perception. The third and fourth studies investigated the effects of the size and resolution of handheld displays on egocentric, exocentric (the distance between two target objects), and ordinal (the relative ordering of two target objects) depth perception. Smaller displays caused less depth underestimation than larger displays, whereas larger displays were advantageous for ordinal perception. The fifth study compared Edge-overlay and Saliency-based X-ray visualizations and found that the two visualizations complement each other in different use cases. The sixth study compared X-ray visualization with mobile maps in an outdoor navigation task. X-ray visualization caused fewer context switches than mobile maps, and self-aligning visualizations yielded better navigation performance.
The key conclusions of this thesis are: (1) As in virtual reality interfaces, both egocentric and exocentric distances are underestimated in handheld AR interfaces. (2) Visual clutter in AR visualizations reduces the visibility of the occluded object and degrades depth perception; depth perception can be improved by providing a clear view of the occluded objects. (3) To support different usage scenarios, handheld AR systems should offer multiple visualizations for occluded objects that complement each other. For example, Edge-overlay X-ray should be used for highly salient foregrounds, Saliency-based X-ray is advantageous for surfaces with dense edges, and Melt should be used where a clear view of the occluded regions is required. (4) Depth perception improves when handheld AR systems dynamically adapt their geometric field of view (FOV) to match the display FOV. (5) Although large handheld displays are hard to carry and use, displays should be reasonably large for AR systems that present multiple graphical objects simultaneously. (6) For pedestrian navigation, self-aligning tools provide advantages, and among them egocentric tools such as X-ray visualization reduce cognitive load.
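The FOV-matching idea in conclusion (4) can be sketched minimally: under a pinhole model, both the geometric (rendering camera) FOV and the display FOV are angles subtended by a plane at a distance, so matching them means setting the virtual camera's FOV to the angle the display subtends at the user's eye. The numeric values below (display width, viewing distance) are illustrative assumptions, not measurements from the thesis.

```python
import math

def fov_deg(extent_m: float, distance_m: float) -> float:
    """Angle in degrees subtended by a plane of the given width at a distance."""
    return math.degrees(2 * math.atan(extent_m / (2 * distance_m)))

# Assumed example values: a 0.11 m wide handheld display held 0.40 m from the eyes.
display_fov = fov_deg(0.11, 0.40)

# Matching the rendering camera's horizontal FOV to display_fov keeps the
# angular size of AR content consistent with the real scene behind the device.
print(round(display_fov, 1))
```

Because viewing distance changes as the user moves the device, a dynamic implementation would re-estimate it (e.g. via a front-facing camera) and update the camera FOV each frame.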