Translucency is an important appearance attribute. The caustic patterns cast by translucent objects onto another surface encode information about a material's subsurface light transport properties. A previous study demonstrated that objects placed on a white surface are judged more translucent by human observers than identical objects placed on a black surface. The authors proposed the absence of caustics as a potential explanation for this discrepancy, since a perfectly black surface, unlike its white counterpart, does not permit observation of the caustics. We hypothesize that caustics are salient image cues to perceived translucency and that they attract the visual attention of human observers assessing the translucency of an object. To test this hypothesis, we replicated the experiment reported in the previous study, but in addition to collecting observer responses, we also tracked observers' eye movements during the experiment. This study reveals that although gaze fixation patterns differ between white-floor and black-floor images, the objects' bodies still attract most of the fixations, while caustics may be only a secondary cue.
We present a state-of-the-art and scoping review of the literature examining embodied information behaviors, as reflected in shared gaze interactions, within co-present extended reality (XR) experiences. The recent proliferation of consumer-grade head-mounted XR displays, situated at multiple points along the Reality-Virtuality Continuum, has increased their application in social, collaborative, and analytical scenarios that utilize data and information at multiple scales. Shared gaze represents a modality for synchronous interaction in these scenarios, yet the implementation of shared eye gaze within co-present XR contexts remains poorly understood. We use gaze behaviors as a proxy for examining embodied information behaviors. This review examines the application of eye tracking technology to facilitate interaction in multi-user XR by sharing a user's gaze, identifies salient themes in research published since 2013 in this context, and identifies patterns within these themes relevant to embodied information behavior in XR. We review a corpus of 50 research papers investigating the application of shared gaze and gaze tracking in XR, assembled using the SALSA framework and searches across multiple databases. The publications were reviewed for study characteristics, technology types, use scenarios, and task types. We construct a state of the field and highlight opportunities for innovation and challenges for future research directions.