Eye tracking is used by psychologists, neurologists, vision researchers, and many others to understand the nuances of the human visual system and to provide insight into a person’s allocation of attention across the visual environment. When the gaze behavior of an observer immersed in a virtual environment displayed on a head-mounted display is tracked, the estimated gaze direction is encoded as a three-dimensional vector extending from the estimated location of the eyes into the 3D virtual environment. Additional computation is required to detect the target object at which gaze is directed. These methods must be robust to calibration error and eye-tracker noise, which may cause the gaze vector to miss the target object and hit an incorrect object at a different distance. Thus, the straightforward solution of a single vector-to-object collision test can misidentify the object of gaze. More involved metrics that rely upon an estimate of the angular distance from the ray to the center of the object must account for an object’s angular size as a function of distance, or for irregularly shaped edges, information that is not made readily available by popular game engines (e.g., Unity©/Unreal©) or rendering pipelines (e.g., OpenGL). The approach presented here avoids this limitation by projecting many rays distributed across an angular space centered on the estimated gaze direction.
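To make the multi-ray idea concrete, the sketch below samples ray directions uniformly within a cone centered on the estimated gaze vector and lets whatever each ray hits "vote" on the gaze target. It is a minimal illustration, not the reference implementation: the `raycast` callback stands in for a host engine's physics query (e.g., Unity's `Physics.Raycast`), and the cone half-angle, ray count, and toy sphere scene are assumed values chosen for demonstration.

```python
# Minimal sketch of the multi-ray gaze-target scheme described above.
# Assumption: raycast(origin, direction) is a hypothetical stand-in for an
# engine's physics query and returns an object id (or None on a miss).
import numpy as np

def cone_directions(gaze_dir, half_angle_deg, n_rays, rng=None):
    """Sample n_rays unit vectors uniformly within a cone around gaze_dir."""
    rng = np.random.default_rng() if rng is None else rng
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    # Uniform sampling over the spherical cap around +z.
    cos_t = rng.uniform(np.cos(np.radians(half_angle_deg)), 1.0, n_rays)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    local = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
    # Rotate the +z-aligned samples onto the gaze direction (Rodrigues' formula).
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(z, gaze_dir), float(np.dot(z, gaze_dir))
    if np.isclose(c, 1.0):
        return local                      # already aligned with +z
    if np.isclose(c, -1.0):
        return -local                     # gaze points along -z
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return local @ R.T

def gazed_object(eye_pos, gaze_dir, raycast, half_angle_deg=3.0, n_rays=100):
    """Return the object hit by the most rays in the cone, or None."""
    votes = {}
    for d in cone_directions(gaze_dir, half_angle_deg, n_rays):
        hit = raycast(eye_pos, d)
        if hit is not None:
            votes[hit] = votes.get(hit, 0) + 1
    return max(votes, key=votes.get) if votes else None

# Toy scene: a single sphere; a real system would query the engine instead.
def toy_raycast(origin, direction, center=np.array([0.0, 0.0, 5.0]), r=0.5):
    oc = origin - center
    b = float(np.dot(direction, oc))
    disc = b * b - (float(np.dot(oc, oc)) - r * r)
    return "sphere" if disc >= 0.0 and -b + np.sqrt(disc) > 0.0 else None

print(gazed_object(np.zeros(3), np.array([0.05, 0.0, 1.0]), toy_raycast))
```

Because every ray that lands on an object contributes a vote, the scheme tolerates a gaze vector that narrowly misses the target; the votes could further be weighted by each ray's angular distance from the cone center so that objects nearer the line of sight dominate.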
Computational complexity is a limiting factor in visualizing large-scale scientific data. Most approaches to rendering large datasets focus on novel algorithms that leverage cutting-edge graphics hardware to provide users with an interactive experience. In this paper, we instead demonstrate foveated imaging, which allows interactive exploration on low-cost hardware by tracking a participant’s gaze to drive the rendering quality of the image. Foveated imaging exploits the fact that the spatial resolution of the human visual system decreases dramatically away from the central point of gaze, allowing computational resources to be reserved for areas of importance. We demonstrate this approach on both vector and volumetric datasets, using face tracking to identify the participant’s gaze point, and evaluate our results by comparing against traditional techniques. In our evaluation, we found a significant increase in computational performance with our foveated imaging approach while maintaining high image quality in regions of visual attention.
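As a rough illustration of how a tracked gaze point can drive rendering quality, the sketch below composites a cheap low-resolution pass over the whole frame with a full-quality pass near the gaze point, blended with a smooth falloff. The `render_lo`/`render_hi` callbacks, the foveal window size, and the falloff width are hypothetical assumptions chosen for demonstration; they are not the vector and volume renderers evaluated here.

```python
# Minimal sketch of foveated compositing under the assumptions stated above.
# render_lo / render_hi are hypothetical callbacks that take an (h, w) tuple
# and return an (h, w, 3) float RGB image.
import numpy as np

def foveated_frame(render_lo, render_hi, gaze_xy, shape,
                   fovea_px=128, blend_px=64):
    h, w = shape
    # Cheap pass: render the full frame at quarter resolution and upsample.
    lo = np.kron(render_lo((h // 4, w // 4)), np.ones((4, 4, 1)))
    # Quality pass: in a real system this would be restricted to the foveal
    # window (e.g., with a scissor rect) rather than rendered full-frame.
    hi = render_hi((h, w))
    # Per-pixel distance from the gaze point drives the blend weight:
    # 1 inside the fovea, smoothly falling to 0 over blend_px pixels.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    weight = np.clip((fovea_px + blend_px - dist) / blend_px, 0.0, 1.0)
    return weight[..., None] * hi + (1.0 - weight[..., None]) * lo

# Toy usage with solid colors standing in for real renderer output.
frame = foveated_frame(
    lambda s: np.full(s + (3,), 0.2),   # low-detail pass
    lambda s: np.full(s + (3,), 0.9),   # high-detail pass
    gaze_xy=(320, 240), shape=(480, 640))
print(frame.shape)  # (480, 640, 3)
```

The savings come from the low-resolution pass covering most of the frame: with the gaze point supplied each frame by face tracking, the expensive pass only needs to keep up with the small foveal window.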