Traffic simulation is a critical tool used by psychologists and engineers to study road behavior and improve safety standards. However, creating large 3D virtual environments requires specific technical expertise that traditionally trained traffic researchers may not have. This research proposes an approach that uses fundamental image processing techniques to identify key features of an environment from a top-down view, such as satellite imagery or a map. The segmented data from the processed image is then used to create an approximate 3D virtual environment: a mesh of the detected roads is generated automatically, while buildings and vegetation are selected from a library based on detected attributes. This research would enable traffic researchers with little to no 3D modeling experience to create large, complex environments for studying a variety of traffic scenarios.
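To make the road-detection step concrete, the following is a minimal sketch of one way the "fundamental image processing" stage could be realized with OpenCV. The color thresholds, morphology kernel, and the assumption that roads appear as low-saturation gray regions are illustrative choices, not the parameters or method used in this work.

```python
# Sketch: extract a binary road mask from a top-down image using basic
# image processing (thresholding + morphology). Parameters are assumptions.
import cv2
import numpy as np

def extract_road_mask(image_path: str) -> np.ndarray:
    """Return a binary mask of likely road pixels in a top-down image."""
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Assumption: roads appear as low-saturation, mid-brightness (gray) regions.
    lower = np.array([0, 0, 60], dtype=np.uint8)
    upper = np.array([180, 40, 200], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Close small gaps and remove speckle so the mask forms contiguous road segments.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```

The resulting mask could then be skeletonized or contoured to obtain road centerlines from which a mesh is generated; building and vegetation regions would be handled by analogous segmentation passes.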
Eye tracking is used by psychologists, neurologists, vision researchers, and many others to understand the nuances of the human visual system and to provide insight into a person's allocation of attention across the visual environment. When tracking the gaze behavior of an observer immersed in a virtual environment displayed on a head-mounted display, estimated gaze direction is encoded as a three-dimensional vector extending from the estimated location of the eyes into the 3D virtual environment. Additional computation is required to detect the target object at which gaze was directed. These methods must be robust to calibration error and eye tracker noise, which may cause the gaze vector to miss the target object and hit an incorrect object at a different distance. Thus, the straightforward solution of a single vector-to-object collision can inaccurately indicate the object of gaze. More involved metrics that rely on an estimate of the angular distance from the ray to the center of the object must account for an object's angular size as a function of distance, or for irregularly shaped edges, information that is not made readily available by popular game engines (e.g. Unity/Unreal) or rendering pipelines (e.g. OpenGL). The approach presented here avoids this limitation by projecting many rays distributed across an angular space centered on the estimated gaze direction.
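The following is a minimal sketch of the multi-ray idea under stated assumptions: rays are sampled within a small angular cone around the estimated gaze vector and the object hit most often is taken as the gazed-at object. The function `scene_raycast`, the cone half-angle, and the ray count are hypothetical placeholders; the actual ray distribution and scoring used in this work may differ.

```python
# Sketch: distribute rays within a cone around the gaze direction and
# vote for the object hit most often. `scene_raycast(origin, direction)`
# is a hypothetical stand-in for the engine's ray-object intersection query.
from collections import Counter
import numpy as np

def rays_in_cone(gaze_dir, half_angle_deg=2.0, n_rays=100, rng=None):
    """Sample unit vectors uniformly within a cone centered on gaze_dir."""
    rng = np.random.default_rng() if rng is None else rng
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)

    # Build an orthonormal basis (u, v, gaze_dir).
    helper = np.array([1.0, 0.0, 0.0]) if abs(gaze_dir[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(gaze_dir, helper)
    u /= np.linalg.norm(u)
    v = np.cross(gaze_dir, u)

    # Uniform sampling over the spherical cap of the cone.
    cos_theta = rng.uniform(np.cos(np.radians(half_angle_deg)), 1.0, n_rays)
    sin_theta = np.sqrt(1.0 - cos_theta ** 2)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    return (np.outer(cos_theta, gaze_dir)
            + np.outer(sin_theta * np.cos(phi), u)
            + np.outer(sin_theta * np.sin(phi), v))

def gazed_object(eye_pos, gaze_dir, scene_raycast):
    """Return the object hit by the largest share of rays, or None."""
    hits = [scene_raycast(eye_pos, d) for d in rays_in_cone(gaze_dir)]
    counts = Counter(h for h in hits if h is not None)
    return counts.most_common(1)[0][0] if counts else None
```

Because the rays sample the angular neighborhood of the gaze estimate directly, the hit counts implicitly reflect each object's angular size and silhouette at its true distance, without requiring the engine to expose that geometry explicitly.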