In this paper, I present a proposal for a virtual reality subjective experiment to be performed at Texas State University as part of the VQEG-IMG test plan for the definition of a new recommendation for the subjective assessment of eXtended Reality (XR) communications (work item ITU-T P.IXC). More specifically, I discuss the challenges of estimating user quality of experience (QoE) for immersive applications and detail the VQEG-IMG test plan tasks for XR subjective QoE assessment. I also describe the experimental choices of the audio-visual experiment to be performed at Texas State University, whose goal is to compare two possible scenarios for teleconference meetings: a virtual reality representation and a realistic representation.
Concerns about head-mounted displays have led to numerous studies of their potential impact on the visual system. Yet none have investigated whether the use of Virtual Reality (VR) head-mounted displays, with their reduced field of view and visually demanding environment, could reduce the spatial spread of the attentional window. To address this question, we measured the useful field of view in 16 participants right before playing a VR game for 30 minutes and immediately afterwards. The test involves calculating the presentation-time threshold necessary for efficient perception of a target presented in the centre of the visual field and a target presented in the periphery, and consists of three subtests of increasing difficulty. Data comparison did not show a significant difference between the pre-VR and post-VR sessions (subtest 2: F(1,11) = 0.7, p = .44; subtest 3: F(1,11) = 0.9, p = .38). However, participants’ performance on central target perception decreased in the most demanding subtest (F(1,11) = 8.1, p = .02). This result suggests that changes in spatial attention are possible after prolonged VR exposure.
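The abstract does not specify how the presentation-time threshold is computed; a common choice in psychophysics is an adaptive staircase. Below is a minimal Python sketch of a 1-up/2-down staircase (converging near 70.7% correct), offered purely to illustrate the idea of threshold estimation; the actual UFOV procedure may differ.

```python
# Minimal sketch: estimating a presentation-time threshold with a
# 1-up/2-down adaptive staircase. All parameters are illustrative
# assumptions, not the procedure used in the study.
import random

def staircase(trial_fn, start_ms=200.0, step_ms=10.0, n_reversals=8):
    """trial_fn(duration_ms) -> True if the target was correctly perceived."""
    duration, correct_streak, direction = start_ms, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if trial_fn(duration):
            correct_streak += 1
            if correct_streak == 2:            # two correct -> harder (shorter)
                correct_streak = 0
                if direction == +1:            # direction flip = reversal
                    reversals.append(duration)
                direction = -1
                duration = max(duration - step_ms, step_ms)
        else:                                  # one error -> easier (longer)
            correct_streak = 0
            if direction == -1:
                reversals.append(duration)
            direction = +1
            duration += step_ms
    return sum(reversals) / len(reversals)     # threshold estimate (ms)

# Simulated observer: perceives the target when duration exceeds ~120 ms.
estimate = staircase(lambda d: d + random.gauss(0, 15) > 120.0)
```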
At present, research on emotion in virtual environments is largely limited to subjective materials, and there are very few studies based on objective physiological signals. In this article, the authors conducted a user experiment to study the user emotion experience of virtual reality (VR) by comparing subjective feelings and physiological data in VR and two-dimensional (2D) display environments. First, they analyzed the data from self-report questionnaires, including the Self-Assessment Manikin (SAM), the Positive And Negative Affect Schedule (PANAS), and the Simulator Sickness Questionnaire (SSQ). The results indicated that VR causes a higher level of arousal than 2D and more easily evokes positive emotions. Both 2D and VR environments are prone to causing eye fatigue, but VR is more likely to cause symptoms of dizziness and vertigo. Second, they compared the differences in electrocardiogram (ECG), skin temperature (SKT), and electrodermal activity (EDA) signals between the two conditions. Statistical analysis showed significant differences in all three signals: participants in the VR environment had a higher degree of excitement, and their mood fluctuations were more frequent and more intense. In addition, the authors used different machine learning models for emotion detection and compared the accuracies on the VR and 2D datasets. The accuracies of all algorithms were higher in the VR environment than in 2D, corroborating that volunteers in the VR environment produced more pronounced electrodermal signals and had a stronger sense of immersion. This article addresses the shortcomings of existing work: the authors used objective physiological signals for experience evaluation and contrasted them with different types of subjective materials. They hope their study can provide helpful guidance for the engineering practice of virtual reality.
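The abstract does not state which features or models were used; a minimal sketch of the general pipeline, assuming windowed summary statistics over ECG/SKT/EDA and a scikit-learn classifier, is shown below. The feature set, labels, and placeholder data are illustrative assumptions, not the study's method.

```python
# Minimal sketch: emotion detection from physiological signals via windowed
# features and a standard classifier. Features and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(ecg_rr, skt, eda):
    """Summary features for one time window (all inputs are 1-D arrays)."""
    return [
        60.0 / np.mean(ecg_rr),       # mean heart rate (bpm) from RR intervals (s)
        np.std(ecg_rr),               # heart-rate variability proxy
        np.mean(skt),                 # mean skin temperature
        np.mean(eda),                 # tonic electrodermal level
        np.sum(np.diff(eda) > 0.05),  # crude count of phasic EDA rises
    ]

rng = np.random.default_rng(0)

def fake_dataset(arousal, n_windows=200):
    """Placeholder data; a real experiment would segment recorded streams."""
    return np.array([window_features(rng.normal(0.8 - 0.1 * arousal, 0.05, 60),
                                     rng.normal(33.0, 0.3, 60),
                                     rng.normal(2.0 + arousal, 0.3, 60))
                     for _ in range(n_windows)])

X = np.vstack([fake_dataset(0.0), fake_dataset(1.0)])
y = np.array([0] * 200 + [1] * 200)   # 0 = low arousal, 1 = high arousal

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```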
Modern virtual reality (VR) headsets use lenses that distort the visual field, typically with distortion increasing with eccentricity. While content is pre-warped to counter this radial distortion, residual image distortions remain. Here we examine the extent to which such residual distortion impacts the perception of surface slant. In Experiment 1, we presented slanted surfaces in a head-mounted display and observers estimated the local surface slant at different locations. In Experiments 2 (slant estimation) and 3 (slant discrimination), we presented stimuli on a mirror stereoscope, which allowed us to more precisely control viewing and distortion parameters. Taken together, our results show that radial distortion has a significant impact on perceived surface attitude, even following correction. Of the distortion levels we tested, 5% distortion results in significantly underestimated and less precise slant estimates relative to distortion-free surfaces. In contrast, Experiment 3 reveals that a level of 1% distortion is insufficient to produce significant changes in slant perception. Our results highlight the importance of adequately modeling and correcting lens distortion to improve VR user experience.
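As a rough illustration of pre-warping, consider a single-coefficient radial model r' = r(1 + k r²), with r normalized to 1 at the image edge; reading “5% distortion” as a 5% radial displacement at the edge gives k = 0.05. This model and reading are assumptions for illustration; the paper's actual distortion model is not given in the abstract.

```python
# Minimal sketch: forward radial distortion and its inverse (pre-warp)
# under r' = r * (1 + k * r^2). The model and k are illustrative.
import numpy as np

def distort(xy, k):
    """Apply radial distortion to N x 2 normalized image coordinates."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k * r2)

def prewarp(xy, k, iters=10):
    """Invert the distortion by fixed-point iteration (no closed form)."""
    und = xy.copy()
    for _ in range(iters):
        r2 = np.sum(und**2, axis=1, keepdims=True)
        und = xy / (1.0 + k * r2)
    return und

pts = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
k = 0.05
print(distort(pts, k))              # edge point moves out to r = 1.05
print(distort(prewarp(pts, k), k))  # round trip recovers the originals
```

Residual distortion then corresponds to the pre-warp using a slightly wrong k, so the round trip above no longer cancels exactly.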
We analyzed the impact of common stereoscopic three-dimensional (S3D) depth distortions on S3D optic flow in virtual reality environments. These depth distortions are introduced by mismatches between image acquisition and display parameters. The results show that such S3D distortions induce large S3D optic flow distortions and may even induce partial or full optic flow reversal within a certain depth range, depending on the viewer’s moving speed and the magnitude of the S3D distortion. We hypothesize that this S3D optic flow distortion may be a source of intra-sensory conflict contributing to visually induced motion sickness.
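The link between acquisition/display mismatch and depth distortion can be illustrated with the standard geometric model of stereoscopic viewing: screen parallax follows from camera baseline, focal length, convergence distance, and magnification, and perceived depth follows from parallax, viewer IPD, and viewing distance. The sketch below uses illustrative parameter names and values, not those of the study; because a mismatched configuration remaps depth nonlinearly, the same viewer translation then produces distorted optic flow.

```python
# Minimal sketch: perceived depth under a standard stereo camera/display
# geometry. All parameter values are illustrative assumptions.
import numpy as np

def perceived_depth(Z, f, b, Zc, M, e, V):
    """Z: true depth (m); f: focal length (m); b: camera baseline (m);
    Zc: convergence distance (m); M: sensor-to-screen magnification;
    e: viewer interpupillary distance (m); V: viewing distance (m)."""
    p = M * f * b * (1.0 / Zc - 1.0 / Z)   # on-screen parallax (m)
    return e * V / (e - p)                  # perceived depth (m)

f, e, V = 0.035, 0.065, 2.0
M = e * V / (f * 0.065)  # magnification making b = 0.065 m orthostereoscopic
Z = np.linspace(1.0, 10.0, 10)
print(perceived_depth(Z, f, b=0.065, Zc=V, M=M, e=e, V=V))  # matched: ~= Z
print(perceived_depth(Z, f, b=0.075, Zc=V, M=M, e=e, V=V))  # nonlinear remap
```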
This paper describes a comparison of the user experience of virtual reality (VR) image formats. The authors prepared the following four conditions and evaluated the user experience while viewing VR images with a headset by measuring subjective and objective indices: Condition 1, monoscopic 180-degree image; Condition 2, stereoscopic 180-degree image; Condition 3, monoscopic 360-degree image; Condition 4, stereoscopic 360-degree image. On the subjective indices (reality, presence, and depth sensation), condition 4 was rated highest, and conditions 2 and 3 were rated similarly. In addition, the objective indices (eye and head tracking) revealed a tendency to suppress head movement with 180-degree images.
The increasing replication of human behavior with virtual agents and proxies in multi-user or collaborative virtual reality environments (CVEs) has spurred a surge of academic research and training applications. The user experience for emergency response and training in emergency and catastrophic situations may be strongly influenced by the use of computer bots, avatars, and virtual agents. Our proposed collaborative virtual reality nightclub environment accordingly provides the flexibility to run manifold scenarios and evacuation drills for emergency and disaster preparedness. Modeling such an environment is essential because it helps emulate the emergencies we experience in our routine lives and provides a learning platform for preparing for extreme events. The results of a user study measuring presence in the VE with the presence questionnaire (PQ) are discussed in detail; we found a consistent positive relation between presence and task performance in VEs. The results further suggest that most users feel this application could be a good tool for education and training purposes.
Developing an augmented reality (AR) system involves a multitude of interconnected algorithms, such as image fusion, camera synchronization and calibration, and brightness control, each with diverse parameters. This abundance of features, while beneficial for applicability to different tasks, is burdensome for developers as they navigate different combinations and pick the most suitable configuration for their application. Additionally, the temporally inconsistent nature of the real world hinders the development of reproducible and reliable testing methods for AR systems. To help address these issues, we develop and test a virtual reality (VR) environment [1] that allows the simulation of variable AR configurations for image fusion. In this work, we improve our system with a more realistic AR glass model that adheres to physical light and glass properties. Our implementation combines the incoming real-world background light and the AR projector light at the level of the AR glass.
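Combining background and projector light at the glass can be approximated, in its simplest form, as a linear mix weighted by the glass transmittance and the combiner reflectance. The sketch below shows that simple linear model; the paper's actual physical model is more detailed, and the parameter names and values here are illustrative assumptions.

```python
# Minimal sketch: optical see-through compositing at the AR glass, assuming
#   observed = t * background + r * projector
# with transmittance t and combiner reflectance r (illustrative values).
import numpy as np

def compose_ar(background, projector, t=0.7, r=0.25):
    """background, projector: H x W x 3 linear-light images in [0, 1]."""
    return np.clip(t * background + r * projector, 0.0, 1.0)

bg = np.full((4, 4, 3), 0.5)       # mid-gray real-world scene
overlay = np.zeros((4, 4, 3))
overlay[1:3, 1:3, 1] = 1.0          # bright green AR patch
out = compose_ar(bg, overlay)       # AR content never fully occludes the world
```

One consequence the model makes explicit: because t is nonzero, dark AR pixels cannot occlude a bright background, which is why brightness control matters in such systems.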
The possible achievements of accurate and intuitive 3D image segmentation are endless. For our specific research, we aim to give doctors around the world, regardless of their computer knowledge, a virtual reality (VR) 3D image segmentation tool that allows medical professionals to better visualize their patients’ data sets and thus attain the best understanding of their respective conditions. We implemented an intuitive virtual reality interface that can accurately display MRI and CT scans and quickly and precisely segment 3D images, offering two different segmentation algorithms. Simply put, our application must fit into even the busiest and most practiced physicians’ workdays while providing them with a new tool, the likes of which they have never seen before.
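The abstract does not name the two segmentation algorithms the tool offers. For orientation, below is a minimal sketch of threshold-based 3-D region growing, one common interactive approach to volume segmentation; it is purely illustrative and not necessarily one of the tool's algorithms.

```python
# Minimal sketch: 3-D region growing from a seed voxel, keeping voxels whose
# intensity is within a tolerance of the seed value. Illustrative only.
import numpy as np
from collections import deque

def region_grow(volume, seed, tol):
    """volume: 3-D intensity array; seed: (z, y, x) tuple; tol: intensity tolerance."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:           # 6-connected neighborhood
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and abs(volume[n] - seed_val) <= tol:
                mask[n] = True
                queue.append(n)
    return mask
```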
This paper proposes an interaction design method for presenting haptic experiences as intended in virtual reality (VR). The method, which we named “Augmented Cross-Modality”, translates the physiological responses, knowledge, and impressions associated with the real-world experience into audio-visual stimuli and adds them to the interaction in VR. In this study, as expressions for presenting the haptic experience of gripping an object strongly and lifting a heavy object, we design hand tremor, strong gripping, and increasing heart rate in VR. The objective is first to enhance the sense of bodily strain with these augmented cross-modal expressions, and then to change the quality of the total haptic experience, making it closer to the experience of lifting a heavy object. The method is evaluated with several rating scales, interviews, and force sensors attached to a VR controller. The results suggest that the method’s expressions enhanced the haptic experience of strong gripping in almost all participants, confirming its effectiveness.
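One way to picture such cross-modal expressions is as mappings from measured grip force to cue parameters such as tremor amplitude and heartbeat tempo. The sketch below is a hypothetical illustration of that idea; the mapping functions and constants are assumptions, not the paper's parameters.

```python
# Minimal sketch: mapping normalized controller grip force to cross-modal
# cues (camera tremor and heartbeat tempo). All constants are hypothetical.
import math

def tremor_amplitude(force, f_max=1.0, max_amp_deg=0.8):
    """Camera-shake amplitude (deg) grows quadratically with grip force."""
    return max_amp_deg * min(force / f_max, 1.0) ** 2

def heart_rate_bpm(force, rest=70.0, peak=120.0, f_max=1.0):
    """Heartbeat-sound tempo rises smoothly from rest toward a peak."""
    x = min(force / f_max, 1.0)
    return rest + (peak - rest) * (1 - math.cos(math.pi * x)) / 2

def tremor_offset_deg(force, t):
    """Per-frame camera pitch offset: amplitude-scaled 8 Hz oscillation."""
    return tremor_amplitude(force) * math.sin(2 * math.pi * 8.0 * t)
```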