Pages A13-1 - A13-5,  © Society for Imaging Science and Technology 2021
Digital Library: EI
Published Online: January  2021
Pages 167-1 - 167-11,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 13

This paper analyses the use of Immersive Experiences (IX) within artistic research, as an interdisciplinary environment spanning artistic, practice-based research, visual pedagogies, and the social and cognitive sciences. It examines IX in the context of shared social spaces and presents the Immersive Lab University of Malta (ILUM) interdisciplinary research project. ILUM has a dedicated room, located at the Department of Digital Arts, Faculty of Media & Knowledge Sciences, at the University of Malta, equipped with life-size surround projection and surround sound so as to provide a number of viewers, located within the set-up, with an IX virtual reality environment.

Digital Library: EI
Published Online: January  2021
Pages 168-1 - 168-5,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 13

Purpose: Virtual Reality (VR) headsets are becoming more and more popular and are now standard attractions in many places such as museums and fairs. Although the issues of VR-induced cybersickness and eye strain, as well as the associated risk factors, are well known, most studies have focused on reducing or assessing this discomfort rather than predicting it. Since the negative experience of a few users can strongly impact a product's or an event's publicity, the aim of the study was to develop a simple questionnaire that could help a user rapidly and accurately self-assess their personal risk of experiencing discomfort before using VR. Methods: 224 subjects (age 30.44±2.62 y.o.) participated in the study. The VR experience was 30 minutes long, with 4 users participating simultaneously in each session. The experience was conducted with an HTC Vive and consisted of being at the bottom of the ocean and observing the surroundings. Users could see the other participants' avatars, move within a 12 m2 area, and interact with the environment. The experience was designed to produce as little discomfort as possible. Participants filled out a questionnaire, designed to assess their susceptibility to cybersickness, which included 11 questions about their personal information (age, gender, experience with VR, etc.), binocular vision, need for glasses and use of their glasses during the VR session, tendencies to suffer from other conditions (such as motion sickness or migraines), and their level of fatigue before the experiment. The questionnaire also contained three questions through which subjects self-assessed the impact of the session on their level of visual fatigue, headache, and nausea, the sum of which produced the subjective estimate of “VR discomfort” (VRD). A 5-point Likert scale was used for the questions where possible. The data of 29 participants were excluded from the analysis due to incomplete responses.
Results: The correlation analysis showed that responses to five questions correlated with the VRD: sex (r = -.19, p = .02 (FDR corrected)), susceptibility to headaches and migraines (r = -.25, p = .002), susceptibility to motion sickness (r = -.18, p = .02), fatigue or sickness before the session (r = -.26, p < .002), and stereoscopic vision issues (r = .23, p = .004). A linear regression model of the discomfort with these five questions as predictors (F(5, 194) = 9.19, p < 0.001, R2 = 0.19) showed that only the level of fatigue (beta = .53, p < .001) reached statistical significance. Conclusion: Even though answers to five questions were found to correlate with VR-induced discomfort, linear regression showed that only one of them (the level of fatigue) proved useful in predicting the level of discomfort. The results suggest that a tool whose purpose is to predict VR-induced discomfort can benefit from a combination of subjective and objective measures.
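The regression reported above can be reproduced in outline with ordinary least squares. The sketch below uses synthetic data (not the study's) with five stand-in predictors, one of which ("fatigue") is given a strong effect, mirroring the paper's finding that fatigue dominated the model; the data-generating coefficients and noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the five questionnaire predictors (sex,
# headache susceptibility, motion-sickness susceptibility, pre-session
# fatigue, stereo-vision issues) and a VRD score driven mainly by fatigue.
n = 195
X = rng.normal(size=(n, 5))
vrd = 0.53 * X[:, 3] + rng.normal(scale=1.0, size=n)

# Fit VRD ~ intercept + 5 predictors by ordinary least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, vrd, rcond=None)

# Coefficient of determination R^2 of the fitted model.
pred = A @ coef
ss_res = np.sum((vrd - pred) ** 2)
ss_tot = np.sum((vrd - vrd.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

With real questionnaire data one would also compute per-coefficient p-values (e.g. via a statistics package) to test which predictors reach significance, as the study did.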

Digital Library: EI
Published Online: January  2021
Pages 177-1 - 177-6,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 13

Situational awareness provides the decision-making capability to identify, process, and comprehend big data. In our approach, situational awareness is achieved by integrating and analyzing multiple aspects of the data using stacked bar graphs and geographic representations. We provide a data visualization tool that represents COVID-19 pandemic data on top of geographical information. The combination of geospatial and temporal data provides the information needed to conduct situational analysis for the COVID-19 pandemic. By providing interactivity, geographical maps can be viewed from different perspectives and offer insight into the dynamic aspects of the COVID-19 pandemic for the fifty states of the USA. We have overlaid dynamic information on top of a geographical representation in a way that is intuitive for decision making. We describe how modeling and simulation of data increase situational awareness, especially when coupled with immersive virtual reality interaction. This paper presents an immersive virtual reality (VR) environment and a mobile environment for data visualization using the Oculus Rift head-mounted display and smartphones. This work combines neural network predictions with human-centric situational awareness and data analytics to provide accurate, timely, and scientific strategies for combating and mitigating the spread of the coronavirus pandemic. The data visualization tool has been tested and evaluated with a real-time feed of the COVID-19 pandemic data set in the immersive, non-immersive, and mobile environments.
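Combining geospatial and temporal data as described requires pivoting a flat record feed into per-state time series, one series per metric, before it can drive stacked bars over a map. A minimal sketch, assuming a hypothetical feed of (state, date, cases, deaths) rows; the record values are invented for illustration:

```python
from collections import defaultdict

# Hypothetical daily records (state, date, new_cases, new_deaths);
# a real feed such as the paper's would supply many more rows.
records = [
    ("NY", "2020-04-01", 8000, 500),
    ("NY", "2020-04-02", 9000, 600),
    ("CA", "2020-04-01", 1200, 40),
    ("CA", "2020-04-02", 1500, 55),
]

# Pivot into the per-state series a stacked bar chart needs:
# one bar per date, one stacked segment per metric.
by_state = defaultdict(lambda: {"dates": [], "cases": [], "deaths": []})
for state, date, cases, deaths in sorted(records):
    series = by_state[state]
    series["dates"].append(date)
    series["cases"].append(cases)
    series["deaths"].append(deaths)
```

Each state's series can then be bound to its geographic region in the map overlay, with the date axis exposed through the interactive controls.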

Digital Library: EI
Published Online: January  2021
Pages 178-1 - 178-6,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 13

Healthcare practitioners, social workers, and care coordinators must work together seamlessly, safely, and efficiently. Within the context of the COVID-19 pandemic, understanding relevant evidence-based best practices, as well as identifying barriers to and facilitators of care for vulnerable populations, is of crucial importance. A gap currently exists in the lack of specific training for these specialized personnel to facilitate care for socially vulnerable populations, particularly racial and ethnic minorities. With continuing advancements in technology, VR-based training incorporates real-life experience and creates a "sense of presence" in the environment. Furthermore, immersive virtual environments offer considerable advantages over traditional training exercises, such as reduced time and cost for different what-if scenarios and opportunities for more frequent practice. This paper proposes the development of Virtual Reality Instructional (VRI) training modules geared toward COVID-19 testing. The VRI modules are developed for immersive, non-immersive, and mobile environments. This paper describes the development and testing of the VRI modules using the Unity gaming engine. These VRI modules are developed to help increase safety preparedness and mitigate social-distancing-related risks for safety management.

Digital Library: EI
Published Online: January  2021
Pages 179-1 - 179-7,  © Society for Imaging Science and Technology 2021
Volume 33
Issue 13

Digital Imaging and Communications in Medicine (DICOM) is an international standard to transfer, store, retrieve, print, process and display medical imaging information. It provides a standardized method to store medical images from many types of imaging devices. Typically, CT and MRI scans, which are composed of 2D slice images in DICOM format, can be inspected and analyzed with DICOM-compatible imaging software. Additionally, the DICOM format provides important information to assemble cross-sections into 3D volumetric datasets. Not many DICOM viewers are available for mobile platforms (smartphones and tablets), and most of them are 2D-based with limited functionality and user interaction. This paper reports on our efforts to design and implement a volumetric 3D DICOM viewer for mobile devices with real-time rendering, interaction, a full transfer function editor and server access capabilities. 3D DICOM image sets, either loaded from the device or downloaded from a remote server, can be rendered at up to 60 fps on Android devices. By connecting to our server, users can a) get pre-computed image quality metrics and organ segmentation results, and b) share their experience and synchronize views with other users on different platforms.
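Assembling DICOM cross-sections into a 3D volumetric dataset, as the viewer described above does, amounts to sorting the slices by their position along the scan axis and stacking their pixel arrays. A minimal sketch with hand-made slice records standing in for parsed DICOM headers; with a real series, a library such as pydicom would supply the position tag (ImagePositionPatient) and pixel data for each file:

```python
import numpy as np

# Hypothetical slices: "z" stands in for the position along the scan
# axis, "pixels" for the decoded 2D slice image.
rows, cols = 4, 4
slices = [
    {"z": 20.0, "pixels": np.full((rows, cols), 2, dtype=np.int16)},
    {"z": 0.0,  "pixels": np.full((rows, cols), 0, dtype=np.int16)},
    {"z": 10.0, "pixels": np.full((rows, cols), 1, dtype=np.int16)},
]

# DICOM files on disk carry no guaranteed order, so sort by slice
# position before stacking into a 3D volume.
slices.sort(key=lambda s: s["z"])
volume = np.stack([s["pixels"] for s in slices])  # (depth, rows, cols)

# Inter-slice spacing, needed for a correct aspect ratio when
# volume rendering.
spacing = slices[1]["z"] - slices[0]["z"]
```

The resulting volume array is what a transfer function editor and volume renderer then operate on.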

Digital Library: EI
Published Online: January  2021
