Aviation Maintenance Technicians (AMTs) play a critical role in ensuring the safety, reliability, and readiness of aviation operations worldwide. Per Federal Aviation Administration (FAA) regulations, certified AMTs must document mechanic-related experience to maintain their certification. Current aviation maintenance training methods center on classroom instruction, printed manuals, videos, and on-the-job training. Given the constantly evolving digital landscape, there is an opportunity to modernize the way AMTs are trained, remain current, and conduct on-the-job training. This research explores the implementation of Virtual Reality (VR) platforms as a method for enhancing the aviation training experience in the areas of aircraft maintenance and sustainability. One outcome of this research is a virtual training classroom module for aircraft maintenance, built with Mozilla Hubs, a web-based, open-source, immersive platform. While it is widely believed that VR enhances learning, few controlled experiments have been conducted to demonstrate that this is the case. The goal of this research is to add to the general knowledge on the use of VR for training, and specifically for aircraft maintenance training.
This study evaluates user experiences in Virtual Reality (VR) and Mixed Reality (MR) systems during task-based interactions. Three experimental conditions were examined: MR (Physical), MR (CG), and VR. Subjective and objective indices were used to assess user performance and experience. The results demonstrated significant differences among conditions, with MR (Physical) consistently outperforming MR (CG) and VR in various aspects. Eye-tracking data revealed that users spent less time observing physical objects in the MR (Physical) condition, primarily relying on virtual objects for task guidance. Conversely, in conditions with a higher proportion of CG content, users spent more time observing objects but reported increased discomfort due to delays. These findings suggest that the ratio of virtual to physical objects significantly impacts user experience and performance. This research provides valuable insights into improving user experiences in VR and MR systems, with potential applications across various domains.
Pixels per degree (PPD) alone is not a reliable predictor of a high-resolution experience in VR and AR, because the experience depends not only on PPD but also on display fill factor, pixel arrangement, graphics rendering, and other factors. This complicates architecture decisions and design comparisons. Is there a simple way to capture all the contributors and match user experience? In this paper, we present a system-level model, the system MTF, to predict perceptual quality across the key VR/AR dimensions: pixel shape (display), pixels per degree (display), fill factor (display), optical blur (optics), and image processing (graphics pipeline). The metric can be defined in much the same way as the traditional MTF for imaging systems: by examining the image formed from a point source and then taking the Fourier transform of the response function, with special mathematical treatment. One application is presented on perceived text quality, where two weight functions, depending on text orientation and frequency, are incorporated into the above model. A perceptual study of text quality was performed to validate the system MTF model.
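The point-source approach described in this abstract can be sketched numerically. The following is a minimal 1D illustration, not the authors' implementation: it assumes a square pixel aperture determined by pixel pitch and fill factor, Gaussian optical blur, and computes an MTF as the normalized magnitude of the Fourier transform of the combined point response. All function names and parameter values are illustrative.

```python
import numpy as np

def pixel_psf(x, pitch_deg, fill_factor):
    """1D pixel aperture: emitting width = pitch * sqrt(fill_factor)."""
    half = pitch_deg * np.sqrt(fill_factor) / 2.0
    return (np.abs(x) <= half).astype(float)

def optical_blur(x, sigma_deg):
    """Gaussian stand-in for the optical blur kernel."""
    return np.exp(-0.5 * (x / sigma_deg) ** 2)

def system_mtf_1d(pitch_deg=1/30, fill_factor=0.8, blur_sigma_deg=0.01, n=4096):
    """MTF = |FFT(system point response)|, normalized to 1 at DC."""
    dx = pitch_deg / 64                      # fine sampling grid (degrees)
    x = (np.arange(n) - n // 2) * dx
    # system point response = pixel aperture convolved with optical blur
    psf = np.convolve(pixel_psf(x, pitch_deg, fill_factor),
                      optical_blur(x, blur_sigma_deg), mode="same")
    otf = np.fft.rfft(np.fft.ifftshift(psf))
    mtf = np.abs(otf) / np.abs(otf[0])       # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(n, d=dx)         # cycles per degree
    return freqs, mtf
```

Because the point response is non-negative, the resulting curve peaks at 1 for zero frequency and rolls off with spatial frequency; sweeping `fill_factor` or `blur_sigma_deg` shows how each display/optics parameter reshapes the curve.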
Many extended reality systems use controllers, e.g., near-infrared motion trackers or magnetic coil-based hand-tracking devices, for users to interact with virtual objects. These interfaces lack tangible sensation, especially during walking, running, crawling, and manipulating an object. Special devices such as the Teslasuit and omnidirectional treadmills can improve tangible interaction; however, they are inflexible for broader applications, bulky, and expensive. In this study, we developed a configurable multi-modal sensor fusion interface for extended reality applications. The system includes wearable IMU motion sensors, gait classification, gesture tracking, and data-streaming interfaces to AR/VR systems. This system has several advantages. First, it is reconfigurable for multiple dynamic tangible interactions such as walking, running, crawling, and operating an actual physical object without any controllers. Second, it fuses multi-modal sensor data from the IMU with sensors on the AR/VR headset, such as floor detection. Third, it is more affordable than many existing solutions. We have prototyped tangible extended reality in several applications, including walk-around preflight checks of a medical helicopter, firefighter search-and-rescue training, and tool tracking for airway intubation training with haptic interaction with a physical mannequin.
Design education places special requirements on virtual studio setups. Zoom fatigue during the worldwide COVID-19 pandemic led our institution to test new tools that can help improve digital design education by supporting features such as idea sharing, collaborative making, serendipitous discussion, and group forming. For this purpose, we tested different tools during our Master programme Innovation Design Engineering and collected student feedback. We found that immersion is a key factor impacting the effectiveness of group work in distance learning. This paper presents our applications and analysis of different platforms and contributes insights on how to build a virtual studio environment for an interdisciplinary master programme in design engineering. In this work, we focus on our studies of Gather.town, Meta Horizon Workrooms, and Spatial.
Since 2016, the RCA Grand Challenge has been an interdisciplinary event that runs each year across the entire School of Design at the Royal College of Art. Around 400 Master's and PhD students with design backgrounds participate in this event, focusing on tackling key global challenges through collaboration. In 2022, the Grand Challenge focused on topics in the context of ocean-driven activities. In this context, a VR workshop was conducted with around 40 students, most of whom had no background in using VR platforms. In the end, after a four-week design sprint, ten Unreal Engine-based projects were presented, a selection of which we discuss here. One of the projects was among the overall winning teams of the Grand Challenge 2022.
Healthcare professionals, like any other community, can exhibit implicit biases. These biases adversely impact patients’ health outcomes. Promoting awareness of both social determinants of health (SDH) and the impact of implicit and explicit biases helps healthcare professionals understand their patients well and improve care experiences. In addition, it helps foster long-lasting empathy and compassion in healthcare professionals towards patients during care treatments while maintaining better healthcare professional-patient relationships. Thus, this research provides Computer-Supported Expert-Guided Experiential Learning (CSEGEL) tools, or mobile applications, that offer healthcare professionals a first-person learning experience to build advanced healthcare skills (e.g., professional communication, cultural humility, and awareness of both SDH and the impact of biases on health outcomes). The CSEGEL tools, in the form of mobile applications, incorporate virtual reality-based serious role-playing scenarios along with a novel Life Course module to deliver first-person experiential learning that augments the advanced healthcare skills of healthcare professionals and public awareness. Finally, a preliminary data analysis is provided to demonstrate the positive influence of the CSEGEL tools and to estimate the sample size required for concrete evidence of effective results.
This project presents a virtual reality (VR) Interactive Narrative aiming to leave users reflecting on the perspectives one chooses to view life through. The narrative is driven by interactions designed using the concept of procedural rhetoric, which explores how rules and mechanics in games can persuade people about an idea, and Shin’s cognitive model, which presents a dynamic view of immersion in VR. The persuasive nature of procedural rhetoric, combined with immersion techniques such as tangible interfaces and first-person elements of VR, can effectively immerse users in a compelling narrative experience with an intended emotional response. The narrative is experienced through a young woman in a state between life and death, who wakes up as her subconscious self in a limbo-like world consisting of core memories from her life; the user is tasked with taking photos of the protagonist’s memories for her to come back to life. Users primarily interact with and are integrated into the narrative through a photography mechanic: they have the agency to select “perspective” filters to apply to the woman’s camera through which to view a core memory, ultimately choosing which perspectives of her memories become permanent when she comes back to life.
This paper analyses the use of Immersive Experiences (IX) within artistic research, as an interdisciplinary environment spanning artistic practice-based research, visual pedagogies, and the social and cognitive sciences. It examines IX in the context of shared social spaces and presents the Immersive Lab University of Malta (ILUM) interdisciplinary research project. ILUM has a dedicated room, located at the Department of Digital Arts, Faculty of Media & Knowledge Sciences, University of Malta, appropriately set up with life-size surround projection and surround sound so as to provide a number of viewers (located within the set-up) with an IX virtual reality environment.
The immersive Virtual Reality (VR) environment is transforming the way people learn, work, and communicate. While traditional modeling tools such as Blender and AutoCAD are commonly used for industrial design today, handling 3D objects on a two-dimensional screen with a keyboard and mouse is challenging. In this work, we introduce a VR modeling system named The Virtual Workshop that supports the design and manipulation of various 3D objects in virtual environments. The system was developed for the Oculus Rift platform, allowing a user to efficiently and precisely design 3D objects directly with both hands. A user-friendly GUI lets the user create new 3D objects from scratch using premade “basic objects” or import an existing 3D object made in other applications. Finished 3D models are stored in the standard OBJ file format and can later be exported for the development of 3D scenarios in other applications such as the Unity engine. From concept to design, the VR modeling system provides an open platform where designers and clients can share their ideas and interactively refine the rendered virtual models in collaboration.
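As a concrete illustration of the export path mentioned in this abstract, a Wavefront OBJ file is plain text: `v` lines list vertex coordinates and `f` lines reference them by 1-based index. The minimal writer below is a hypothetical sketch of such an exporter, not the system's actual code; all names are illustrative.

```python
import os
import tempfile

def write_obj(path, vertices, faces):
    """Write a triangle mesh as a Wavefront OBJ file.

    vertices: list of (x, y, z) tuples
    faces:    list of 0-based index triples (OBJ indices are 1-based)
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for tri in faces:
            f.write("f " + " ".join(str(i + 1) for i in tri) + "\n")

# demo: export a single triangle to a temporary file
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
out_path = os.path.join(tempfile.gettempdir(), "triangle.obj")
write_obj(out_path, verts, tris)
```

A file written this way imports directly into Unity, Blender, or any other OBJ-aware tool, which is what makes the format a convenient interchange point between a VR modeling session and downstream scenario development.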