Aviation Maintenance Technicians (AMTs) play a critical role in ensuring the safety, reliability, and readiness of aviation operations worldwide. Per Federal Aviation Administration (FAA) regulations, certified AMTs must document mechanic-related experience to maintain their certification. Currently, aviation maintenance training methods are centered on classroom instruction, printed manuals, videos, and on-the-job training. Given the constantly evolving digital landscape, there is an opportunity to modernize the way AMTs are trained, remain current, and conduct on-the-job training. This research explores the implementation of Virtual Reality (VR) platforms as a method for enhancing the aviation training experience in the areas of aircraft maintenance and sustainability. One outcome of this research is the creation of a virtual training classroom module for aircraft maintenance, built on Mozilla Hubs, a web-based, open-source, immersive platform. While there is a general belief that VR enhances learning, few controlled experiments have been conducted to show that this is the case. The goal of this research is to add to the general knowledge on the use of VR for training, and specifically for aircraft maintenance training.
This study evaluates user experiences in Virtual Reality (VR) and Mixed Reality (MR) systems during task-based interactions. Three experimental conditions were examined: MR (Physical), MR (CG), and VR. Subjective and objective indices were used to assess user performance and experience. The results demonstrated significant differences among conditions, with MR (Physical) consistently outperforming MR (CG) and VR in various aspects. Eye-tracking data revealed that users spent less time observing physical objects in the MR (Physical) condition, primarily relying on virtual objects for task guidance. Conversely, in conditions with a higher proportion of CG content, users spent more time observing objects but reported increased discomfort due to delays. These findings suggest that the ratio of virtual to physical objects significantly impacts user experience and performance. This research provides valuable insights into improving user experiences in VR and MR systems, with potential applications across various domains.
In August 2023, a series of co-design workshops was run in a collaboration between Kyushu University (Fukuoka, Japan), the Royal College of Art (London, UK), and Imperial College London (London, UK). In these workshops, participants were asked to create a number of drawings visualising avian-human interaction scenarios. Each set of drawings depicted a specific interaction the participant had with an avian species in three different contexts: the interaction from the participant's perspective, the interaction from the bird's perspective, and how the participant hopes the interaction will be embodied in 50 years' time. The main purpose of this exercise was to co-imagine a utopian future with more positive interspecies relations between humans and birds. Based on these drawings, we have created a number of visualisations presenting those perspectives in Virtual Reality. This development allows viewers to visualise participants' perspective shifts through the subject matter depicted in their workshop drawings, enabling observation of the relationship between humans and non-humans (here: avian species). This research tests the hypothesis that participants perceive Virtual Reality as furthering their feelings of immersion in the workshop topic of human-avian relationships. This demonstrates the potential of XR technologies as a medium for building empathy towards non-human species, providing foundational justification for this body of work to progress into employing XR as a medium for perspective shifts.
Virtual and augmented reality technologies have advanced significantly and come down in price over the last few years. These technologies provide a powerful tool for highly interactive visualization of a variety of data types. However, setting up and managing a virtual and augmented reality laboratory can be quite involved, particularly with large-screen display systems. Thus, this keynote presentation will outline some of the key elements that make this more manageable by discussing the frameworks and components needed to integrate the hardware and software into a cohesive package. Examples of visualizations and their applications from a variety of disciplines will be discussed to illustrate the versatility of the virtual and augmented reality environments available in the laboratories, which faculty and students can use to perform their research.
Pixels per degree (PPD) alone is not a reliable predictor of high-resolution experience in VR and AR. This is because "high-resolution experience" depends not only on PPD but also on display fill factor, pixel arrangement, graphics rendering, and other factors. This complicates architecture decisions and design comparisons. Is there a simple way to capture all the contributors and match user experience? In this paper, we present a system-level model, the system MTF, to predict perceptual quality considering all the key VR/AR dimensions: pixel shape (display), pixels per degree (display), fill factor (display), optical blur (optics), and image processing (graphics pipeline). The metric can be defined in much the same way as the traditional MTF for imaging systems: by examining the image formation of a point source and then performing a Fourier transform of the response function, with special mathematical treatment. One application is presented on perceived text quality, where two weight functions depending on text orientation and frequency are incorporated into the above model. A perceptual study of text quality was performed to validate the system MTF model.
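As a rough illustration of this definition, the sketch below estimates an MTF-like curve as the normalized Fourier-transform magnitude of a simulated point-source response, modeling only the pixel aperture (fill factor) and an isotropic optical blur. The Gaussian blur model, kernel sizes, and the omission of rendering effects and the paper's special mathematical treatment are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch: system MTF as the magnitude of the Fourier transform of a
# simulated point-source response (pixel aperture convolved with optical blur).

def gaussian_psf(size, sigma):
    """Isotropic Gaussian approximation of optical blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def pixel_aperture(size, aperture_px):
    """Square pixel aperture; aperture_px models the lit fraction (fill factor)."""
    ap = np.zeros((size, size))
    lo = size // 2 - aperture_px // 2
    ap[lo:lo + aperture_px, lo:lo + aperture_px] = 1.0
    return ap / ap.sum()

def system_mtf(size=256, sigma=2.0, aperture_px=3):
    # Point-source response = pixel aperture (x) optical blur (circular convolution via FFT).
    response = np.real(np.fft.ifft2(
        np.fft.fft2(pixel_aperture(size, aperture_px)) *
        np.fft.fft2(gaussian_psf(size, sigma))
    ))
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(response)))
    return mtf / mtf.max()  # normalize so MTF(0) = 1

mtf = system_mtf()
print(mtf.shape, mtf.max())
```

A smaller aperture (lower fill factor) or a larger blur sigma visibly depresses the mid-frequency response, which is the kind of contributor-by-contributor comparison the system MTF is intended to capture.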
Many extended reality systems use controllers, e.g., near-infrared motion trackers or magnetic coil-based hand-tracking devices, for users to interact with virtual objects. These interfaces lack tangible sensation, especially during walking, running, crawling, and manipulating an object. Special devices such as the Tesla suit and omnidirectional treadmills can improve tangible interaction. However, they are bulky, expensive, and not flexible enough for broader applications. In this study, we developed a configurable multi-modal sensor fusion interface for extended reality applications. The system includes wearable IMU motion sensors, gait classification, gesture tracking, and data streaming interfaces to AR/VR systems. This system has several advantages. First, it is reconfigurable for multiple dynamic tangible interactions such as walking, running, crawling, and operating with an actual physical object without any controllers. Second, it fuses multi-modal sensor data from the IMU and sensors on the AR/VR headset, such as floor detection. Third, it is more affordable than many existing solutions. We have prototyped tangible extended reality in several applications, including medical helicopter preflight walk-around checkups, firefighter search and rescue training, and tool tracking for airway intubation training with haptic interaction with a physical mannequin.
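To make the pipeline concrete, the sketch below shows one way such a system could be structured: buffer IMU samples, run a simple variance-threshold gait classifier, and stream the result to an AR/VR host over UDP. The field names, thresholds, and transport are illustrative assumptions and not the authors' implementation.

```python
import json
import socket
import statistics
from collections import deque
from dataclasses import dataclass, asdict

@dataclass
class ImuSample:
    t: float        # timestamp (s)
    accel: tuple    # (ax, ay, az) in m/s^2
    gyro: tuple     # (gx, gy, gz) in rad/s

class GaitClassifier:
    """Crude gait classification from the variance of acceleration magnitude."""
    def __init__(self, window=50):
        self.buf = deque(maxlen=window)

    def update(self, sample: ImuSample) -> str:
        mag = sum(a * a for a in sample.accel) ** 0.5
        self.buf.append(mag)
        if len(self.buf) < self.buf.maxlen:
            return "unknown"
        var = statistics.pvariance(self.buf)
        if var < 0.05:      # thresholds are placeholder values
            return "standing"
        elif var < 2.0:
            return "walking"
        return "running"

def stream_state(sock, host, port, sample, gait):
    # Send the latest sample plus its gait label to the AR/VR host as JSON over UDP.
    msg = {"gait": gait, **asdict(sample)}
    sock.sendto(json.dumps(msg).encode(), (host, port))

# Example use (host/port are placeholders):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# clf = GaitClassifier()
# sample = ImuSample(t=0.0, accel=(0.1, 9.8, 0.2), gyro=(0.0, 0.0, 0.0))
# stream_state(sock, "192.168.0.10", 9000, sample, clf.update(sample))
```

In a real deployment the gait label would be fused with headset-side signals (e.g., floor detection) before driving the virtual avatar, but the buffering-classify-stream structure stays the same.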
Design education has special requirements for virtual studio setups. Zoom fatigue during the worldwide COVID pandemic led our institution to test new tools that can help improve digital design education by supporting features like idea sharing, collaborative making, serendipitous discussion, and group forming. For this purpose, we tested different tools during our Master's programme Innovation Design Engineering and collected student feedback. We found that immersion is a key factor impacting the effectiveness of group work in distance learning. This paper presents our applications and analysis of different platforms and contributes insights on how to build a virtual studio environment for an interdisciplinary master's programme in design engineering. In this work, we focus on our studies of Gather.town, Meta Horizon Workrooms, and Spatial.
Since 2016, the RCA Grand Challenge has been an interdisciplinary event that runs each year across the entire School of Design at the Royal College of Art. Around 400 Master's and PhD students with a design background participate in this event, focusing on tackling key global challenges through collaboration. In 2022, the Grand Challenge focused on topics in the context of ocean-driven activities. In this context, a VR workshop was conducted with around 40 students, most of whom had no background in using VR platforms. In the end, after a 4-week design sprint, 10 Unreal Engine-based projects were presented, of which we discuss a selection here. One of the projects was among the overall winning teams of the Grand Challenge 2022.
Healthcare professionals, like any other community, can exhibit implicit biases. These biases adversely impact patients' health outcomes. Promoting awareness of both social determinants of health (SDH) and the impact of implicit and explicit biases helps healthcare professionals understand their patients better and improve care experiences. It also helps foster long-lasting empathy and compassion in healthcare professionals towards patients while maintaining better healthcare professional-patient relationships. Thus, this research provides Computer-Supported Expert-Guided Experiential Learning (CSEGEL) tools, in the form of mobile applications, that give healthcare professionals a first-person learning experience to augment advanced healthcare skills (e.g., professional communication, cultural humility, and awareness of both SDH and the impact of biases on health outcomes). The CSEGEL tools incorporate virtual reality-based serious role-playing scenarios along with a novel Life Course module to deliver first-person experiential learning that augments the advanced healthcare skills of healthcare professionals and raises public awareness. Finally, a preliminary data analysis is provided to demonstrate the positive influence of the CSEGEL tools and to estimate the sample size required for conclusive evidence of their effectiveness.
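The abstract does not specify how the required sample size is estimated; a common approach is an a priori power analysis, sketched below for a two-group comparison. The effect size, significance level, and target power are purely illustrative assumptions, not values reported by the authors.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative a priori power analysis (not the authors' method): per-group
# sample size needed to detect a medium effect (Cohen's d = 0.5) with 80%
# power at a 5% significance level in a two-sided, two-group comparison.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                    alternative='two-sided')
print(f"Required participants per group: {n_per_group:.0f}")  # ~64
```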
This project presents a virtual reality (VR) interactive narrative that aims to leave users reflecting on the perspectives one chooses to view life through. The narrative is driven by interactions designed using the concept of procedural rhetoric, which explores how rules and mechanics in games can persuade people about an idea, and Shin's cognitive model, which presents a dynamic view of immersion in VR. The persuasive nature of procedural rhetoric, combined with immersion techniques such as tangible interfaces and the first-person elements of VR, can effectively immerse users in a compelling narrative experience with an intended emotional response. The narrative is experienced through a young woman in a state between life and death, who wakes up as her subconscious self in a limbo-like world consisting of core memories from her life; the user is tasked with taking photos of the protagonist's memories so that she can come back to life. Users primarily interact with and are integrated into the narrative through a photography mechanic: they have the agency to select "perspective" filters to apply to the woman's camera, through which a core memory is viewed, ultimately choosing which perspectives of her memories become permanent when she comes back to life.