In 2024, we brought London closer to the ocean: nearly 700 design students at the Royal College of Art participated in the Grand Challenge 2023/24. In teams of five, the students were tasked with tackling challenges around London and the Ocean from a design perspective. The challenge drew on multiple methodologies, including design engineering, speculative design, service design, and materials- and fashion-related approaches. Each team had one month to develop a compelling proposal. Fifty of the participating students joined the Extended Reality (XR) Stream: ten teams of five students each developed design solutions using Unreal Engine 5. This paper presents how Unreal Engine 5 was introduced to students through lectures and hands-on sessions, how XR technologies were employed to creatively interpret the original brief of the Grand Challenge, and how they inspired our students to develop unique design propositions. In particular, we discuss two case studies in detail: XRiver and SuDScape. These two student projects were among the top 13 teams exhibited at the final Grand Challenge show, offering insights and recommendations for incorporating XR into design education.
The NEMO project (New Economic Model for the Oceans) tackles ocean-related issues from a design perspective. The initial NEMO project is based on experimental data gathering using 4K cameras to capture objects along a 6,070-nautical-mile ship voyage from Kangerlussuaq (Greenland) to Poole (UK) via Sardinia (Italy). This data was combined with recorded GPS coordinates to visualize the journey in the Cesium.js environment; several objects were identified and mapped to the corresponding locations along the ship's trajectory. In this work, we use this initial web visualization as a starting point to create a first prototype of an Unreal Engine-based immersive environment that can be explored within the new RCA SNAP Visualization Lab facility. Building on a previous co-design workshop run locally at the RNLI headquarters in Poole, we began exploring opportunities to enable similar experiences in an immersive 3D environment. The ultimate aim is to enable collaborative data visualization and exploration in the context of co-design workshops. In this way, we combine Macro-Meso-Micro perspectives to provide, on the one hand, a holistic overview of the journey and, on the other, place-based exploration of ocean- and coast-related scenarios.
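To make the trajectory-mapping step concrete, the following is a minimal Python sketch of converting timestamped GPS fixes into a CZML track that Cesium.js can load; the fix list, epoch, and file names are illustrative assumptions, not the project's actual data.

```python
import json

# Hypothetical input: timestamped GPS fixes recorded along the voyage.
# Each entry is (seconds_since_epoch, longitude_deg, latitude_deg).
fixes = [
    (0, -50.72, 66.97),     # near Kangerlussuaq (illustrative coordinates)
    (3600, -50.10, 66.50),
    # ... remaining fixes ...
]

EPOCH = "2023-07-01T00:00:00Z"  # assumed departure time

# CZML: a document packet followed by one packet describing the ship track.
czml = [
    {"id": "document", "name": "NEMO voyage", "version": "1.0"},
    {
        "id": "ship-track",
        "position": {
            "epoch": EPOCH,
            # Flat list: time offset (s), lon, lat, height (m), repeated.
            "cartographicDegrees": [
                v for t, lon, lat in fixes for v in (t, lon, lat, 0.0)
            ],
        },
        "path": {"width": 2},  # draw the trajectory as a polyline
    },
]

with open("voyage.czml", "w") as f:
    json.dump(czml, f, indent=2)
```

The resulting file can then be loaded in a Cesium viewer with Cesium.CzmlDataSource.load("voyage.czml").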
Virtual Reality (VR) Head-Mounted Displays (HMDs), also known as VR headsets, are powerful devices that mediate interaction between people and a computer-generated virtual 3D world. For an immersive VR experience, realistic facial animation of the participant is crucial. However, facial expression tracking has been one of the major challenges of facial animation. Existing face-tracking methods often rely on a statistical model of the entire face, which is not feasible because occlusions arising from HMDs are inevitable. In this paper, we provide an overview of the current state of VR facial expression tracking and discuss bottlenecks for VR expression re-targeting. We introduce a baseline method for expression tracking from single-view, partially occluded facial infrared (IR) images, captured by the HP Reverb G2 VR headset camera. Experiments show good visual prediction results for mouth-region expressions from a single person.
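As an illustration of what such a baseline could look like, here is a minimal, generic sketch (not the paper's actual network) that regresses mouth-region blendshape weights from a single partially occluded IR crop; the crop size and blendshape count are assumptions.

```python
import torch
import torch.nn as nn

N_BLENDSHAPES = 24  # assumed size of the mouth expression basis

class MouthExpressionNet(nn.Module):
    """Generic baseline: single-channel IR crop -> blendshape weights."""
    def __init__(self, n_out: int = N_BLENDSHAPES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),  # IR is 1-channel
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, n_out),
            nn.Sigmoid(),  # blendshape weights constrained to [0, 1]
        )

    def forward(self, ir_crop: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(ir_crop))

# Example: one 64x64 mouth crop from the headset's lower-face IR camera.
weights = MouthExpressionNet()(torch.rand(1, 1, 64, 64))  # -> shape (1, 24)
```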
In the early phases of the pandemic lockdown, our team was eager to share our collection in new ways. Using an existing 3D asset and advancements in AR technology, we were able to augment a 3D model of a collection object with the voice of a curator to add context and value. This experience leveraged the unique capabilities of the USDZ extension of Pixar's open USD format. This paper documents the workflow behind creating an AR experience, as well as other applications of the USD/USDZ format for cultural heritage. It also provides valuable information about developments, limitations, and misconceptions surrounding WebXR, glTF, and USDZ.
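As a sketch of the kind of authoring such a workflow involves, the following uses the Python bindings of Pixar's USD (the pxr module) and the UsdMedia.SpatialAudio schema to pair a 3D asset with a narration track; the file names are hypothetical.

```python
from pxr import Sdf, Usd, UsdMedia

# Minimal sketch, assuming the pxr Python bindings from Pixar's USD
# distribution: pair a scanned collection object with curator narration.
stage = Usd.Stage.CreateNew("narrated_object.usda")

# Reference the existing 3D asset of the collection object (name assumed).
model = stage.DefinePrim("/Object")
model.GetReferences().AddReference("collection_object.usdz")

# Attach the curator's voice as non-spatialized audio that plays once.
audio = UsdMedia.SpatialAudio.Define(stage, "/Object/CuratorVoice")
audio.CreateFilePathAttr(Sdf.AssetPath("curator_narration.mp3"))
audio.CreateAuralModeAttr(UsdMedia.Tokens.nonSpatial)
audio.CreatePlaybackModeAttr(UsdMedia.Tokens.onceFromStart)

stage.SetDefaultPrim(model)
stage.Save()
```

The resulting stage could then be packaged into a .usdz archive with the usdzip tool shipped with USD for delivery to AR Quick Look on iOS.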
Purpose: Virtual Reality (VR) headsets are becoming more and more popular and are now standard attractions in many places such as museums and fairs. Although VR-induced cybersickness and eye strain are well known, as are the associated risk factors, most studies have focused on reducing or assessing this discomfort rather than predicting it. Since the negative experience of a few users can strongly affect the publicity of a product or event, the aim of this study was to develop a simple questionnaire that could help a user rapidly and accurately self-assess the personal risk of experiencing discomfort before using VR. Methods: 224 subjects (age 30.44±2.62 y.o.) participated in the study. The VR experience was 30 minutes long, and four users participated simultaneously in each session. The experience, run on an HTC Vive, consisted of being at the bottom of the ocean and observing the surroundings. Users could see the other participants' avatars, move within a 12 m² area, and interact with the environment. The experience was designed to produce as little discomfort as possible. Participants filled out a questionnaire designed to assess their susceptibility to cybersickness, which included 11 questions about their personal information (age, gender, experience with VR, etc.), binocular vision, need for glasses and use of glasses during the VR session, tendencies to suffer from other conditions (such as motion sickness or migraines), and their level of fatigue before the experiment. The questionnaire also contained three questions through which subjects self-assessed the impact of the session on their level of visual fatigue, headache, and nausea, the sum of which produced the subjective estimate of "VR discomfort" (VRD). A 5-point Likert scale was used for the questions where possible. The data of 29 participants were excluded from the analysis due to incomplete responses. Results: The correlation analysis showed that responses to five questions correlated with the VRD: sex (r = -.19, p = .02, FDR corrected), susceptibility to headaches and migraines (r = -.25, p = .002), susceptibility to motion sickness (r = -.18, p = .02), fatigue or sickness before the session (r = -.26, p < .002), and stereoscopic vision issues (r = .23, p = .004). A linear regression model of the discomfort with these five questions as predictors (F(5, 194) = 9.19, p < 0.001, R² = 0.19) showed that only the level of fatigue (beta = .53, p < .001) reached statistical significance. Conclusion: Even though answers to five questions were found to correlate with VR-induced discomfort, linear regression showed that only one of them (the level of fatigue) proved useful in predicting the level of discomfort. The results suggest that a tool whose purpose is to predict VR-induced discomfort can benefit from a combination of subjective and objective measures.
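A sketch of the described analysis pipeline in Python, assuming a hypothetical questionnaire.csv with one numeric-coded column per questionnaire item and a vrd column for the summed discomfort score:

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Column names are hypothetical; categorical items (e.g. sex) are assumed
# to be numerically coded.
df = pd.read_csv("questionnaire.csv")
predictors = ["sex", "migraine", "motion_sickness", "fatigue", "stereo_issues"]

# Per-predictor Pearson correlation with VRD, FDR-corrected (Benjamini-Hochberg).
pvals = [stats.pearsonr(df[p], df["vrd"])[1] for p in predictors]
rejected, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

# Multiple linear regression: which predictors remain significant jointly?
X = sm.add_constant(df[predictors])
model = sm.OLS(df["vrd"], X).fit()
print(model.summary())  # in the study, only fatigue reached significance
```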
A virtual reality (VR) driving simulation platform has been built for use in addressing multiple research interests. The platform is built on a VR-capable 3D engine (Unity) and provides an immersive driving experience viewed in an HTC Vive head-mounted display (HMD). To test the platform, we designed a virtual driving scenario based on a real tunnel used by Törnros to perform on-road tests [1]. Data from the platform, including driving speed and lateral lane position, was compared with the published on-road tests. The correspondence between the driving simulation and the on-road tests is assessed to demonstrate the platform's suitability as a research tool. In addition, the drivers' eye-movement data, such as the 3D gaze point of regard (POR), will be collected during the test with a Tobii eye tracker integrated in the HMD. This data set will be analyzed offline and examined for correlations with driving behaviors in a future study.
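The correspondence assessment could be summarized along these lines; the per-segment values below are placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Illustrative sketch: mean driving speed per tunnel segment in the
# simulator vs. the published on-road values, summarized with a Pearson
# correlation and a paired t-test.
sim_speed = np.array([78.2, 74.5, 71.9, 69.8])   # km/h, simulator (placeholder)
road_speed = np.array([80.1, 75.0, 73.2, 70.5])  # km/h, on-road (placeholder)

r, p_r = stats.pearsonr(sim_speed, road_speed)
t, p_t = stats.ttest_rel(sim_speed, road_speed)
print(f"correspondence: r = {r:.2f} (p = {p_r:.3f}); paired t-test p = {p_t:.3f}")
```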
This document provides an overview of the 31st Stereoscopic Displays and Applications conference and an introduction to the conference proceedings.
The most common sensor arrangement in 360° panoramic video cameras is a radial design in which a number of sensors look outward like spokes on a wheel, typically spaced at approximately human interocular distance with high overlap. We present a novel method of leveraging small form-factor camera units arranged in stereo pairs and interleaved to achieve a fully panoramic view with fully parallel sensor pairs. This arrangement requires less keystone correction to extract depth information, and the discontinuity between images that must be stitched together is smaller than in the radial design. The primary benefit of this arrangement is the small form factor of the system, with the large number of sensors enabling high resolving power. We highlight the mechanical considerations, system performance, and software capabilities of two manufactured and tested imaging units: one based on Raspberry Pi cameras and a second based on a 16-camera system leveraging 8 pairs of 13-megapixel AR1335 cell-phone sensors. In addition, several variations on the conceptual design were simulated with synthetic projections to compare the stitching difficulty of the rendered scenes.
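To illustrate the interleaved geometry, here is a small sketch that places N stereo pairs around a ring so that both sensors of a pair share one heading (hence fully parallel pairs), in contrast to the radial design where every sensor has its own outward heading; the baseline and ring radius are assumed values:

```python
import math

N_PAIRS = 8          # e.g. 8 pairs -> 16 sensors, as in the AR1335 unit
BASELINE = 0.064     # assumed interocular spacing in meters
RING_RADIUS = 0.05   # assumed ring radius in meters

for k in range(N_PAIRS):
    heading = 2 * math.pi * k / N_PAIRS  # one shared heading per pair
    for side in (-0.5, +0.5):
        # Offset the left/right sensors perpendicular to the shared heading,
        # so both sensors of a pair have identical (parallel) optical axes.
        dx = -math.sin(heading) * side * BASELINE
        dy = math.cos(heading) * side * BASELINE
        x = RING_RADIUS * math.cos(heading) + dx
        y = RING_RADIUS * math.sin(heading) + dy
        print(f"pair {k} {'L' if side < 0 else 'R'}: "
              f"pos=({x:+.3f}, {y:+.3f}) m, heading={math.degrees(heading):.0f}°")
```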
Immersive Virtual Reality (VR) has been shown to work as a non-pharmacological analgesic by inducing cognitive distraction in acute pain patients. Researchers have shown that VR games have the potential to distract patients cognitively and function as a form of pain management therapy. In this paper, we introduce the gameplay and design metaphors of Mobius Floe (MF), an immersive VR pain distraction game for acute and chronic pain patients. MF takes an experimental approach with more engaging game interactivity to improve cognitive distraction for pain relief. In MF, we designed game mechanics around specific pain metaphors and therapeutic elements in immersive VR. We analyze and explain the overall gameplay design principles and each pain metaphor implemented in the game. We believe the design procedures and the way we implemented pain metaphors will inspire design ideas for VR health games and provide potentially useful references for other researchers and game designers.