
Virtual Reality (VR) has recently attracted growing attention in mental health applications due to its ability to immerse users in controlled, interactive environments. This paper presents a VR-Based AI Mental Health Companion, a multimodal system designed to support therapy, mindfulness, and real-time stress detection within immersive VR environments. The system integrates artificial intelligence (AI) techniques, including natural language processing, emotion recognition, and physiological signal analysis, to create personalized mindfulness experiences and interactive meditation coaching. GPT-powered non-player characters (NPCs) are designed with specific therapeutic roles in mind, such as guided mindfulness facilitation, emotional support, and stress-aware conversational therapy. The system employs pose estimation to identify key body points and applies rule-based logic to assess posture accuracy during guided yoga exercises, providing real-time feedback to support correct movement execution. The work also includes biometric integration, such as EEG monitoring, for enhanced emotional sensing. Planned refinements include expanding language support, increasing the diversity of pose datasets, and incorporating feedback from clinical professionals. By combining immersive VR environments, GPT-driven therapeutic NPCs, and real-time posture validation, the proposed VR-Based AI Mental Health Companion demonstrates the potential of AI–VR convergence as a scalable approach to mental health care, with promising applications in stress management and preventative therapy.
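To illustrate how rule-based posture checking over pose-estimation keypoints can work in principle, the sketch below computes a joint angle from three 2D keypoints and compares it against a target range. The keypoint names, pose, target angle, and tolerance are all illustrative assumptions, not details taken from the paper.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to avoid domain errors from floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def check_knee_bend(keypoints, target=90.0, tolerance=15.0):
    """Hypothetical rule: the front knee should bend to roughly 90 degrees.

    keypoints: dict mapping joint names ("hip", "knee", "ankle")
    to (x, y) coordinates from a pose estimator.
    """
    angle = joint_angle(keypoints["hip"], keypoints["knee"], keypoints["ankle"])
    ok = abs(angle - target) <= tolerance
    feedback = "good form" if ok else ("bend deeper" if angle > target else "ease up")
    return ok, angle, feedback
```

A real system would apply one such rule per joint and pose, drawing keypoints from a pose-estimation library each frame and aggregating the per-joint feedback into guidance for the user.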

The use of Virtual Reality (VR) in education is rapidly increasing due to the immersive and interactive environments it provides, which support engineering learning experiences that would otherwise be difficult, hazardous, or resource-intensive to deliver. However, this rapid growth of VR-based educational systems has produced a fragmented literature. This paper presents a scoping review of 32 peer-reviewed studies published between 2012 and 2025 that examine the use of VR in engineering education contexts. Using a structured extraction and descriptive synthesis approach, the review analyzes trends across engineering domains, learning objectives, VR modalities, instructional roles, learner populations, and evaluation methods. The results reveal a strong emphasis on procedural learning objectives, including laboratory skill rehearsal, equipment operation, and safety-oriented training. Despite this focus on procedural tasks, evaluation practices are frequently limited to short-term, perception-based measures, highlighting a structural misalignment between learning objectives and assessment methods. Furthermore, VR is most often deployed as a supplementary instructional tool rather than as a fully integrated component of course design.

This paper outlines a study exploring the potential of implementing a behavior change intervention via virtual reality (VR) to further sustainability communication. A prototype experience was created and tested utilizing the distinctive possibilities of VR, in the context of doubling as entertainment for cruise guests. Key challenges included tailoring sustainability information to a specific audience while remaining entertaining, and both exploiting the features and understanding the limitations of VR in this context. The methods used to overcome these barriers should provide valuable insight into the practical application of VR, and understanding the interplay of sustainability communication and the features of VR has the potential to help create powerful tools for fighting climate change. Participants (n = 70) played through an interactive VR story experience of building a ship, choosing between sustainable and unsustainable options. The survey administered after the experience employed both traditional and tailored information-gathering methods. Analysis of this wide range of survey questions revealed avenues for improvement, such as better tutorialization and mitigation of VR sickness, while also demonstrating success in creating an engaging VR sustainability story with a user experience evaluated as good. Indicative success as a behavior change intervention was found, with participants reporting an increase in key determinants of green purchase behavior. As sustainability behavior change applications have previously relied largely on long-term, non-VR approaches, the results of this novel multidisciplinary study should prove meaningful.

Applying a Value Engineering (VE) approach to student-throughput bottlenecks caused by equipment and space capacity issues in university machine shop learning, Virtual Reality (VR) presents an opportunity to provide a scalable, customizable, and cost-effective means of easing these constraints. An experimental method is proposed to demonstrate applying VR to increase the output of the value function of an educational system. This method seeks to yield a high Transfer-Effectiveness-Ratio (TER), such that traditional educational strategies are supplemented by VR sufficiently to enable further growth in classroom enrollment.
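As a point of reference for the TER metric, a common formulation (in the style of Roscoe's transfer effectiveness ratio) is the hands-on training time saved per unit of time spent in the simulator. The sketch below shows that arithmetic; the variable names and the specific formulation are assumptions for illustration, not necessarily the exact definition used in the paper.

```python
def transfer_effectiveness_ratio(control_time, experimental_time, vr_time):
    """Hours of hands-on training saved per hour of VR practice.

    control_time:      time to reach proficiency with no VR pre-training
    experimental_time: remaining hands-on time needed after VR pre-training
    vr_time:           time spent in the VR trainer
    """
    return (control_time - experimental_time) / vr_time

# Example: 10 h to proficiency without VR, 6 h after 2 h of VR practice.
# Each VR hour replaced (10 - 6) / 2 = 2 hands-on hours, so TER = 2.0.
```

A TER above 1.0 means each hour in VR displaces more than an hour of shop time, which is the condition under which VR can meaningfully ease equipment and space bottlenecks.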

Virtual reality (VR) has increasingly become a popular tool in education and is often compared with traditional teaching methods for its potential to improve learning experiences. However, VR encompasses a wide range of experiences and immersion levels, from less interactive environments to fully interactive, immersive systems. This study explores the impact of different levels of immersion and interaction within VR on learning outcomes. The project, titled Eureka, focuses on teaching the froth flotation process in mining by comparing two VR modalities: a low-interaction environment that presents information through text and visuals without user engagement, and an immersive, high-interaction environment where users can actively engage with the content. The purpose of this research is to investigate how these varying degrees of immersion affect user performance, engagement, and learning outcomes. The results of a user study involving 12 participants revealed that the high-interaction modality significantly improved task efficiency, with participants completing tasks faster than those in the low-interaction modality. Both modalities were similarly effective in conveying knowledge, as evidenced by comparable assessment scores. However, qualitative feedback highlighted design considerations, such as diverse user preferences for navigation and instructional methods. These findings suggest that, while interactive immersion can improve efficiency, effective VR educational tools must accommodate diverse learning styles and needs. Future work will focus on scaling participant diversity and refining VR design features.

Virtual Reality technologies are on the rise! Commercial head-mounted display devices have made VR applications affordable and available to a wide range of users. Still, VR companies are far from satisfied with the market penetration of their VR devices and related software. VR companies and VR enthusiasts are waiting for VR to become omnipresent! But what if everything is different? What if VR has already taken over our world – we are just not aware of this fact? In this essay, I will discuss the Trojan Horses of Virtual Reality – those technologies and approaches which started taking over our lives years ago, while we still do not acknowledge the fact. I will argue that Virtual Reality is already part of our daily life and that the still pending takeover of VR technologies will only be the final casing stone on top of the pyramid.

Virtual Reality (VR) technology has experienced remarkable growth, steadily establishing itself within mainstream consumer markets. This rapid expansion presents exciting opportunities for innovation and application across various fields, from entertainment to education and beyond. However, it also underscores a pressing need for more comprehensive research into user interfaces and human-computer interaction within VR environments. Understanding how users engage with VR systems and how their experiences can be optimized is crucial for further advancing the field and unlocking its full potential. This project introduces ScryVR, an innovative infrastructure designed to simplify and accelerate the development, implementation, and management of user studies in VR. By providing researchers with a robust framework for conducting studies, ScryVR aims to reduce the technical barriers often associated with VR research, such as complex data collection, hardware compatibility, and system integration challenges. Its goal is to empower researchers to focus more on study design and analysis, minimizing the time spent troubleshooting technical issues. By addressing these challenges, ScryVR has the potential to become a pivotal tool for advancing VR research methodologies. Its continued refinement will enable researchers to conduct more reliable and scalable studies, leading to deeper insights into user behavior and interaction within virtual environments. This, in turn, will drive the development of more immersive, intuitive, and impactful VR experiences.

Emergency response and active shooter training drills and exercises are necessary preparation for emergencies, since we cannot predict when they will occur. There has been progress in understanding human behavior, unpredictability, human motion synthesis, crowd dynamics, and their relationships with active shooter events, but challenges remain. With continuing advancements in technology, virtual reality (VR) based training incorporates real-life experience that creates a “sense of presence” in the environment and becomes a viable alternative to traditional training. This paper presents a collaborative virtual reality environment (CVE) module for performing active shooter training drills using immersive and non-immersive environments. The collaborative immersive environment is implemented in Unity 3D and is based on run, hide, and fight modes for emergency response. We present two ways of modeling user behavior. First, rules for AI agents, or NPCs (non-player characters), are defined. Second, controls are provided for user-controlled agents, or PCs (player characters), to navigate the VR environment as autonomous agents with a keyboard/joystick or with an immersive VR headset. Users can enter the CVE as user-controlled agents and respond to emergencies such as active shooter events, bomb blasts, fire, and smoke. A user study was conducted to evaluate the effectiveness of our CVE module for active shooter response training and decision-making using the Group Environment Questionnaire (GEQ), Presence Questionnaire (PQ), System Usability Scale (SUS), and Technology Acceptance Model (TAM) Questionnaire. The results show that the majority of users agreed that their sense of presence, intrinsic motivation, and self-efficacy increased when using the immersive emergency response training module for an active shooter evacuation environment.
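To give a flavor of how rule-based NPC behavior for run/hide/fight training might be expressed, the sketch below selects a response mode from a few simple predicates. The specific rules, thresholds, and inputs are hypothetical illustrations, not the actual rule set used in the paper's Unity 3D implementation.

```python
def choose_response(distance_to_threat, exit_visible, hiding_spot_nearby):
    """Illustrative run/hide/fight rule for an NPC in an active shooter drill.

    distance_to_threat: estimated distance (meters) to the threat
    exit_visible:       whether an escape route is currently visible
    hiding_spot_nearby: whether a usable hiding place is within reach
    """
    # Prefer evacuation when an exit is visible and the threat is far enough away.
    if exit_visible and distance_to_threat > 15.0:
        return "run"
    # Otherwise hide if cover is available.
    if hiding_spot_nearby:
        return "hide"
    # Last resort: confront the threat.
    return "fight"
```

In a game engine such as Unity, rules like these would typically sit inside an agent's update loop or a behavior tree, with the predicates fed by perception queries against the scene.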

This research presents a novel post-processing method for convolutional neural networks (CNNs) in character recognition, specifically designed to handle inconsistencies and irregularities in character shapes. CNNs are powerful tools for recognizing and learning character shapes directly from source images, making them well-suited for recognizing characters whose shapes contain inconsistencies. However, when applied to multi-object detection for character recognition, CNNs require post-processing to convert the recognized characters into code sequences, which has so far limited their applicability. The developed method solves this problem by directly post-processing the inconsistent characters identified by the CNN model into labels corresponding to the source image. An experiment with real pharmaceutical packaging images demonstrates the functionality of the method, showing that it can handle different numbers of characters and labels effectively. As a scientific contribution to the fields of imaging and deep learning, this research opens new possibilities for future studies, particularly in the development of more accurate and efficient multi-object character recognition with post-processing and its application to new areas.
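One common form of such post-processing is turning a detector's unordered per-character outputs into a readable string by grouping detections into text rows and sorting each row left to right. The sketch below shows that idea under assumed inputs (one `(x, y, label)` tuple per detected character); it is a generic illustration, not the paper's specific method.

```python
def detections_to_string(detections, row_tol=10):
    """Convert unordered character detections into a text sequence.

    detections: list of (x, y, label) tuples, one per recognized character,
                where (x, y) is the character's position in the image.
    row_tol:    vertical distance within which detections are treated
                as belonging to the same text row.
    """
    rows = {}
    for x, y, label in detections:
        # Bucket detections into rows by quantized vertical position.
        key = round(y / row_tol)
        rows.setdefault(key, []).append((x, label))
    lines = []
    for key in sorted(rows):  # top-to-bottom
        # Within a row, read characters left to right.
        lines.append("".join(lbl for _, lbl in sorted(rows[key])))
    return "\n".join(lines)
```

For example, three detections at the same height with labels 'C', 'A', 'B' at x = 30, 10, 20 would be read back as "ABC". Real packaging images additionally require handling rotated text, merged or split detections, and varying character counts, which is where methods beyond this simple sort become necessary.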

Sublimity has long been a theme in aesthetics, as one of the emotions humans experience when perceiving vastness, terror, or ambiguity. This study investigated the visual conditions that evoke sublimity using virtual reality (VR). Participants observed sublime content, developed based on previous research, under two factorial conditions: (1) wide or narrow field of view (FOV), and (2) 2D or 3D video presentation. We collected psycho-physiological evaluations from the participants. The results demonstrated that a wider FOV enhanced the perception of sublimity and pleasantness. In particular, gaze fixation time tended to increase under the wider FOV and 3D presentation conditions, supporting the effect of sublimity in VR. This suggests the potential of VR as a valuable tool for amplifying the experience of sublimity.