Pages A13-1 - A13-5,  © Society for Imaging Science and Technology 2020
Digital Library: EI
Published Online: January  2020
Pages 223-1 - 223-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

During active shooter events or emergencies, the ability of security personnel to respond appropriately is driven by pre-existing knowledge and skills, but also depends on their state of mind and familiarity with similar scenarios. Human behavior becomes unpredictable when decisions must be made in emergency situations, and the cost and risk of measuring these behavioral characteristics in real emergencies are very high. This paper presents an immersive collaborative virtual reality (VR) environment for performing virtual building evacuation drills and active shooter training scenarios using Oculus Rift head-mounted displays. The collaborative immersive environment is implemented in Unity 3D and is based on the run, hide, and fight modes of emergency response. The immersive collaborative VR environment also offers a unique method of training for campus safety emergencies. Participants can enter the collaborative VR environment, set up on the cloud, and take part in the active shooter response training, which leads to considerable cost advantages over large-scale real-life exercises. A presence questionnaire in the user study was used to evaluate the effectiveness of our immersive training module. The results show that a majority of users agreed that their sense of presence was increased when using the immersive emergency response training module for a building evacuation environment.

Digital Library: EI
Published Online: January  2020
Pages 224-1 - 224-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

During emergencies, communication in multi-level built environments becomes challenging because architectural complexity can create problems with the visual and mental representation of 3D space. Our HoloLens application gives a visual representation of a campus building in 3D space, allowing people to see where the exits are and creating alerts for anomalous events requiring emergency response, such as an active shooter, fire, or smoke. It also computes paths to the various exits: the shortest path to each exit as well as directions to a safe zone from the user's current position. The augmented reality (AR) application was developed in Unity 3D for Microsoft HoloLens and is also deployed on tablets and smartphones. It uses a fast and robust marker detection technique built on the Vuforia AR library. Our aim is to enhance the evacuation process by ensuring that all building patrons know all of the building exits and how to get to them, which improves evacuation time and reduces the injuries and fatalities that occur during indoor crises such as building fires and active shooter events. We have incorporated existing permanent features in the building as markers that trigger the AR application to display the floor plan and the person's location in the building. This work also describes the system architecture as well as the design and implementation of this AR application, which leverages HoloLens for building evacuation purposes. We believe that AR technologies like HoloLens could be adopted for building evacuation strategies during emergencies, as they offer a more enriched experience when navigating large-scale environments.
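The shortest-path routing this abstract describes maps naturally onto a graph search over building waypoints. The sketch below is not the authors' implementation (their application runs in Unity 3D); it is a minimal Python illustration, with a hypothetical floor graph, of how the shortest route to an exit can be computed with Dijkstra's algorithm:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted waypoint graph.
    graph: {node: [(neighbor, distance_m), ...]}"""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Hypothetical floor graph: corridor waypoints, distances in meters
floor = {
    "room_210": [("hall_A", 5.0)],
    "hall_A": [("room_210", 5.0), ("hall_B", 12.0), ("exit_west", 20.0)],
    "hall_B": [("hall_A", 12.0), ("exit_east", 8.0)],
    "exit_west": [],
    "exit_east": [],
}
route, meters = shortest_path(floor, "room_210", "exit_east")
```

On this toy graph the route from `room_210` to `exit_east` passes through both corridors for a total of 25 m; a real deployment would derive the waypoint graph from the building's floor plan.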

Digital Library: EI
Published Online: January  2020
Pages 338-1 - 338-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

In this paper, we present a novel Lidar imaging system for a heads-up display. The imaging system consists of a one-dimensional laser distance sensor and IMU sensors, including an accelerometer and a gyroscope. By fusing the sensory data as the user moves their head, it creates a three-dimensional point cloud that maps the surrounding space. Compared to prevailing 2D and 3D Lidar imaging systems, the proposed system has no moving parts; it is simple, lightweight, and affordable. Our tests show that the horizontal and vertical profile accuracy of the points versus the floor plan is 3 cm on average. For bump detection, the minimum detectable step height is 2.5 cm. The system can be applied to first response scenarios such as firefighting, and to detecting bumps on the pavement for low-vision pedestrians.
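The fusion step this abstract describes, combining a single range reading with the IMU's head orientation to place a point in 3D, reduces to rotating the sensor's forward ray by the measured yaw and pitch. The following is an illustrative Python model, not the authors' code; the axis convention (ray along +x, yaw about z, pitch about y, roll ignored) is an assumption:

```python
import math

def lidar_point(distance_m, yaw_rad, pitch_rad):
    """Map a 1-D range reading to a 3-D point by rotating the
    sensor's forward ray (+x) by the IMU's yaw and pitch.
    Roll is ignored: it does not move a point along the ray."""
    x = distance_m * math.cos(pitch_rad) * math.cos(yaw_rad)
    y = distance_m * math.cos(pitch_rad) * math.sin(yaw_rad)
    z = distance_m * math.sin(pitch_rad)
    return (x, y, z)

# Sweeping the head horizontally accumulates a profile of the room
cloud = [lidar_point(3.0, math.radians(a), 0.0) for a in range(-45, 46, 5)]
```

Accumulating such points while the head sweeps is what builds the map; in practice the IMU orientation would come from fusing the gyroscope and accelerometer readings.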

Digital Library: EI
Published Online: January  2020
Pages 339-1 - 339-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

According to the CDC, over three thousand people die every year from drowning in the United States. Many of these fatalities are preventable with properly trained lifeguards. Traditional lifeguard training relies on videos and mock rescues. While these methods are important, they have shortcomings: videos are static and do not build muscle memory, and mock rescues are labor-intensive and potentially put others in danger. Virtual reality (VR) can be used as an alternative training tool, building muscle memory in a fully controlled and safe environment. With full control over variables such as weather, population, and other distractions, lifeguards can be better equipped to respond to any situation. The single most important aspect of lifeguarding is finding the victim; this head-rotation skill can be practiced and perfected in VR before guards ever get onto the stand. VR also allows guards to practice in uncommon but nevertheless dangerous conditions such as fog and large crowds, and gives the user immediate feedback about performance and where they can improve.

Digital Library: EI
Published Online: January  2020
Pages 340-1 - 340-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

There is demand for displaying images on transparent media: an image superimposed on a window can attract people's interest. However, currently available transparent displays show images only on the display surface, which makes it difficult to superimpose an image on an object behind the screen, so the result is not intuitive to understand. On the other hand, a display that makes the image appear behind the screen must be placed at an incline, which makes it difficult to use for advertising and exhibitions. The purpose of this study is to build a transparent display that shows an image superimposed near an object behind the screen. We propose a display that shows the image not on the display surface but at a different depth. In this system, to make the image appear behind the screen, the display device is reflected in a transparent screen that incorporates a half-mirror array. We use the mirror array as the screen in order to place the display device at a convenient location, and we optimize the surface of the display device and the angles of all mirrors so as to minimize the distortion of the virtual image. To confirm the practicality of the proposed method, we conducted a simulation, which confirmed that the virtual image could be displayed at the designated position.
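The mirror-angle optimization can be illustrated with a much-simplified 2D model: each mirror's normal angle determines where the reflected display ray lands, and the angle is chosen to minimize the distance between the landing point and the designated position. The sketch below is a hypothetical reconstruction in Python, not the paper's method, which optimizes the display surface and all mirror angles jointly:

```python
import math

def reflect(d, theta):
    """Reflect direction vector d about a mirror whose normal
    makes angle theta with the x-axis (2-D model)."""
    n = (math.cos(theta), math.sin(theta))
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

def landing_x(theta, plane_y=-1.0):
    """Where a vertical ray hitting a mirror at the origin lands
    on the plane y = plane_y after reflection."""
    r = reflect((0.0, -1.0), theta)
    if r[1] > -1e-9:          # reflected ray never reaches the plane
        return float("inf")
    t = plane_y / r[1]
    return t * r[0]

def best_angle(target_x, steps=2000):
    """Grid-search the mirror normal angle that minimizes the
    distance between the landing point and the target position."""
    best, err = 0.0, float("inf")
    for i in range(steps):
        theta = math.pi * i / steps   # candidate normals in [0, pi)
        e = abs(landing_x(theta) - target_x)
        if e < err:
            best, err = theta, e
    return best, err
```

In this 2D model the landing point is tan(2θ), so a target at x = 1 is hit when the mirror normal is tilted by π/8 (22.5°); the full 3D problem replaces the grid search with a joint optimization over all mirrors.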

Digital Library: EI
Published Online: January  2020
Pages 360-1 - 360-9,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

Data-driven use scenarios for virtual and augmented reality are increasingly social, multiplayer, and integrated into real-world environments, yet they remain limited player experiences in that each player wears a device that enables their immersion and, in some sense, removes them from the broader physical space and social interactions in which the experience occurs. Our work explores one possibility for overcoming these limitations by integrating the virtual environment with the physical space it occupies through the use of a VR Arena design. We explore the design and development of blended virtual–physical spaces for local multiplayer experiences in which players collaboratively interact with a virtual world created from digital data, and simultaneously perform that data as a soundscape for attendees in a physical space.

Digital Library: EI
Published Online: January  2020
Pages 363-1 - 363-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

This publication reports on a research project in which we set out to explore the advantages and disadvantages of augmented reality (AR) technology for visual data analytics. We developed a prototype of an AR data analytics application that provides users with an interactive 3D interface, hand-gesture-based controls, and multi-user support for a shared experience, enabling multiple people to collaboratively visualize, analyze, and manipulate data with high-dimensional features in 3D space. Our software prototype, called DataCube, runs on the Microsoft HoloLens, one of the first true stand-alone AR headsets, through which users can see computer-generated images overlaid onto real-world objects in their physical environment. Using hand gestures, users can select menu options, control the 3D data visualization with various filtering and visualization functions, and freely arrange the menus and virtual displays in their environment. The shared multi-user experience allows all participating users to see and interact with the virtual environment; changes one user makes become visible to the other users instantly. Because users are not prevented from observing the physical world as they engage together, they can also see non-verbal cues such as the gestures or facial reactions of other users. The main objective of this research project was to find out whether AR interfaces and collaborative analysis can provide an effective solution for data analysis tasks, and our experience with the prototype system confirms this.
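The instant propagation of one user's changes to all participants can be modeled as broadcasting state deltas to every connected client. The classes below are a hypothetical, library-free Python sketch of that idea, not DataCube's actual networking code:

```python
class SharedScene:
    """Toy model of shared AR state: every change a user makes
    is broadcast so all participants see it immediately."""

    def __init__(self):
        self.state = {}        # e.g. {"filter": "dimension_3 > 0.5"}
        self.users = []

    def join(self, user):
        self.users.append(user)
        user.view = dict(self.state)   # late joiners get the current state

    def apply(self, key, value):
        self.state[key] = value
        for u in self.users:           # broadcast the delta to everyone
            u.view[key] = value

class User:
    def __init__(self, name):
        self.name, self.view = name, {}

scene = SharedScene()
alice, bob = User("alice"), User("bob")
scene.join(alice)
scene.join(bob)
scene.apply("filter", "dimension_3 > 0.5")   # alice's filter reaches bob too
```

A real system would ship the deltas over a network transport, but the invariant is the same: every participant's view converges on the shared state after each change.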

Digital Library: EI
Published Online: January  2020
Pages 364-1 - 364-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

With the development of Apple's ARKit and Google's ARCore, mobile augmented reality (AR) applications have become much more popular. For Android devices, ARCore provides basic motion tracking and environmental understanding. However, with current software frameworks it can be difficult to create an AR application from the ground up. Our solution is CalAR, a lightweight, open-source software environment for developing AR applications for Android devices that gives the programmer full control over the phone's resources. With CalAR, the programmer can create markerless AR applications that run at 60 frames per second on Android smartphones. These applications can include more complex environment understanding, physical simulation, user interaction with virtual objects, and interaction between virtual objects and objects in the physical environment. Because CalAR is based on CalVR, our multi-platform virtual reality software engine, CalVR applications can be ported to an AR environment on Android phones with minimal effort. We demonstrate this with the example of a spatial visualization application.

Digital Library: EI
Published Online: January  2020
Pages 380-1 - 380-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 13

Improving drivers' risk prediction ability can significantly reduce accident risk. Existing accident awareness training systems perform poorly due to their lack of immersiveness. In this research, an immersive educational system based on VR technology is proposed for risk prediction training. The system provides the driver with a highly realistic driving experience through 360-degree video viewed in VR goggles. In these near-real driving scenes, users are asked to point out every potentially dangerous scenario across different cases. Afterwards, the system evaluates the users' performance and gives corresponding explanations to help them improve their safety awareness. The results show that the system is more effective than previous systems at improving drivers' risk prediction capability.

Digital Library: EI
Published Online: January  2020
