According to the CDC, more than three thousand people die from drowning in the United States every year. Many of these fatalities are preventable with properly trained lifeguards. Traditional lifeguard training relies on videos and mock rescues. While both methods are valuable, each has shortcomings: videos are static and do not build muscle memory, and mock rescues are labor-intensive and can put participants in danger. Virtual reality (VR) offers an alternative training tool that builds muscle memory in a fully controlled, safe environment. With full control over variables such as weather, crowd density, and other distractions, lifeguards can be better prepared to respond to any situation. The single most important aspect of lifeguarding is locating the victim; this head-rotation scanning skill can be practiced and perfected in VR before guards ever get on the stand. VR also lets guards train under uncommon but nevertheless dangerous conditions such as fog and large crowds, and it provides the user with immediate feedback on performance and areas for improvement.
Although 360° and conventional 2D images look appealing in virtual reality, they fail to convey depth, and with it the sense of presence that depth cues can add to a two-dimensional image. We propose an approach that creates a stereogram from a computer-generated depth map using an approximation algorithm and then uses the resulting stereo pairs to deliver a complete VR experience, including forward and backward navigation driven by mobile sensors. First, the input image is segmented into two images, from which we compute a disparity map and, from that, generate a depth image. From the depth image we then synthesize the stereo pair, i.e. the left-eye and right-eye views. The resulting images are handled by the Cardboard SDK, which provides VR support on Android devices used with a Google Cardboard headset. With the stereo image displayed in the headset, we use the device's accelerometer to track its motion while head-mounted. Unlike VR systems such as the HTC Vive and Oculus, which rely on external sensors for tracking, our approach uses only the device's built-in sensors for motion processing. From the accelerometer readings, the user can move around virtually within the constructed scene: the image displayed in VR changes in response to the viewer's physical movement.
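To make the stereo-pair step concrete, the sketch below shows one common way to synthesize left- and right-eye views from a color image and its depth map by shifting each pixel horizontally in proportion to its depth (depth-image-based rendering). This is a minimal illustration under our own assumptions, not the exact approximation algorithm of the paper: it assumes an 8-bit grayscale depth map where brighter pixels are closer, and `maxDisparity` is an illustrative tuning parameter.

```java
import java.awt.image.BufferedImage;

/**
 * Minimal depth-image-based rendering sketch: shifts each pixel
 * horizontally in proportion to its depth to synthesize a stereo pair.
 * Assumes an 8-bit grayscale depth map (brighter = closer); the
 * maxDisparity parameter is illustrative, not a value from the paper.
 */
public final class StereoPairGenerator {

    public static BufferedImage[] generate(BufferedImage color,
                                           BufferedImage depth,
                                           int maxDisparity) {
        int w = color.getWidth(), h = color.getHeight();
        BufferedImage left  = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        BufferedImage right = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Normalized depth in [0,1] from the 8-bit depth map.
                double d = (depth.getRGB(x, y) & 0xFF) / 255.0;
                // Closer pixels (larger d) receive a larger horizontal shift.
                int shift = (int) Math.round(d * maxDisparity / 2.0);
                int rgb = color.getRGB(x, y);

                int xl = x + shift;   // shift right for the left-eye view
                int xr = x - shift;   // shift left for the right-eye view
                if (xl < w)  left.setRGB(xl, y, rgb);
                if (xr >= 0) right.setRGB(xr, y, rgb);
            }
        }
        // A full implementation would also fill the disocclusion holes
        // this forward warp leaves behind (e.g. by inpainting).
        return new BufferedImage[] { left, right };
    }
}
```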
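Likewise, the following is a minimal Android sketch of the sensor-driven navigation step: it registers for linear-acceleration events and maps acceleration along the assumed gaze axis to a forward or backward move. The threshold value, the axis convention, and the `MoveListener` callback are illustrative assumptions rather than details taken from the paper.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

/**
 * Minimal sketch of accelerometer-driven navigation on Android.
 * Uses TYPE_LINEAR_ACCELERATION (gravity already removed) and a simple
 * threshold to detect forward/backward motion; threshold, axis choice,
 * and the MoveListener callback are illustrative assumptions.
 */
public class MotionNavigator implements SensorEventListener {

    /** Hypothetical callback into the rendering layer. */
    public interface MoveListener {
        void onMove(float forwardDelta);
    }

    private static final float THRESHOLD = 1.5f; // m/s^2, tuning parameter
    private final SensorManager sensorManager;
    private final MoveListener listener;

    public MotionNavigator(Context context, MoveListener listener) {
        this.sensorManager =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        this.listener = listener;
    }

    public void start() {
        Sensor accel =
                sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
        sensorManager.registerListener(this, accel,
                SensorManager.SENSOR_DELAY_GAME);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // We assume motion along the device z-axis approximates movement
        // along the viewer's gaze; the sign and axis depend on how the
        // phone is seated in the Cardboard headset.
        float z = event.values[2];
        if (Math.abs(z) > THRESHOLD) {
            listener.onMove(z > 0 ? +1f : -1f); // +1 forward, -1 backward
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```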