Keywords

AUGMENTED REALITY
BODY GESTURES
CAVE
CYBERSICKNESS
FLYING EXPERIENCE
HANDS-ON EXPERIENCE
HAPTICS
HMD
HUMAN FACTORS
HYBRID REALITY
IMMERSIVE PROJECTION
IMMERSIVE VIRTUAL REALITY
KINECT
KINESTHETIC HAPTIC FEEDBACK
LASER PROJECTORS
LOCOMOTION TECHNIQUES
MEMS SCANNER BASED MOBILE PROJECTOR
MIXED REALITY
NATURAL USER INTERFACE
NAVIGATION METAPHORS
OCULUS RIFT
OLED
PROJECTION BASED AUGMENTED REALITY SYSTEM
REAL-TIME CONCURRENT DISPLAY AND IMAGING
ROLLING SHUTTER IMAGE-SENSOR
SOFT ROBOTICS
STEREOSCOPIC 3D
STEREOSCOPIC CROSSTALK
SURGERY TRAINING
TIME SEQUENTIAL OPERATION
VIRTUAL REALITY
VIRTUAL REALITY INSTRUCTIONAL MODULES (VRI)
VRI GAMING MODULE
WEARABLE HAPTICS
3D INTERACTION
3D USER INTERFACES
Pages 1 - 4,  © Society for Imaging Science and Technology 2017
Digital Library: EI
Published Online: January  2017
Pages 5 - 10,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

Augmented Reality is a widely anticipated platform for user interfaces. AR devices have existed for decades but are for the first time becoming affordable and viable as consumer devices. Through direct representation of 3D space and integration with haptic controls, AR brings many benefits to training scenarios, namely increased knowledge acquisition and direct applicability. We believe these opportunities have not yet been sufficiently explored in medical training. This paper reports on our medical simulation for intubation training, which uses an Oculus Rift DK2 with the Ovrvision Pro stereo camera. Our work shows great potential for augmented reality devices in medical training, but the hardware has yet to mature for widespread use.

Digital Library: EI
Published Online: January  2017
Pages 11 - 18,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

Virtual reality instructional modules are widely recognized in academia because they engage students and motivate them to learn through hands-on experience. For this reason, we have developed virtual reality instructional (VRI) modules for teaching loops and arrays that provide a better understanding of these concepts than a traditional instruction approach. This paper focuses on the development and evaluation of the VRI gaming module, which adds inquiry-based problem-solving activities and hands-on experiences based on gaming and virtual reality. The VRI modules are designed to encourage faculty to teach, and to motivate students to learn, the concepts of loops and arrays using interactive, graphical, game-like examples. The modules act as a supplement to an existing course and enable faculty to explore teaching with a game-theme metaphor. We evaluated the VRI modules in introductory programming courses for computer science students over several semesters. The student survey baseline results demonstrate positive student perceptions of gaming instructional modules for advancing student learning and understanding of the concepts. The evaluation also demonstrates the effectiveness of the instructional modules and the possibility of including them in the existing curriculum with minimal alterations to the established course material.

Digital Library: EI
Published Online: January  2017
Pages 19 - 24,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

Current virtual environments rely heavily on audio and visual channels for sensory feedback [1]. The degree of immersion can be increased by adding synthetic haptic feedback to the user interface. Most existing wearable haptic feedback systems use tactile stimulation from vibration motors, which lacks the compelling sense of immersion that force feedback provides [2][3], e.g., when pressing a button. This research addresses the issue with a hardware architecture for kinesthetic force feedback, focusing on the design of a wearable soft robotic haptic glove for force feedback in virtual environments. The glove provides force feedback to the fingers while clicking a button in a virtual environment. Its design includes a soft exoskeleton actuated by McKibben muscles, which are controlled by a custom fluidic control board [4]. The user's fingers are tracked with infrared cameras, which supply finger position information; based on this information, the soft glove is actuated to provide haptic feedback. The soft exoskeleton and actuation make the glove compliant, compact, and unintimidating compared to force feedback gloves with rigid kinematic linkages. The glove design is inexpensive, mass-manufacturable, and sized to fit 90% of the U.S. population. Users could test the glove by playing the piano in a virtual reality environment. The combination of audio, visual, and haptic feedback makes the environment highly immersive: participants in an informal pilot study indicated that the haptic glove improves the immersive experience, describing it as "like nothing seen before", "mesmerizing", and "amazing".
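A minimal sketch of the kind of control loop this abstract describes, with proportional pneumatic actuation driven by tracked fingertip positions. The interfaces and constants here (read_fingertips, set_pressure, the button travel, the loop rate) are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch of a kinesthetic feedback loop: infrared finger tracking drives
# pneumatic McKibben muscles via a fluidic control board. All names and
# constants below are placeholders assumed for illustration.

import time

BUTTON_FACE_Z_MM = 0.0   # hypothetical z position of the virtual button face
BUTTON_TRAVEL_MM = 4.0   # hypothetical travel before the button bottoms out
LOOP_HZ = 100            # hypothetical control rate

def pressure_fraction(fingertip_z_mm):
    """Map fingertip penetration past the button face to muscle pressure.

    0.0 = muscle vented (no resistance), 1.0 = fully pressurized (hard stop).
    Proportional actuation emulates a spring-loaded button.
    """
    depth = max(0.0, BUTTON_FACE_Z_MM - fingertip_z_mm)
    return min(1.0, depth / BUTTON_TRAVEL_MM)

def control_loop(read_fingertips, set_pressure):
    """read_fingertips() -> list of fingertip z positions (mm), one per
    finger, from the tracking system; set_pressure(finger, fraction) drives
    the fluidic board. Both are injected stand-ins for the real hardware."""
    while True:
        for finger, z in enumerate(read_fingertips()):
            set_pressure(finger, pressure_fraction(z))
        time.sleep(1.0 / LOOP_HZ)

if __name__ == "__main__":
    # One finger pressing 2 mm past the button face gets half pressure.
    print(pressure_fraction(-2.0))   # -> 0.5
```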

Digital Library: EI
Published Online: January  2017
Pages 25 - 30,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

The Destiny-class CyberCANOE is a hybrid-reality environment that provides 20/20 visual acuity in a 13-foot-wide, 320-degree cylindrical structure comprised of tiled, passive-stereo-capable organic light emitting diode (OLED) displays. Hybrid-reality systems such as Destiny, CAVE2, WAVE, and the TourCAVE combine surround-screen virtual reality environments with ultra-high-resolution digital project rooms. They are intended as collaborative environments that enable multiple users to work minimally encumbered, and hence comfortably, for long periods of time in rooms surrounded by data in the form of visualizations that benefit from being displayed at resolutions matching visual acuity and in stereoscopic 3D. Destiny is unique in that it is the first hybrid-reality system to use OLED displays; it uses a real-time software-based approach, rather than a physical optical approach, for minimizing stereoscopic crosstalk when images are viewed severely off-axis on polarized stereoscopic displays; and it used Microsoft's HoloLens augmented reality display to prototype its design and aid in its construction. This paper describes Destiny's design and implementation, in particular the technique for software-based crosstalk mitigation. Lastly, it describes how the HoloLens helped validate Destiny's design as well as train the construction team in its assembly.
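The abstract does not detail Destiny's algorithm; the sketch below shows the generic subtractive cancellation scheme that software crosstalk corrections of this family build on, with a hypothetical angle-dependent leakage calibration leak(theta) standing in for a measured model.

```python
# Generic subtractive crosstalk cancellation, sketched in NumPy. This is not
# Destiny's published method, only an illustration of the approach class.

import numpy as np

def leak(theta_deg):
    """Hypothetical calibration: leakage ratio grows off-axis (placeholder)."""
    return min(0.3, 0.002 * theta_deg)

def compensate(left, right, theta_deg):
    """Pre-subtract each eye's estimated leakage into the other eye.

    First-order leakage model: seen_L = drive_L + k * drive_R (and
    symmetrically for the right eye). Solving for the drive images that make
    seen == intended gives drive_L = (L - k*R) / (1 - k^2), clamped to the
    displayable range (clamping makes the cancellation approximate for very
    dark pixels).
    """
    k = leak(theta_deg)
    drive_l = np.clip((left - k * right) / (1.0 - k * k), 0.0, 1.0)
    drive_r = np.clip((right - k * left) / (1.0 - k * k), 0.0, 1.0)
    return drive_l, drive_r

# Example: a dark-left/bright-right pixel viewed 45 degrees off-axis; the
# left drive is darkened to cancel the right eye's leak.
l, r = compensate(np.array([0.1]), np.array([0.9]), 45.0)
print(l, r)
```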

Digital Library: EI
Published Online: January  2017
Pages 31 - 35,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

In a short period of time, virtual reality has taken over the media, promoting the idea that it is a new technology. In fact, it all started as early as the seventies and eighties, with portable devices (e.g., head-mounted displays) as well as complex and large devices (CAVEs). In this paper, we put these different systems in perspective and show the value of comparing them in an experimental approach.

Digital Library: EI
Published Online: January  2017
Pages 36 - 41,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

Virtual reality is rapidly becoming a pervasive component in the field of computing. From head-mounted displays to CAVE virtual environments, realism in user immersion has continued to increase dramatically. While user interaction has made significant gains in the past few years, visual quality within the virtual environment has not. Many CAVE frameworks are built on libraries that use rasterization methods, which limit the extent to which complex lighting models can be implemented. In this paper, we seek to remedy this issue by introducing the NVIDIA OptiX real-time ray tracing framework to the CAVE virtual environment. A rendering engine was first developed using NVIDIA OptiX and then ported to the CalVR virtual reality framework, which allows OptiX to run in CAVE environments as well as on modern consumer HMDs such as the Oculus Rift.
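To illustrate why ray tracing admits lighting models that rasterization pipelines can only approximate, here is a minimal Whitted-style tracer with one recursive mirror bounce. It is a self-contained NumPy sketch, not code from the OptiX/CalVR engine described above.

```python
# Minimal recursive ray tracer: the recursive bounce in trace() is the kind
# of exact per-pixel reflection that rasterization handles only with
# screen-space or environment-map tricks. Scene format and shading weights
# are arbitrary choices for this sketch.

import numpy as np

def reflect(d, n):
    return d - 2.0 * np.dot(d, n) * n

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t for a unit-length direction, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else None

def trace(origin, direction, spheres, depth=0):
    """Shade a ray: local diffuse term plus a recursive mirror bounce."""
    if depth > 2:
        return np.zeros(3)
    hits = [(hit_sphere(origin, direction, c, r), c, r, col)
            for c, r, col in spheres]
    hits = [h for h in hits if h[0] is not None]
    if not hits:
        return np.array([0.1, 0.1, 0.2])      # background color
    t, c, r, col = min(hits, key=lambda h: h[0])
    p = origin + t * direction                 # hit point
    n = (p - c) / r                            # surface normal
    local = col * max(0.0, np.dot(n, np.array([0.0, 1.0, 0.0])))
    bounce = trace(p, reflect(direction, n), spheres, depth + 1)
    return 0.7 * local + 0.3 * bounce

# Example: one red sphere three units in front of the camera.
spheres = [(np.array([0.0, 0.0, -3.0]), 1.0, np.array([1.0, 0.0, 0.0]))]
print(trace(np.zeros(3), np.array([0.0, 0.0, -1.0]), spheres))
```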

Digital Library: EI
Published Online: January  2017
Pages 42 - 47,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

Laser-based illumination sources for projection systems are becoming a powerful alternative to conventional lamp-based light sources. Laser-based lights have a multitude of advantages over conventional lights, including brightness, longevity, and scalability. The work described here examines these advantages from the point of view of large-screen projection systems as used for immersive virtual reality. For these applications laser-based light sources are very useful, especially when using Infitec-based stereo glasses.

Digital Library: EI
Published Online: January  2017
Pages 48 - 53,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

In today's state-of-the-art VR environments, displayed in CAVEs or HMDs, navigation techniques may frequently induce cybersickness or VR-Induced Symptoms and Effects (VRISE), drastically limiting the comfortable use of VR environments with unrestricted navigation. In two distinct experiments, we investigated acceleration VRISE thresholds for longitudinal and rotational motions and compared three different VR systems: two CAVEs and an HMD (Oculus Rift DK2). We found that VRISE occur more often and more strongly for rotational motions, and found no major difference between the CAVEs and the HMD. Based on the obtained thresholds, we developed a new "Head Lock" navigation method for rotational motions in a virtual environment in order to generate a "Pseudo AR" mode that keeps the visual references of the outside world fixed. A third experiment showed that this new metaphor significantly reduces VRISE occurrences and may be a useful basis for future natural navigation techniques.
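One direct way to act on measured acceleration thresholds is to cap the navigation controller's rotational acceleration below them; the sketch below does that for yaw. This is a generic comfort filter rather than the paper's Head Lock method, and the threshold value is a placeholder, not a value reported in the paper.

```python
# Generic VRISE comfort filter: the applied yaw rate may never change
# faster than a per-user rotational acceleration threshold.

MAX_ROT_ACCEL = 20.0   # deg/s^2, placeholder comfort threshold

class YawRateLimiter:
    """Limits how quickly the applied yaw rate may change."""
    def __init__(self, max_accel=MAX_ROT_ACCEL):
        self.max_accel = max_accel
        self.rate = 0.0   # deg/s currently applied

    def step(self, commanded_rate, dt):
        """Return the yaw increment (deg) to apply this frame."""
        max_delta = self.max_accel * dt
        delta = commanded_rate - self.rate
        self.rate += max(-max_delta, min(max_delta, delta))
        return self.rate * dt

# A user slams the stick to 90 deg/s; the applied rate ramps up gradually
# instead of jumping, keeping rotational acceleration under the threshold.
limiter = YawRateLimiter()
for frame in range(5):
    print(round(limiter.step(90.0, dt=1 / 60), 4))
```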

Digital Library: EI
Published Online: January  2017
Pages 54 - 59,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 3

Projection-based augmented reality systems overlay digital information directly on real objects while simultaneously using cameras to capture the scene. A common problem with such systems is that the cameras see the projected image in addition to the real objects, and this crosstalk degrades object detection and the registration of digital content. The authors propose a novel time-sharing technique that decouples real and digital content in real time without crosstalk. The technique is based on time-sequential operation of a MEMS-scanner-based mobile projector and a rolling shutter image sensor. A MEMS-mirror-based projector scans a light beam in a raster pattern, pixel by pixel, completing a full frame over one refresh period, while a rolling shutter image sensor sequentially collects scene light row by row. In the proposed technique, the image sensor is synchronized with the scanning MEMS mirror and precisely follows the display scanner with a half-period lag, making the displayed content completely invisible to the camera. An experimental setup consisting of a laser pico projector, an image sensor, and a delay and amplifier circuit was developed. The performance of the proposed technique is evaluated by measuring the crosstalk in the captured content and the sensor exposure limit. The results show 0% crosstalk in the captured content for sensor exposures up to 8 ms. A high capture frame rate (up to 45 fps) is achieved by cyclically triggering a 3.2 MP, 60 fps CMOS sensor and using a 60 Hz pico projector. © 2017 Society for Imaging Science and Technology.
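The timing behind the half-period lag can be made concrete with a little arithmetic, sketched below. The uniform row-readout model is an assumption; the 60 Hz refresh rate and the 8 ms exposure bound come from the abstract.

```python
# Timing arithmetic for the time-sequential projector/sensor scheme. With a
# 60 Hz raster-scanned projector, trailing the scanner by half a refresh
# period keeps each sensor row's exposure window inside the half of the
# frame the scanner is not illuminating.

PROJECTOR_HZ = 60.0
T_FRAME = 1.0 / PROJECTOR_HZ        # refresh period: ~16.67 ms
HALF_PERIOD_LAG = T_FRAME / 2.0     # sensor trails the scanner by ~8.33 ms

def row_exposure_start(row, n_rows, frame_start):
    """Exposure start time for one sensor row, assuming the rolling shutter
    sweeps the rows uniformly across one frame period, half a period behind
    the raster-scanning MEMS mirror."""
    return frame_start + HALF_PERIOD_LAG + (row / n_rows) * T_FRAME

# Any exposure shorter than the half-period lag keeps a row's window inside
# the non-illuminated half-frame, hence zero crosstalk in the capture.
print(f"frame period: {T_FRAME * 1e3:.2f} ms")
print(f"half-period lag: {HALF_PERIOD_LAG * 1e3:.2f} ms")
print(f"max crosstalk-free exposure: ~{HALF_PERIOD_LAG * 1e3:.2f} ms "
      f"(the paper reports 0% crosstalk up to 8 ms)")
```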

Digital Library: EI
Published Online: January  2017
