Keywords: artificial intelligence; agent-based simulation; augmented reality (AR); ambient lighting; augmented vision; accelerometer sensor; birds; cross-modality; crowd behavior; diffuse lighting; decision making; depth map; emergent augmented reality platforms; eye tracking; emergency evacuation; education system; exhibition; furniture-of-interest; game engine; Google Daydream; gaming; game design; haptics; head-mounted display; human behavior; human computer interface; interactive; indoor decoration; low frequency lighting; motion processing; mobile VR controller; Microsoft HoloLens; modeling; medical imaging; multimodal gesture controller; mobile VR interface; motion learning; navigation; ornithology; optics; product assembly; photo-realistic; physiological responses; perception; real-time; spherical harmonics; simulation; style colorization; stereogram; segmentation; stereoscopic visualization; tactical training applications; tangible; view; virtual and augmented reality systems; VR design; volume rendering; virtual reality UI and UX; virtual and augmented reality in education, learning, gaming, art; virtual reality; visualization; virtual crowds; work instructions
 
Pages A02-1 - A02-7,  © Society for Imaging Science and Technology 2019
Digital Library: EI
Published Online: January  2019
Pages 175-1 - 175-9,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

Augmented Reality (AR), one of the most fascinating human-machine interaction technologies, can seamlessly create the illusion of virtual elements blended into a real-world scene. AR has been utilized in a variety of real-life applications, including immersive collaborative gaming, fashion appreciation, interior design, and assistive devices for individuals with vision impairments. This paper contributes a real-time AR application, ARFurniture, which allows users to envision furniture-of-interest in different colors and different styles, all from their smart devices. The core software architecture consists of deep-learning-based semantic segmentation and fast color transformation. It allows the user to prompt the system to colorize the style of the furniture-of-interest within the scene, and it has been successfully deployed on mobile devices. In addition, a head-mounted, user-centric augmented reality concept for indoor decoration style colorization, using eye gaze as a pointing indicator, is discussed. Related algorithms, system design, and simulation results for ARFurniture are presented. Furthermore, a no-reference image quality measure, the Naturalness Image Quality Evaluator (NIQE), was utilized to evaluate the immersiveness and naturalness of ARFurniture. The results demonstrate that ARFurniture has game-changing value for enhancing the user experience in indoor decoration.
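The paper does not include code, but the described pipeline (semantic segmentation of the furniture-of-interest followed by a color transform on the segmented region) can be roughly illustrated as follows. This is a minimal sketch, not the authors' implementation: it assumes torchvision's pretrained DeepLabV3 model as a stand-in for their segmentation network and a simple HSV hue shift in place of their style colorization.

```python
# Minimal sketch of a segment-then-recolor pipeline (not the authors' code).
# Assumptions: torchvision's pretrained DeepLabV3 stands in for the paper's
# segmentation network, and an HSV hue shift stands in for style colorization.
import cv2
import numpy as np
import torch
from torchvision import models, transforms

CHAIR_CLASS = 9  # 'chair' in the Pascal VOC label set this model predicts

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def recolor_furniture(bgr_image: np.ndarray, hue_shift: int = 30) -> np.ndarray:
    """Recolor pixels labeled as the furniture-of-interest; leave the rest untouched."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))["out"][0]
    mask = logits.argmax(0).cpu().numpy() == CHAIR_CLASS

    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = ((hsv[..., 0].astype(int) + hue_shift) % 180).astype(np.uint8)
    recolored = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    result = bgr_image.copy()
    result[mask] = recolored[mask]  # apply the new color only inside the segment
    return result

# Usage: recolored = recolor_furniture(cv2.imread("living_room.jpg"), hue_shift=60)
```

Shifting only the hue channel keeps the original shading of the furniture, which is one simple way to preserve naturalness in the recolored result.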

Digital Library: EI
Published Online: January  2019
Pages 176-1 - 176-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

Virtual crowds for non-combatant environments play an important role in modern military operations, where civilian crowds often create complications for the combatant forces involved. To address this problem, we are developing a crowd simulation capable of generating crowds of non-combatant civilians that exhibit a variety of individual and group behaviors at different levels of fidelity. Commercial game technology is used to create an experimental setup that models an urban megacity environment and the physical behaviors of the human characters that make up the crowd. The main objective of this work is to verify the feasibility of designing a collaborative virtual environment (CVE) and its usability for training security agents to respond to emergency situations such as active-shooter events, bomb blasts, fire, and smoke. We present a hybrid (human-artificial) platform in which disaster-response experiments can be performed in the CVE with both AI agents and user-controlled agents. AI agents are computer-controlled agents with behaviors such as hostile, non-hostile, leader-following, goal-following, selfish, and fuzzy agents. User-controlled agents are driven by human participants who take on specific situational roles such as police officer, medic, firefighter, and SWAT official. The novelty of our work lies in modeling behaviors for the AI (computer-controlled) agents so that they can interact with user-controlled agents in an immersive training environment for emergency response and decision making. The hybrid platform aids in creating an experimental setup to study human behavior in a megacity for emergency response, decision-making strategies, and what-if scenarios.
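As an illustration only, the behavior taxonomy listed above could be organized as a per-tick behavior-selection routine. The sketch below is hypothetical Python, not the commercial game-engine implementation used in the paper; every threshold, field, and rule in it is assumed.

```python
# Illustrative sketch only: a simplified behavior-selection structure for the
# AI (computer-controlled) agent types listed in the abstract. The actual
# system is built in a commercial game engine; all names here are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
import random


class Behavior(Enum):
    HOSTILE = auto()
    NON_HOSTILE = auto()
    LEADER_FOLLOWING = auto()
    GOAL_FOLLOWING = auto()
    SELFISH = auto()
    FUZZY = auto()


@dataclass
class Agent:
    agent_id: int
    behavior: Behavior
    position: tuple  # (x, y) in the simulated megacity

    def step(self, threat_level: float, leader_pos: tuple, exit_pos: tuple) -> tuple:
        """Pick a movement target for this simulation tick based on the agent's behavior."""
        if self.behavior is Behavior.LEADER_FOLLOWING:
            return leader_pos                      # stay with the group leader
        if self.behavior is Behavior.GOAL_FOLLOWING:
            return exit_pos                        # head for the assigned goal/exit
        if self.behavior is Behavior.SELFISH:
            return exit_pos if threat_level > 0.3 else self.position
        if self.behavior is Behavior.FUZZY:
            w = random.random()                    # blend leader-following and exit-seeking
            return tuple(w * l + (1 - w) * e for l, e in zip(leader_pos, exit_pos))
        if self.behavior is Behavior.HOSTILE:
            return self.position                   # hold position / engage (stub)
        return self.position                       # NON_HOSTILE: idle until directed
```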

Digital Library: EI
Published Online: January  2019
Pages 177-1 - 177-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

Outreach and citizen science are important aspects of research and development. For example, the collection of bird-related data is driven forward by non-professional ornithologists as well as by researchers. At the exhibition “From Lake Constance to Africa, a long distance travel with ICARUS”, which took place during the summer of 2018 on the island of Mainau in Germany, a Virtual Reality (VR) installation was shown that utilizes movement data of a flock of storks. Two VR binoculars were installed, through which visitors could observe storks in flight, following their route from Lake Constance towards Africa based on GPS data from real storks. In this way, viewers experienced a 360° view from the back of the stork “Bubbel” as it flew with 26 flock mates. The VR binoculars were created as a 3D print equipped with a smartphone, a VR headset, and other special features enabling long-term use. The overall project consists of three components: 1) the production software Bird Watcher, 2) the exhibition-compatible exploration software Bird 360°, and 3) the hardware setup, the Sword of Stork Bubbel.
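The installation replays recorded GPS tracks as a virtual flight. As a rough illustration (not the project's Bird Watcher or Bird 360° software), the sketch below converts a latitude/longitude/altitude track into local 3D coordinates that a VR camera rig could follow, using a simple equirectangular approximation around the first fix; the sample coordinates are invented.

```python
# Illustrative sketch (not the project's software): convert a stork's GPS track
# (lat, lon, altitude) into local 3D coordinates for a VR camera rig, using a
# simple equirectangular approximation around the first fix.
import math

EARTH_RADIUS_M = 6_371_000.0

def gps_track_to_local_xyz(track):
    """track: list of (lat_deg, lon_deg, alt_m) tuples -> list of (x, y, z) in metres."""
    lat0, lon0, alt0 = track[0]
    lat0_rad = math.radians(lat0)
    points = []
    for lat, lon, alt in track:
        x = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(lat0_rad)  # east
        y = math.radians(lat - lat0) * EARTH_RADIUS_M                        # north
        z = alt - alt0                                                       # up
        points.append((x, y, z))
    return points

# Example fixes near Lake Constance (values invented for illustration).
path = gps_track_to_local_xyz([
    (47.705, 9.195, 420.0),
    (47.704, 9.198, 455.0),
    (47.702, 9.202, 490.0),
])
print(path)
```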

Digital Library: EI
Published Online: January  2019
Pages 178-1 - 178-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

360° images and regular 2D images look appealing in Virtual Reality, yet they fail to represent depth or to use depth to give the user a spatial experience from two-dimensional images. We propose an approach for creating a stereogram from a computer-generated depth map using an approximation algorithm, and then use these stereo pairs to provide a complete VR experience with forward and backward navigation based on mobile sensors. First, the image is segmented into two images, from which we generate a disparity map and then the depth image. From the depth image, the stereo pair, i.e. the left and right images for the eyes, is created. The image acquired from this process is then handled by the Cardboard SDK, which provides VR support on Android devices using the Google Cardboard headset. With the VR image in the stereoscopic device, we use the device's accelerometer to determine its movement while head-mounted. Unlike other VR navigation systems (HTC Vive, Oculus) that use external sensors, our approach uses the built-in sensors for motion processing. Using the accelerometer readings of the movement, the user is able to move around virtually in the constructed image. The result of this experiment is that the image displayed in VR changes visually according to the viewer's physical movement.
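A minimal sketch of the depth-to-stereo step is shown below. It is not the paper's approximation algorithm, only a common way to realize the idea: each pixel is shifted horizontally in proportion to its depth-derived disparity to synthesize left and right views (depth-image-based rendering).

```python
# Minimal sketch of generating a left/right stereo pair from an image plus a
# depth map by horizontal pixel shifting. Illustration of the general idea,
# not the paper's approximation algorithm; hole filling is omitted.
import numpy as np

def stereo_pair(image: np.ndarray, depth: np.ndarray, max_disparity: int = 16):
    """image: HxWx3 uint8, depth: HxW in [0, 1] (1 = near). Returns (left, right) views."""
    h, w = depth.shape
    disparity = (depth * max_disparity).astype(np.int32)
    cols = np.arange(w)

    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for row in range(h):
        shifted_l = np.clip(cols + disparity[row] // 2, 0, w - 1)  # near pixels shift right
        shifted_r = np.clip(cols - disparity[row] // 2, 0, w - 1)  # and left in the right view
        left[row, shifted_l] = image[row, cols]
        right[row, shifted_r] = image[row, cols]
    return left, right
```

The two views can then be placed side by side for a Cardboard-style viewer; the head-motion part of the pipeline is driven by the device's accelerometer as described above.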

Digital Library: EI
Published Online: January  2019
Pages 179-1 - 179-9,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

Augmented Reality (AR) is an emerging technology that could greatly increase training efficiency and effectiveness for assembly processes. In fact, studies show that AR may reduce time and errors by as much as 50%. While many devices are available to display AR content, Head-Mounted Displays (HMDs) are the most interesting for manufacturing and assembly as they free up the user's hands. Due to the emerging nature of this technology, there are many limitations, including input, field of view, tracking, and occlusion. The work presented in this paper explores the use of the Microsoft HoloLens to deliver AR work instructions for product assembly. An AR assembly application was developed to guide a trainee through the assembly of a mock aircraft wing. To ensure that accurate instructions were displayed, novel techniques were developed to mitigate the HoloLens' tracking and display limitations. Additionally, a data visualization application was developed to validate the training session and explore trends in data collected from the HoloLens together with wearable physiological sensor data.
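As an illustration of the validation step, one plausible way to align HoloLens-logged assembly events with wearable physiological samples is a nearest-timestamp join. The sketch below uses pandas; the data layout and column names are hypothetical rather than taken from the paper.

```python
# Illustration only (not the authors' visualization tool): align HoloLens-logged
# assembly-step events with wearable heart-rate samples on nearest timestamp so
# physiological response can be inspected per work instruction. All data below
# is invented for the example.
import pandas as pd

steps = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-01-15 10:00:00", "2019-01-15 10:00:45",
                                 "2019-01-15 10:01:30"]),
    "step_id": [1, 1, 2],
    "event": ["step_started", "step_completed", "step_started"],
})
hr = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-01-15 10:00:01", "2019-01-15 10:00:44",
                                 "2019-01-15 10:01:29"]),
    "heart_rate_bpm": [72, 81, 90],
})

# For each assembly event, take the nearest heart-rate sample within 2 seconds.
merged = pd.merge_asof(steps.sort_values("timestamp"), hr.sort_values("timestamp"),
                       on="timestamp", direction="nearest",
                       tolerance=pd.Timedelta("2s"))

# Per-step summary: time span of the step and mean heart rate over its events.
summary = merged.groupby("step_id").agg(start=("timestamp", "min"),
                                        end=("timestamp", "max"),
                                        mean_hr=("heart_rate_bpm", "mean"))
print(summary)
```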

Digital Library: EI
Published Online: January  2019
Pages 180-1 - 180-5,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

In this paper, a “learning by observation” method, the approach most commonly employed for motion learning, is examined. In observation-based learning, learners generally observe, recognize, and reproduce model-performed reference motions from only one direction. Subjects can observe the model from various directions, but the orientation of the model's trunk does not match that of the subject when viewing from any direction other than from behind the model. This prevents the subjects from learning the model's reference motions easily, because they need to rotate the model mentally (so-called “mental rotation”). On the other hand, when viewing from behind the avatar in order to avoid the mental-rotation cost, subjects occasionally encounter occlusion problems. Therefore, we studied the perceptual characteristics of various observation views through a psychophysical experiment. Two kinds of physical values were employed for evaluating the subjects' responses: the delay of the reproduced motion onset, and the error rate of the reproduced motion direction. The results suggest that perception suffers from mental rotation in two ways: the amount of mental rotation increases the onset delay, and the presence of mental rotation increases the directional errors.
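A small sketch of how the two reported measures could be computed per observation view is given below; the trial-record layout and values are invented, and only the two measures themselves (onset delay and direction error rate) come from the abstract.

```python
# Sketch of computing the two reported measures per observation view: onset
# delay of the reproduced motion and error rate of its direction. The trial
# data below is invented for illustration.
from collections import defaultdict
from statistics import mean

trials = [
    # (view_angle_deg, stimulus_onset_s, response_onset_s, correct_direction)
    (0,   1.00, 1.32, True),
    (90,  1.00, 1.55, True),
    (90,  1.00, 1.61, False),
    (180, 1.00, 1.90, False),
]

by_view = defaultdict(list)
for view, t_stim, t_resp, correct in trials:
    by_view[view].append((t_resp - t_stim, correct))

for view in sorted(by_view):
    delays = [d for d, _ in by_view[view]]
    errors = [not c for _, c in by_view[view]]
    print(f"view {view:3d} deg: mean onset delay = {mean(delays):.2f} s, "
          f"direction error rate = {mean(errors):.0%}")
```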

Digital Library: EI
Published Online: January  2019
Pages 181-1 - 181-10,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

The increased replication of human behavior with virtual agents and proxies in multi-user or collaborative virtual reality environments (CVEs) has spurred a surge of scholarly research and training applications. The user experience of training for emergency response in emergency and catastrophic situations may be strongly influenced by the use of computer bots, avatars, and virtual agents. Our proposed collaborative Virtual Reality nightclub environment accordingly provides the flexibility to run manifold scenarios and evacuation drills for emergency and disaster preparedness. Modeling such an environment is essential because it helps emulate the emergencies we may experience in daily life and provides a learning platform for preparing for extreme events. The results of a user study measuring presence in the virtual environment with the Presence Questionnaire (PQ) are discussed in detail; a consistent positive relation between presence and task performance in VEs was found. The results further suggest that most users feel this application could be a good tool for education and training purposes.
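As a sketch of how the reported presence/performance relation could be checked, the snippet below computes a Pearson correlation between per-participant PQ scores and a task-performance measure; the numbers are made up, and the abstract does not state which statistic the authors used.

```python
# Minimal sketch of checking a presence/performance relation with a Pearson
# correlation. The scores are invented for illustration; only the use of the
# Presence Questionnaire (PQ) comes from the abstract.
from scipy.stats import pearsonr

pq_presence_scores = [98, 112, 105, 120, 90, 131, 118, 102]            # per-participant PQ totals
task_performance   = [0.62, 0.74, 0.70, 0.81, 0.55, 0.88, 0.79, 0.69]  # e.g. fraction of drill completed

r, p_value = pearsonr(pq_presence_scores, task_performance)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```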

Digital Library: EI
Published Online: January  2019
Pages 182-1 - 182-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

In this paper, we explore the use of tangible user interfaces in the context of prototyping digital game design. Traditional approaches to game design use a combination of digital and non-digital prototyping techniques to identify core game mechanics and object placements when designing levels, followed by a laborious process in which the non-digital prototypes are translated into digital counterparts in a game engine. The presented system aims to reduce the need for non-digital prototypes by providing an easy-to-use interface that does not require technical expertise and results in a playable digital prototype in a modern game engine. The system uses Augmented Reality markers as the tangible interface to drive specific functionality in a modern game engine. A preliminary user study is presented to understand the current strengths and weaknesses of this approach. Our hypothesis is that the system improves the game design experience for users with respect to usability, performance, creativity support, and enjoyment, as we firmly believe that the process of designing games should itself be enjoyable.
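As an illustration of the marker-to-engine hand-off, the sketch below detects ArUco markers with OpenCV and exports their IDs and positions as a simple level description. The paper does not specify the marker library, the ID-to-object mapping, or the export format, so all of those are assumptions here.

```python
# Illustrative sketch only: detect tangible AR markers with OpenCV's ArUco
# module and map marker IDs to level-object placements. The ID-to-object table,
# file names, and export format are hypothetical.
# (OpenCV >= 4.7 ArUco API; older versions expose cv2.aruco.detectMarkers instead.)
import cv2
import json

MARKER_TO_OBJECT = {0: "player_spawn", 1: "enemy_spawn", 2: "health_pickup", 3: "wall_block"}

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("tabletop_layout.jpg")          # photo of the physical marker layout
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = detector.detectMarkers(gray)

placements = []
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        cx, cy = quad[0].mean(axis=0)              # marker centre in image pixels
        placements.append({
            "object": MARKER_TO_OBJECT.get(int(marker_id), "unknown"),
            "x": float(cx),
            "y": float(cy),
        })

# Hand the layout to the game engine as a simple JSON level description.
with open("level_layout.json", "w") as f:
    json.dump(placements, f, indent=2)
```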

Digital Library: EI
Published Online: January  2019
Pages 183-1 - 183-11,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 2

Technology in education can encourage students to learn with enthusiasm and can motivate them, leading to an effective learning process. Researchers have identified the problem that technology creates a passive learning process if it does not promote critical thinking, meaning-making, or metacognition. Since its introduction, augmented reality (AR) has been shown to have good potential for making the learning process more active, effective, and meaningful, because it enables users to interact with virtual and real-time applications and brings natural experiences to the user. In addition, the merging of AR with education has recently attracted research attention because of its ability to let students become immersed in realistic experiences. This paper is therefore based on a review of the research that has been conducted on AR. The review describes the application of AR in primary education using individual “topic cards” for different topics of the primary school syllabus. The review of the results shows that, overall, AR technologies yield positive results and have potential that can be adopted in education. The review also indicates advantages and limitations of AR that could be addressed in future research.

Digital Library: EI
Published Online: January  2019
