Augmented reality (AR) combines elements of the real world with additional virtual content, creating a blended viewing environment. Optical see-through AR (OST-AR) accomplishes this by using a transparent beam splitter to overlay virtual elements on a user’s view of the real world. However, the inherent see-through nature of OST-AR poses challenges for color appearance, especially for darker and less chromatic objects. When displaying human faces—a promising application of AR technology—these challenges disproportionately affect darker skin tones, making them appear more transparent than lighter skin tones. Still, some transparency in the rendered object may not be entirely negative; people’s evaluations of transparency when interacting with other humans in AR-mediated modalities are not yet fully understood. In this work, two psychophysical experiments were conducted to assess how people evaluate OST-AR transparency across several characteristics, including different skin tones, object types, lighting conditions, and display types. The results provide a scale of perceived transparency that allows comparison with conventional emissive displays, and they demonstrate how AR transparency impacts perceptions of object preference and fit within the environment. These results reveal several areas needing further attention, particularly regarding darker skin tones, brighter ambient lighting, and the display of human faces more generally. This work may be useful in guiding the development of OST-AR technology, and it emphasizes the importance of AR design goals, the perception of human faces, and optimizing visual appearance in extended reality systems.
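The transparency problem described above follows from the additive nature of optical combination: an OST-AR display can only add light to the transmitted scene, never subtract it, so low-luminance virtual content is dominated by the background. A minimal sketch of this relationship (the simple additive model, the function names, and all parameter values are illustrative assumptions, not the experimental model used in the paper):

```python
import numpy as np

def perceived_luminance(virtual, background, transmittance=0.7):
    """Additive optical see-through combination: the display adds light
    on top of the transmitted real-world background."""
    return transmittance * background + virtual

def apparent_transparency(virtual, background, transmittance=0.7):
    """Fraction of the perceived signal contributed by the background.
    Darker virtual content -> background dominates -> looks transparent."""
    total = perceived_luminance(virtual, background, transmittance)
    return np.where(total > 0, transmittance * background / total, 1.0)

# A dark virtual skin tone (low luminance) over a bright room:
print(apparent_transparency(virtual=10.0, background=100.0))  # ~0.88
# A lighter skin tone over the same room:
print(apparent_transparency(virtual=80.0, background=100.0))  # ~0.47
```

Under this toy model the darker stimulus is mostly background, consistent with the observation that darker skin tones appear more transparent.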
In this study, we used a custom-built optical see-through augmented reality (OST-AR) system to conduct a psychophysical experiment determining the preferred gamma and black level for high perceived naturalness in OST-AR. The experiment used six different fruit stimuli and eleven different backgrounds. A two-way ANOVA showed that only the effect of the fruit stimulus on gamma preference for high naturalness was statistically significant. Surprisingly, all ANOVA analyses indicate that the background color contributes to neither the gamma nor the black level observers prefer for naturalness. We found that gamma preference correlates strongly with the average lightness of the virtual stimulus, while there is no clear correlation between gamma preference and either chroma or hue. This finding suggests that the background can be ignored in future imaging pipelines aiming for high perceived naturalness in augmented reality.
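For reference, the two parameters under study act on the rendered image roughly as sketched below, assuming a standard power-law tone curve with a lifted black point; this is an illustrative model, not the authors' exact rendering pipeline:

```python
import numpy as np

def render(pixels, gamma, black_level):
    """Apply a display tone curve: output = black + (1 - black) * input^gamma.
    pixels: normalized [0, 1] input; gamma and black_level correspond to the
    two parameters varied in the experiment (exact pipeline is an assumption)."""
    pixels = np.clip(np.asarray(pixels, dtype=float), 0.0, 1.0)
    return black_level + (1.0 - black_level) * pixels ** gamma

# Two candidate settings an observer might compare for naturalness:
print(render([0.1, 0.5, 0.9], gamma=2.2, black_level=0.05))
print(render([0.1, 0.5, 0.9], gamma=1.8, black_level=0.00))
```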
During emergencies, communication in multi-level built environments becomes challenging because architectural complexity can hinder the visual and mental representation of 3D space. Our HoloLens application provides a 3D visual representation of a campus building, allowing people to see where the building's exits are and generating alerts for anomalies requiring emergency response, such as an active shooter, fire, or smoke. It also provides routes to the various exits: the shortest path to each exit as well as directions to a safe zone from the user's current position. The augmented reality (AR) application was developed in Unity 3D for Microsoft HoloLens and is also deployed on tablets and smartphones. It uses a fast and robust marker detection technique built on the Vuforia AR library. Our aim is to improve the evacuation process by ensuring that all building patrons know every building exit and how to reach it, thereby reducing evacuation time and the injuries and fatalities that occur during indoor crises such as building fires and active shooter events. We have incorporated existing permanent features in the building as markers that trigger the AR application to display the floor plan and the person's location in the building. This work also describes the system architecture as well as the design and implementation of this AR application, which leverages HoloLens for building evacuation purposes. We believe that AR technologies like HoloLens could be adopted in building evacuation strategies during emergencies, as they offer a richer experience when navigating large-scale environments.
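The exit-routing component can be implemented with a standard shortest-path search over a graph of building waypoints. The sketch below uses Dijkstra's algorithm on a hypothetical node graph; the node names and edge weights are illustrative, and the abstract does not specify the paper's actual pathfinding implementation:

```python
import heapq

def shortest_path(graph, start, exits):
    """Dijkstra over a graph of building waypoints; returns the cheapest
    route from `start` to whichever node in `exits` is nearest."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in exits:                      # first exit popped is nearest
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):   # skip stale heap entries
            continue
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Hypothetical waypoint graph (edge weights in meters):
graph = {"lobby": [("hall_A", 12), ("stairs_1", 8)],
         "hall_A": [("exit_east", 20)],
         "stairs_1": [("exit_south", 15)]}
print(shortest_path(graph, "lobby", {"exit_east", "exit_south"}))
# -> (23.0, ['lobby', 'stairs_1', 'exit_south'])
```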
This publication reports on a research project in which we set out to explore the advantages and disadvantages of augmented reality (AR) technology for visual data analytics. We developed a prototype AR data analytics application that provides users with an interactive 3D interface, hand-gesture-based controls, and multi-user support for a shared experience, enabling multiple people to collaboratively visualize, analyze, and manipulate high-dimensional data in 3D space. Our software prototype, called DataCube, runs on the Microsoft HoloLens—one of the first true stand-alone AR headsets—through which users see computer-generated images overlaid onto real-world objects in their physical environment. Using hand gestures, users can select menu options, control the 3D data visualization with various filtering and visualization functions, and freely arrange the menus and virtual displays in their environment. The shared multi-user experience allows all participating users to see and interact with the virtual environment; changes one user makes become visible to the other users instantly. Because users can still observe the physical world while collaborating, they can also pick up non-verbal cues such as the gestures and facial expressions of other users. The main objective of this research project was to find out whether AR interfaces and collaborative analysis can provide an effective solution for data analysis tasks, and our experience with the prototype system confirms this.
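The instant propagation of one user's changes to all participants suggests a shared-state hub of the kind sketched below; the class and method names are hypothetical, since the abstract does not describe DataCube's actual networking layer:

```python
class SharedScene:
    """Minimal shared-state hub: when one client moves a virtual display,
    the change is rebroadcast so every participant sees the same layout.
    (Illustrative only; not DataCube's implementation.)"""

    def __init__(self):
        self.objects = {}   # object id -> transform
        self.clients = []   # connected users

    def join(self, client):
        self.clients.append(client)
        for obj_id, transform in self.objects.items():
            client.apply(obj_id, transform)   # late joiners get current state

    def update(self, sender, obj_id, transform):
        self.objects[obj_id] = transform      # record authoritative state
        for client in self.clients:
            if client is not sender:          # echo to everyone else
                client.apply(obj_id, transform)
```

Here `client.apply` stands in for whatever message the headset runtime uses to move a virtual object; keeping one authoritative state on the hub is what lets late joiners see the same arrangement as everyone else.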
With the development of Apple’s ARKit and Google’s ARCore, mobile augmented reality (AR) applications have become much more popular. For Android devices, ARCore provides basic motion tracking and environmental understanding. However, with current software frameworks it can be difficult to create an AR application from the ground up. Our solution is CalAR, a lightweight, open-source software environment for developing AR applications on Android devices that gives the programmer full control over the phone’s resources. With CalAR, the programmer can create marker-less AR applications that run at 60 frames per second on Android smartphones. These applications can include more complex environment understanding, physical simulation, user interaction with virtual objects, and interaction between virtual objects and objects in the physical environment. Because CalAR is based on CalVR, our multi-platform virtual reality software engine, CalVR applications can be ported to an AR environment on Android phones with minimal effort. We demonstrate this with the example of a spatial visualization application.
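At its core, a marker-less AR renderer of this kind consumes a tracked camera pose each frame and uses it to anchor virtual objects in the physical world. A minimal sketch of that per-frame computation (illustrative only; this is not CalAR's actual API):

```python
import numpy as np

def model_view(camera_pose_world, model_pose_world):
    """Per-frame core of a marker-less AR renderer: given the 4x4 camera
    pose from motion tracking (e.g., ARCore) and a virtual object's world
    pose, compute the model-view matrix that draws the object anchored
    in the physical environment."""
    view = np.linalg.inv(camera_pose_world)   # world -> camera
    return view @ model_pose_world            # object -> camera

camera = np.eye(4); camera[:3, 3] = [0.0, 1.6, 0.0]   # phone at eye height
anchor = np.eye(4); anchor[:3, 3] = [0.0, 1.0, -2.0]  # object 2 m ahead
print(model_view(camera, anchor))
```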
This paper presents a study on Quality of Experience (QoE) evaluation of 3D objects in mixed reality (MR) scenarios. In particular, a subjective test was performed with Microsoft HoloLens, considering different degradations affecting the geometry and texture of the content. Apart from analyzing the perceptual effects of these artifacts, and given the need for recommendations on subjective assessment of immersive media, this study also aimed at: 1) checking the appropriateness of a single-stimulus methodology (ACR-HR) for these scenarios, where observers have fewer references than with traditional media; 2) analyzing the possible impact of environment lighting conditions on the quality evaluation of 3D objects in MR; and 3) benchmarking state-of-the-art objective metrics in this context. The subjective results provide insights toward recommendations for subjective testing in MR/AR, showing that ACR-HR can be used in similar QoE tests and reflecting the influence of lighting conditions, content characteristics, and type of degradation. The objective results show an acceptable performance of perceptual metrics for geometry quantization artifacts and point out the need for further research on metrics covering both geometry and texture compression degradations.
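In ACR-HR, the reference version of each content is shown among the test stimuli without being identified as such, and its ratings are removed from each observer's scores afterwards. A short sketch of that computation, following the hidden-reference-removal formula of ITU-T P.910 (the score values below are made up):

```python
import numpy as np

def acr_hr_dmos(test_scores, hidden_ref_scores):
    """Hidden-reference removal used in ACR-HR (ITU-T P.910): per observer,
    DV = V(test) - V(hidden reference) + 5, then average across observers."""
    test = np.asarray(test_scores, dtype=float)
    ref = np.asarray(hidden_ref_scores, dtype=float)
    dv = test - ref + 5.0   # 5 = "no degradation relative to the reference"
    return dv.mean()

# Scores (1-5) from five observers for a degraded 3D object and for the
# hidden reference version of the same content:
print(acr_hr_dmos([3, 4, 3, 2, 4], [5, 5, 4, 4, 5]))  # -> 3.6
```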
Developing an augmented reality (AR) system involves a multitude of interconnected algorithms, such as image fusion, camera synchronization and calibration, and brightness control, each with diverse parameters. While this abundance of features makes such systems applicable to many different tasks, it burdens developers as they navigate the possible combinations and pick the most suitable configuration for their application. Additionally, the temporally inconsistent nature of the real world hinders the development of reproducible and reliable testing methods for AR systems. To help address these issues, we develop and test a virtual reality (VR) environment [1] that allows the simulation of variable AR configurations for image fusion. In this work, we improve our system with a more realistic AR glass model that adheres to physical light and glass properties. Our implementation combines the incoming real-world background light and the AR projector light at the level of the AR glass.
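The glass-level combination can be expressed as a weighted sum of transmitted world light and reflected projector light. A minimal sketch, assuming a simple linear model with placeholder transmittance and reflectance coefficients (the abstract does not detail the actual physically based glass model):

```python
import numpy as np

def ar_glass_combine(world_radiance, projector_radiance,
                     transmittance=0.8, reflectance=0.2):
    """Combine real-world light transmitted through the AR glass with
    projector light reflected off it toward the eye:
        L_eye = t * L_world + r * L_projector
    (Coefficient values are placeholders, not measured glass properties.)"""
    return (transmittance * np.asarray(world_radiance, dtype=float)
            + reflectance * np.asarray(projector_radiance, dtype=float))

# A bright scene washes out a dim virtual overlay:
print(ar_glass_combine(world_radiance=300.0, projector_radiance=50.0))  # 250.0
```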