Recent research shows that children with chronic health conditions (CCH) often confront challenges that affect their daily activities and quality of life; as a result, many experience high levels of stress. Research suggests that programs designed to reduce stress may help CCH and improve their quality of life. This article supports these findings by presenting a new concept and method (still in its pilot stage) that may reduce stress through a well-defined interactive process for CCH, designed to empower them and improve their quality of life. To put the concept in context, the article first discusses a community outreach program entitled 'OMSI' - the Online Museum for Self Improvement. OMSI is a new concept for an interactive online museum that comprises wellbeing techniques, works of art chosen from digitized museum collections, and activities that adopt an interactive exhibition approach. The article then presents STORIA, the case study that inspired OMSI, followed by a discussion of the pilot exhibition, 'Heroes of Today'. This exhibition focuses on empowering CCH and reducing their stress levels through a meaningful-engagement activity entitled 'Create Your STORIA'. Following this, the article discusses the relationship between art/images and wellbeing/empowerment, including literature on a recent case study entitled 'Art at the Bedside', conducted at Strong Memorial Hospital in Rochester, NY. That study used images of museum artworks on tablets to create conversations with patients and their families; the results showed an improvement in the quality of life of hospitalized patients.
Thereafter, the article discusses the relationship between art/images and their effect on the brain, including literature on the interplay between imagination, images, and mirror neurons. The article concludes with a presentation of the proposed next steps for the OMSI pilot, which aims to empower CCH.
Visualizations are widely used when working with family and genealogical structures, both to navigate through the generations and to provide overview information about the family as a whole. Our research investigates the concept of "marriage" in the complex and polygamous familial structures of Mormon society in mid-1800s Nauvoo, IL, including several definitions of marital and relational ties. We have found current visualizations to be insufficient for fully expressing this complexity. We present visualizations based on chord and flow diagrams to capture the locality and cohesiveness of larger and more complex family units and to encapsulate familial dynamics into the nodes of their overall lineage. Each family unit is portrayed as a chord diagram adapted to display intra-familial relationships with a left-to-right generational flow and chords indicating relationships between participants. Zooming out, we depict the overall lineage as a modified flow diagram with the family units as nodes, connected with others based on the participants; each hyper-edge links an individual's family of birth to her adult marriage. Our implementation has yielded evocative and provocative visualizations (preserving the locality of family unit members, an overall temporal order in their display, and the distinguishability of relational types) by which scholars can investigate these complex social structures.
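The per-family layout idea can be sketched in a few lines; this is a hedged illustration under my own assumptions (the data representation, angle convention, and function name are hypothetical, not the authors' implementation): members are ordered generation by generation around the circle so the display reads left to right, and each relationship becomes a chord between two member angles.

```python
import math

def layout_family_chord(members_by_generation, relationships):
    """Assign each family member an angle on a chord diagram.
    `members_by_generation` maps generation index -> list of names;
    `relationships` is a list of (name_a, name_b, kind) tuples.
    Members are ordered generation by generation so earlier
    generations occupy angles to the left of later ones."""
    ordered = []
    for gen in sorted(members_by_generation):
        ordered.extend(members_by_generation[gen])
    step = 2 * math.pi / len(ordered)
    # Start at the far left (pi radians) and sweep clockwise.
    angles = {name: math.pi - i * step for i, name in enumerate(ordered)}
    # Each relationship becomes a chord between two angles.
    chords = [(angles[a], angles[b], kind) for a, b, kind in relationships]
    return angles, chords
```

A renderer would then draw each chord as a curve between the two angles, styled by the relationship kind.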
Automatic generation of data visualizations allows data visualizations to be deployed quickly. In visual analytics, combining automatic and human analysis substantially increases the effort necessary to achieve similar effects. Whereas automatic visualization only needs to map the data, in visual analytics the whole data preparation and processing pipeline has to be considered. The user is interested in representations that reflect certain interpretations of the data, for example the idea that different groups form different clusters in the data. In this paper, we prove that an information-driven automatic design of visual analytics pipelines is feasible. To this end, we prove that the ability of an analysis system to derive and visualize data supporting the inquired information is decidable, at least for real-world applications. Having overcome this major obstacle, we outline a general algorithm scheme that can be implemented on a wide range of data and information models.
Machine learning (ML) algorithms and ML-based software systems implicitly or explicitly involve complex flows of information between entities such as the training data, feature space, validation set, and results. Understanding the statistical distribution of this information and how it flows from one entity to another influences the operation and correctness of such systems, especially in large-scale applications that perform classification or prediction in real time. In this paper, we propose a visual approach to understanding and analyzing the flow of information during the model training and serving phases. We build the visualizations using the Sankey diagram, a technique conventionally used to understand data flow among sets, to address various use cases in a machine learning system. We demonstrate how the proposed technique, tweaked and twisted to suit a classification problem, can play a critical role in better understanding the training data, the features, and the classifier performance. We also discuss how this technique enables diagnostic analysis of model predictions and comparative analysis of predictions from multiple classifiers. The proposed concept is illustrated with the example of categorizing millions of products in the e-commerce domain, a multi-class hierarchical classification problem.
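The data preparation behind such a diagnostic Sankey view can be sketched generically (this is my own minimal illustration, not the paper's implementation; the function name and node labels are assumptions): (true label, predicted label) pairs are aggregated into the source/target/value lists that Sankey renderers, such as plotly's `graph_objects.Sankey`, consume.

```python
from collections import Counter

def sankey_links(y_true, y_pred):
    """Aggregate (true label, predicted label) pairs into Sankey data.
    Left-hand nodes are true classes, right-hand nodes are predicted
    classes; each link's value is the number of examples flowing
    between the two, so misclassifications show up as off-diagonal flows."""
    flows = Counter(zip(y_true, y_pred))
    nodes = ([f"true:{c}" for c in sorted(set(y_true))] +
             [f"pred:{c}" for c in sorted(set(y_pred))])
    index = {name: i for i, name in enumerate(nodes)}
    links = {
        "source": [index[f"true:{t}"] for t, p in flows],
        "target": [index[f"pred:{p}"] for t, p in flows],
        "value": list(flows.values()),
    }
    return nodes, links
```

Running the same aggregation for two classifiers side by side gives the comparative view the abstract mentions: differing link values expose where the models disagree.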
In order to achieve high-fidelity reproduction of colors, purposeful intervention is needed when colors are converted, so color gamut mapping, the main technology for ensuring visual matching, is used during this process. Accurate visualization of the out-of-gamut colors between images and devices plays an important role in color gamut mapping, so this paper proposes a method for 3D visualization of out-of-gamut colors in graphic communication based on the segment maxima algorithm. The method has two main steps. First, the device color gamut boundary points are obtained from the color conversion information in the digital output device profile, so as to generate an accurate device gamut. Second, out-of-gamut points are obtained by comparing the device gamut with the digital image gamut, so that the out-of-gamut area can be presented in the digital image and the image gamut as a cloud of points. The advantage of this method is that it allows predicting in advance whether the image gamut exceeds the output device gamut, which helps select an appropriate color gamut mapping algorithm according to the result of the out-of-gamut judgment, so as to realize the accurate reproduction of colors.
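The comparison step can be illustrated with a deliberately simplified sketch, assuming CIELAB coordinates (the full segment maxima method typically partitions the color space along both hue angle and lightness; this sketch uses hue angle only, and all names are illustrative rather than taken from the paper).

```python
import math

def segment_maxima(boundary_points, n_segments=16):
    """Crude gamut boundary descriptor: for each hue-angle segment in
    the CIELAB a*-b* plane, keep the maximum chroma found among the
    device gamut boundary points."""
    two_pi = 2 * math.pi
    maxima = [0.0] * n_segments
    for L, a, b in boundary_points:
        # Hue angle, normalized to [0, 2*pi), mapped to a segment index.
        seg = min(int((math.atan2(b, a) % two_pi) / two_pi * n_segments),
                  n_segments - 1)
        maxima[seg] = max(maxima[seg], math.hypot(a, b))
    return maxima

def out_of_gamut(image_points, maxima):
    """Return the image colors whose chroma exceeds the device maximum
    recorded for their hue-angle segment."""
    two_pi = 2 * math.pi
    n = len(maxima)
    return [(L, a, b) for L, a, b in image_points
            if math.hypot(a, b) >
            maxima[min(int((math.atan2(b, a) % two_pi) / two_pi * n), n - 1)]]
```

The flagged points correspond to the "out-of-gamut cloud dots" the abstract describes; a 3D view would render them inside the image gamut volume.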
A playbook in American Football can consist of hundreds of plays, and learning each play with its corresponding assignments and responsibilities is a big challenge for the players. In this paper we propose a teaching tool for coaches in American Football, based on computer vision and visualization techniques, which eases the learning process and helps the players gain deeper knowledge of the underlying concepts. Coaches can create, manipulate, and animate plays with adjustable parameters that affect the player actions in the animation. The general player behaviors and interactions between players are modeled based on expert knowledge. The final goal of the framework is to compare the theoretical concepts with their practical implementation in training and games, using computer vision algorithms that extract spatio-temporal motion patterns from corresponding real video material. First results indicate that the software can be used effectively by coaches and that the animation system can increase the players' understanding of critical moments of the play.
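As a toy illustration of the kind of adjustable animation parameter such a tool exposes (this sketch is entirely hypothetical and not the authors' model), a player's route can be animated as motion along a polyline at a coach-tunable speed:

```python
import math

def player_position(route, speed, t):
    """Position of a player moving at `speed` along a polyline `route`
    (a list of (x, y) waypoints) after `t` time units, clamped to the
    route's end. Changing `speed` changes the timing of the play."""
    dist = speed * t
    for (x0, y0), (x1, y1) in zip(route, route[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0:
            continue  # skip duplicate waypoints
        if dist <= seg:
            f = dist / seg  # fraction of the way along this segment
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        dist -= seg
    return route[-1]
```

Sampling this function per frame for every player yields the animated play; the extracted real-world motion patterns could then be overlaid for comparison.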
A visual system cannot process everything with full fidelity, nor, in a given moment, perform all possible visual tasks. Rather, it must lose some information, and prioritize some tasks over others. The human visual system has developed a number of strategies for dealing with its limited capacity. This paper reviews recent evidence for one strategy: encoding the visual input in terms of a rich set of local image statistics, where the local regions grow — and the representation becomes less precise — with distance from fixation. The explanatory power of this proposed encoding scheme has implications for another proposed strategy for dealing with limited capacity: that of selective attention, which gates visual processing so that the visual system momentarily processes some objects, features, or locations at the expense of others. A lossy peripheral encoding offers an alternative explanation for a number of phenomena used to study selective attention. Based on lessons learned from studying peripheral vision, this paper proposes a different characterization of capacity limits as limits on decision complexity. A general-purpose decision process may deal with such limits by "cutting corners" when the task becomes too complicated.
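The pooling scheme described above can be illustrated with a deliberately simplified one-dimensional sketch (my own toy construction, not any of the models discussed in the literature): pooling regions grow with distance from fixation, and each region is summarized by a single statistic, so precision drops with eccentricity.

```python
def pooled_stats(signal, fixation, growth=0.5):
    """Toy 1-D illustration of eccentricity-dependent pooling: split a
    signal into regions whose width grows with distance from the
    fixation index, summarizing each region by its mean (a stand-in
    for the richer sets of local image statistics discussed above).
    Returns (start, end, mean) triples covering the whole signal."""
    regions = []
    pos = fixation                      # sweep rightward from fixation
    while pos < len(signal):
        size = int(growth * (pos - fixation)) + 1
        chunk = signal[pos:pos + size]
        regions.append((pos, pos + len(chunk), sum(chunk) / len(chunk)))
        pos += size
    pos = fixation                      # sweep leftward from fixation
    while pos > 0:
        size = int(growth * (fixation - pos)) + 1
        start = max(0, pos - size)
        chunk = signal[start:pos]
        regions.insert(0, (start, pos, sum(chunk) / len(chunk)))
        pos = start
    return regions
```

Near fixation each sample gets its own region; far from fixation several samples collapse into one summary statistic, which is the lossiness the review attributes to peripheral encoding.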
Category search is a searching activity in which the user has an example image and searches for other images of the same category. This activity often requires appropriate keywords for the target categories, making it difficult to search for images without prior knowledge of those keywords. Text annotations attached to images are a valuable resource for helping users find appropriate keywords for the target categories. In this article we propose an image exploration system for category image search that does not require prior knowledge of category keywords. Our system integrates content-based and keyword-based image exploration and seamlessly switches exploration types according to user interests. The system enables users to learn target categories in both image and keyword representations through exploration activities. Our user study demonstrated the effectiveness of image exploration using our system, especially for searches of images from unfamiliar categories, compared to single-modality image search. © 2016 Society for Imaging Science and Technology.
The simulation of human behavior with avatars and agents in virtual reality (VR) has led to an explosion of training and educational research. The use of avatars (user-controlled characters) or agents (computer-controlled characters) may influence user engagement in emergency response and in training for emergency scenarios. Our proposed collaborative VR megacity environment offers the flexibility to run multiple scenarios and evacuation drills for disaster preparedness and response. Modeling such an environment is important because the real-time emergencies we may face in day-to-day life demand preparation for extreme events. These emergencies could result from fire, smoke, a gunman threat, or a bomb blast in a city block. The collaborative virtual environment (CVE) can act as a platform for training and decision making for SWAT teams, fire responders, and traffic clearance personnel. The novelty of our work lies in modeling behaviors (hostile, non-hostile, selfish, leader-following) for computer-controlled agents so that they can interact with user-controlled avatars in a CVE. We have used game creation as a metaphor for creating an experimental setup to study human behavior in a megacity for emergency response, decision-making strategies, and what-if scenarios. Our proposed collaborative VR environment includes both immersive and non-immersive environments. Participants can enter the CVE setup in the cloud and take part in the emergency evacuation drill, which leads to considerable cost advantages over large-scale, real-life exercises. We present two ways of controlling crowd behavior: the first defines rules for agents, and the second gives users, as avatars, controls to navigate the VR environment as autonomous agents. Our contribution lies in our approach of combining these two behavior approaches to perform virtual drills for emergency response and decision making.
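The rule-based side of such behavior modeling can be sketched generically as follows; the behavior names mirror those listed in the abstract, but the rules themselves (and all function names) are illustrative assumptions, not the paper's actual model.

```python
import math

def choose_move(agent, leader, exit_pos, behavior):
    """Return a unit direction vector for a computer-controlled agent,
    selected by a simple per-behavior rule. Positions are (x, y) pairs."""
    def toward(src, dst):
        dx, dy = dst[0] - src[0], dst[1] - src[1]
        d = math.hypot(dx, dy) or 1.0  # avoid division by zero
        return (dx / d, dy / d)

    if behavior == "leader-following":
        return toward(agent, leader)    # stay with the designated leader
    if behavior == "selfish":
        return toward(agent, exit_pos)  # head straight for the exit
    if behavior == "hostile":
        dx, dy = toward(agent, exit_pos)
        return (-dx, -dy)               # move against the evacuation flow
    return (0.0, 0.0)                   # non-hostile: idle/default
```

In a drill, each computer-controlled agent would evaluate its rule every simulation tick, while user-controlled avatars navigate freely; the two populations then interact in the same CVE.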
Standard desktop setups, even with multiple-monitor configurations, provide only a somewhat small view of the data set at hand. In addition, typical mouse and keyboard input paradigms often result in less user-friendly configurations, especially when it comes to dealing with 3D data sets. For simulation environments in which participants or users are supposed to be exposed to a more realistic scenario with increased immersion, desktop configurations, such as fishtank VR, are not necessarily a viable choice. This paper aims at providing an overview of display technologies and input devices that provide a virtual environment paradigm suitable for a variety of visualization and simulation tasks. The focus is on cost-effective display technology that does not break a researcher's budget. The software framework utilizing these displays combines different visualization and graphics packages into an easy-to-use software environment that runs readily on all of these displays without changes to the software.