Keywords: AGENT-BASED SIMULATION, ADVECTION, APPROXIMATION, ACTIVE LEARNING, BIG GRAPHS, CLIPPING, RAY TRACING, CLUSTERING, CLASSIFICATION, COLLABORATIVE VIRTUAL ENVIRONMENT, CORRELATION, DECLARATIVE, EXPLORATORY IMAGE SEARCH, EMERGENCY EVACUATION, FLOW, FLOW VISUALIZATION, FLOW PATHS, FLUID SIMULATION, GRAPH DRAWING, GRAPH PROPERTIES, GRAPH SAMPLING, GPGPU, GUIDES, HIERARCHICAL STRUCTURES, HUMAN BEHAVIOR, HUMAN-COMPUTER INTERACTION, IMMERSIVE DISPLAY SYSTEMS, LAGRANGIAN, LABELING, MOTION CAPTURE DATA, NODE CONNECTIVITIES, NEUROSURGICAL TREATMENT PLANNING, PEER-TO-PEER (P2P) NETWORKS, PARTICLE DATA, SIMULATION, SCIENTIFIC VISUALIZATION, TOPIC MODELING, TEXT ANALYSIS, TAXONOMY, USER INTERFACE, UNCERTAINTY, VISUALIZATION, VIRTUAL ENVIRONMENT, VECTOR FIELD VISUALIZATION, VISUAL ANALYTICS, VIRTUAL REALITY, VOLUME RENDERING

Pages 1 - 4,  © Society for Imaging Science and Technology 2017
Digital Library: EI
Published Online: January  2017
Pages 5 - 11,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

Weather scientists seek to better understand atmospheric conditions. We propose a new tool to detect the most significant associations between variables in multidimensional, multivariate, time-varying climate datasets. In this case, we represent the correlation between variables, the uncertainty between different members within ensembles, and the results of several clustering methods. The climate dataset is collected at different time steps and locations. One of the most important research questions for weather scientists is the relationship between various variables at different time steps or dissimilar spatial locations. In this paper, we present a set of techniques to evaluate the correlation and association between different variables within a time step and spatial location. In other words, we perform static analysis on a single point in space-time, then extend that analysis in the temporal or spatial dimension(s), followed by an aggregation of the individual results to obtain an "overall" correlation. We created a tool that can be used to visualize the correlation and uncertainty not only between two time series across all ensembles, but also between spatial locations. Mini-batch K-Means clustering is applied to these datasets to identify the most substantial patterns within them. We study the Pearson correlation and integrate glyphs and color mapping into our design to demonstrate the trend of changing correlation values for a single variable, pair, or triple of variables. Statistical calculations are applied to derive an accurate interpretation of the time-varying correlations between members within all of the ensembles, as well as the uncertainty of the correlation values. The uncertainty visualizations provide insight into the effects of parameter perturbation, sensitivity to initial conditions, and inconsistencies in model outputs. To evaluate the tool, we apply this technique to a climatology dataset.
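
As a rough illustration of the two computations this abstract names, the sketch below derives per-member Pearson correlations between two ensemble variables (with their spread as a simple uncertainty estimate) and applies mini-batch K-Means to surface dominant patterns. The synthetic data, variable names, and cluster count are illustrative assumptions, not the paper's dataset or parameters.

# Minimal sketch: Pearson correlation across ensemble members plus
# mini-batch K-Means clustering. All data here is synthetic.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
# two climate variables, 20 ensemble members, 100 time steps each
temp = rng.normal(size=(20, 100)).cumsum(axis=1)
humid = 0.6 * temp + rng.normal(size=temp.shape)  # partially correlated

# per-member Pearson correlation between the two variables over time
r = np.array([np.corrcoef(t, h)[0, 1] for t, h in zip(temp, humid)])
print(f"mean r = {r.mean():.2f}, spread (uncertainty) = {r.std():.2f}")

# mini-batch K-Means over the member time series to expose patterns
labels = MiniBatchKMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(temp)
print("cluster sizes:", np.bincount(labels))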

Digital Library: EI
Published Online: January  2017
Pages 12 - 21,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

Particle-based fluid simulation (PFS), such as Smoothed Particle Hydrodynamics (SPH) and Position-based Fluids (PBF), is a mesh-free method that has been widely used in various fields, including astrophysics, mechanical engineering, and biomedical engineering, for the study of liquid behaviors under different circumstances. Due to its meshless nature, most analysis techniques developed for mesh-based data need to be adapted for the analysis of PFS data. In this work, we study a number of flow analysis techniques and their extension to PFS data analysis, including the FTLE approach, Jacobian analysis, and an attribute accumulation framework. In particular, we apply these analysis techniques to free-surface fluids. Using a number of PFS-simulated flows with different parameters and boundary settings, we demonstrate that these analyses can reveal interesting underlying flow patterns that would otherwise be hard to see. In addition, we point out that an in-situ analysis framework performing these analyses could potentially be used to guide adaptive PFS to allocate computation and storage to the regions of interest during the simulation.
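
One of the named techniques, FTLE, can be sketched compactly: seed particles on a grid, advect them, estimate the flow-map gradient by finite differences, and take the largest eigenvalue of the Cauchy-Green tensor. The analytic toy velocity field below is an assumption standing in for velocities interpolated from SPH/PBF particle data.

# Minimal FTLE sketch on a grid of tracers; the velocity field is a toy
# stand-in for interpolated particle velocities.
import numpy as np

def velocity(p):
    # steady gyre-like field used purely for illustration
    x, y = p[..., 0], p[..., 1]
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return np.stack([u, v], axis=-1)

n, T, dt = 64, 1.0, 0.01
xs = np.linspace(0.0, 1.0, n)
seeds = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)
end = seeds.copy()
for _ in range(int(T / dt)):              # forward-Euler advection
    end = end + dt * velocity(end)

dx = xs[1] - xs[0]
dphi_dx = np.gradient(end, dx, axis=0)    # d(end)/d(x0)
dphi_dy = np.gradient(end, dx, axis=1)    # d(end)/d(y0)
F = np.stack([dphi_dx, dphi_dy], axis=-1) # flow-map gradient, shape (n, n, 2, 2)
C = np.einsum("...ki,...kj->...ij", F, F) # Cauchy-Green tensor F^T F
ftle = np.log(np.sqrt(np.linalg.eigvalsh(C)[..., -1])) / T
print("FTLE range:", float(ftle.min()), float(ftle.max()))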

Digital Library: EI
Published Online: January  2017
Pages 22 - 33,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

Correct guides, such as axes and legends, are an important part of creating an understandable visualization. Guides contextualize the other visuals by providing information about the source data and the analysis process. Despite guides' inherent ties to the analysis already specified, most visualization programming libraries do not reuse the existing specification. Automatic guide creation based on the analysis specification can be performed if the visualization program's semantics are well defined and proper metadata is supplied. This paper presents high-level execution semantics for visualization-supporting analysis. These semantics are used with selected metadata to automatically construct guides. The Stencil visualization system includes an implementation of the presented guide system. Stencil is used to explore advantages, limitations, and possible extensions of the proposed system. The principles presented can be applied to other visualization frameworks that include programmable analysis. Automatic guide creation simplifies the construction of visualizations and can ultimately lead to higher-quality visualizations.
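
To make the approach concrete, here is a toy sketch of guide derivation: a legend is generated by walking the metadata of a declarative color encoding instead of being authored by hand. The spec format is invented for illustration; Stencil's actual syntax and semantics differ.

# Toy sketch: derive a legend from an (invented) declarative encoding spec.
spec = {
    "encode": {
        "color": {
            "field": "surface",
            "scale": {"ocean": "#1f77b4", "land": "#2ca02c", "ice": "#e0e0e0"},
        }
    }
}

def derive_legend(spec):
    # one legend entry per mapping recorded in the color scale's metadata
    color = spec["encode"]["color"]
    return [(color["field"], value, swatch) for value, swatch in color["scale"].items()]

for field, value, swatch in derive_legend(spec):
    print(f"{field}: {swatch} -> {value}")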

Digital Library: EI
Published Online: January  2017
Pages 34 - 45,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

The characterization and abstraction of large multivariate time series data often pose challenges with respect to effectiveness or efficiency. Using the example of human motion capture data, challenges exist in creating compact solutions that still reflect semantics and kinematics in a meaningful way. We present a visual-interactive approach for the semi-supervised labeling of human motion capture data. Users can assign labels to the data, which can subsequently be used to represent the multivariate time series as sequences of motion classes. The approach combines multiple views supporting the user in the visual-interactive labeling process. Visual guidance concepts further ease the labeling process by propagating the results of supportive algorithmic models. The abstraction of motion capture data to sequences of event intervals allows overview and detail-on-demand visualizations even for large and heterogeneous data collections. The guided selection of candidate data for extending and improving the labeling closes the feedback loop of the semi-supervised workflow. We demonstrate the effectiveness and efficiency of the approach in two usage scenarios, taking visual-interactive learning and human motion synthesis as examples.
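
The semi-supervised core of such a workflow can be sketched briefly, assuming synthetic frame features and scikit-learn's LabelSpreading in place of the paper's own models and guidance views: a handful of user-assigned labels are propagated to the remaining frames via feature similarity.

# Minimal sketch: propagate two user-assigned motion labels to all frames.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(1)
walk = rng.normal(loc=0.0, size=(100, 6))   # stand-in features for one motion class
jump = rng.normal(loc=3.0, size=(100, 6))   # stand-in features for another
X = np.vstack([walk, jump])
y = np.full(len(X), -1)                     # -1 marks unlabeled frames
y[0], y[150] = 0, 1                         # two labels assigned by the user

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
print("propagated label counts:", np.bincount(model.transduction_))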

Digital Library: EI
Published Online: January  2017
Pages 46 - 57,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

The exploration of text document collections is a complex and cumbersome task. Clustering techniques can help group documents by content for the generation of overviews. However, the underlying clustering workflows, comprising preprocessing, feature selection, and clustering algorithm selection and parameterization, offer several degrees of freedom. Since no "best" clustering workflow exists, users have to evaluate clustering results with respect to the data and analysis tasks at hand. We present an interactive system for the creation and validation of text clustering workflows, with the goal of exploring document collections. The system allows users to control every step of the text clustering workflow. First, users are supported in the feature selection process via feature ranking based on feature selection metrics and via linguistic filtering (e.g., part-of-speech filtering). Second, users can choose between different clustering methods and their parameterizations. Third, the clustering results can be explored based on cluster content (documents and relevant feature terms) and cluster quality measures. Fourth, the results of different clusterings can be compared, and frequent document subsets in clusters can be identified. We validate the usefulness of the system with a usage scenario describing how users can explore document collections in a visual and interactive way.
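
A single pass through such a workflow might look like the sketch below, with TF-IDF plus stop-word filtering standing in for the paper's feature selection metrics and part-of-speech filtering, and K-Means as one of several possible clustering choices; the documents are invented.

# Compact stand-in for one clustering-workflow pass: features, clustering,
# and the most relevant terms per cluster for inspection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "stock markets rallied as bond yields fell",
    "central bank raises interest rates again",
    "new telescope images reveal distant galaxies",
    "astronomers detect water on an exoplanet",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for c, center in enumerate(km.cluster_centers_):
    top = center.argsort()[::-1][:3]        # highest-weighted terms per cluster
    print(f"cluster {c}:", [terms[i] for i in top])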

Digital Library: EI
Published Online: January  2017
Pages 58 - 69,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

Category search is a searching activity in which the user has an example image and searches for other images of the same category. This activity often requires appropriate keywords for the target categories, making it difficult to search for images without prior knowledge of those keywords. Text annotations attached to images are a valuable resource for helping users find appropriate keywords for the target categories. In this article, we propose an image exploration system for category image search that requires no prior knowledge of category keywords. Our system integrates content-based and keyword-based image exploration and seamlessly switches exploration types according to user interests. The system enables users to learn target categories in both image and keyword representations through exploration activities. Our user study demonstrated the effectiveness of image exploration using our system, especially for searching images of unfamiliar categories, compared to single-modality image search.
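
One plausible way to realize the switch between modalities, sketched under assumptions of our own (the blending weight and both similarity measures are illustrative, not the paper's method), is a weighted combination of content-based and keyword-based similarity:

# Illustrative blend of content similarity (cosine over image features)
# and keyword similarity (Jaccard over annotation tags).
import numpy as np

def combined_score(img_feat, query_feat, img_tags, query_tags, alpha=0.5):
    content = np.dot(img_feat, query_feat) / (
        np.linalg.norm(img_feat) * np.linalg.norm(query_feat))
    keyword = len(img_tags & query_tags) / max(len(img_tags | query_tags), 1)
    # alpha stands in for the user's current interest: 1.0 = purely
    # content-based exploration, 0.0 = purely keyword-based
    return alpha * content + (1.0 - alpha) * keyword

s = combined_score(np.array([0.9, 0.1]), np.array([0.8, 0.3]),
                   {"dog", "grass"}, {"dog", "park"}, alpha=0.7)
print(f"{s:.3f}")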

Digital Library: EI
Published Online: January  2017
Pages 70 - 77,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

The simulation of human behavior with avatars and agents in virtual reality (VR) has led to an explosion of training and educational research. The use of avatars (user-controlled characters) or agents (computer-controlled characters) may influence user engagement in emergency-response experiences and training for emergency scenarios. Our proposed collaborative VR megacity environment offers the flexibility to run multiple scenarios and evacuation drills for disaster preparedness and response. Modeling such an environment is important because real-life emergencies demand preparation for extreme events. These emergencies could be the result of fire, smoke, a gunman threat, or a bomb blast in a city block. The collaborative virtual environment (CVE) can act as a platform for training and decision making for SWAT teams, fire responders, and traffic clearance personnel. The novelty of our work lies in modeling behaviors (hostile, non-hostile, selfish, leader-following) for computer-controlled agents so that they can interact with user-controlled avatars in a CVE. We have used game creation as a metaphor for creating an experimental setup to study human behavior in a megacity for emergency response, decision-making strategies, and what-if scenarios. Our proposed collaborative VR environment includes both immersive and non-immersive environments. Participants can enter the CVE setup on the cloud and take part in an emergency evacuation drill, which leads to considerable cost advantages over large-scale, real-life exercises. We present two ways of controlling crowd behavior. The first defines rules for agents, and the second gives users, as avatars, controls to navigate the VR environment as autonomous agents. Our contribution lies in combining these two approaches to behavior in order to perform virtual drills for emergency response and decision making.
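
The rule-based side of the crowd control can be sketched as a dispatch over the four behavior types named above; the movement math is an illustrative assumption, not the paper's implementation.

# Toy sketch: one movement step per agent, chosen by behavior type.
import numpy as np

def step(pos, behavior, exit_pos, leader_pos, threat_pos, speed=0.1):
    if behavior == "leader-following":
        target = leader_pos
    elif behavior == "selfish":
        target = exit_pos                      # head straight for the exit
    elif behavior == "hostile":
        target = threat_pos                    # move toward the threat
    else:                                      # "non-hostile": flee the threat
        target = pos + (pos - threat_pos)
    d = target - pos
    n = np.linalg.norm(d)
    return pos if n < 1e-9 else pos + speed * d / n

agent = np.array([0.0, 0.0])
for behavior in ("selfish", "hostile", "non-hostile", "leader-following"):
    print(behavior, step(agent, behavior, exit_pos=np.array([5.0, 0.0]),
                         leader_pos=np.array([1.0, 1.0]),
                         threat_pos=np.array([-2.0, 3.0])))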

Digital Library: EI
Published Online: January  2017
Pages 78 - 88,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

Standard desktop setups, even with multiple-monitor configurations, provide only a relatively small view of the data set at hand. In addition, typical mouse and keyboard input paradigms often result in less user-friendly configurations, especially when it comes to dealing with 3D data sets. For simulation environments in which participants or users are supposed to be exposed to a more realistic scenario with increased immersion, desktop configurations such as fishtank VR are not necessarily a viable choice. This paper aims to provide an overview of different display technologies and input devices that provide a virtual environment paradigm suitable for a variety of visualization and simulation tasks. The focus is on cost-effective display technology that does not break a researcher's budget. The software framework utilizing these displays combines different visualization and graphics packages into an easy-to-use software environment that runs readily on all of these displays without changes to the software.

Digital Library: EI
Published Online: January  2017
Pages 89 - 98,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 1

Clipping is an important operation in the context of direct volume rendering for gaining an understanding of the inner structures of scientific datasets. Rendering systems often support volume clipping only with geometry types that can be described in a parametric form, or they employ costly multi-pass GPU approaches. We present a SIMD-friendly clipping algorithm for ray-traced direct volume rendering that is compatible with arbitrary geometric surface primitives, ranging from simple planes through quadric surfaces such as spheres to general triangle meshes. Thanks to a generic programming approach, our algorithm is in general not even limited to triangle or quadric primitives. Ray tracing complex geometric objects with a high primitive count requires the use of acceleration data structures. Our algorithm is based on the multi-hit query for traversing bounding volume hierarchies with rays. We provide efficient CPU and GPU implementations and present performance results.
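
The idea behind such clipping can be sketched schematically: the sorted intersections returned by a multi-hit query toggle the ray between inside and outside the clip geometry, and only the outside sub-intervals are handed to the volume integrator. A closed-form ray/sphere test below stands in for BVH multi-hit traversal over arbitrary primitives.

# Schematic sketch: clip a ray's integration interval against a sphere
# using all (sorted) hit points, as a multi-hit query would return them.
import numpy as np

def multi_hit_sphere(orig, direc, center, radius):
    # all positive ray parameters t where the (normalized) ray crosses the sphere
    oc = orig - center
    b = np.dot(oc, direc)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc <= 0.0:
        return []
    s = np.sqrt(disc)
    return sorted(t for t in (-b - s, -b + s) if t > 0.0)

def clipped_segments(t_near, t_far, hits):
    # walk the sorted hits; each crossing toggles inside/outside state
    edges = [t_near] + [t for t in hits if t_near < t < t_far] + [t_far]
    inside = len([t for t in hits if t <= t_near]) % 2 == 1
    segments = []
    for a, b in zip(edges, edges[1:]):
        if not inside:
            segments.append((a, b))    # keep sub-intervals outside the clip object
        inside = not inside
    return segments

hits = multi_hit_sphere(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, 5.0]), 1.0)
print(clipped_segments(0.0, 10.0, hits))   # -> [(0.0, 4.0), (6.0, 10.0)]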

Digital Library: EI
Published Online: January  2017
