Pages A01-1 - A01-4,  © Society for Imaging Science and Technology 2020
Digital Library: EI
Published Online: January  2020
Pages 374-1 - 374-11,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 1

Computational complexity is a limiting factor for visualizing large-scale scientific data. Most approaches to rendering large datasets focus on novel algorithms that leverage cutting-edge graphics hardware to provide users with an interactive experience. In this paper, we instead demonstrate foveated imaging, which allows interactive exploration on low-cost hardware by tracking a participant's gaze to drive the rendering quality of an image. Foveated imaging exploits the fact that the spatial resolution of the human visual system decreases dramatically away from the central point of gaze, allowing computational resources to be reserved for areas of importance. We demonstrate this approach using face tracking to identify the participant's gaze point for both vector and volumetric datasets, and we evaluate our results by comparing against traditional techniques. In our evaluation, we found a significant increase in computational performance with our foveated imaging approach while maintaining high image quality in regions of visual attention.
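As an illustration of the idea (not code from the paper), the sketch below computes a per-pixel sampling rate that decays with distance from a tracked gaze point; the exponential falloff and its parameters are assumed stand-ins for a perceptual acuity model.

```python
import numpy as np

def foveated_sample_rates(width, height, gaze_xy, full_rate=1.0,
                          min_rate=0.1, falloff=3.0):
    """Per-pixel sampling rate that decays with distance from the gaze point.

    gaze_xy is in normalized [0, 1] coordinates. The exponential falloff and
    its parameters are illustrative, not the acuity model used in the paper.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    gx, gy = gaze_xy[0] * width, gaze_xy[1] * height
    # Eccentricity: distance from the gaze point, normalized by the image diagonal.
    ecc = np.hypot(xs - gx, ys - gy) / np.hypot(width, height)
    return min_rate + (full_rate - min_rate) * np.exp(-falloff * ecc)

# Example: rendering effort concentrates around a gaze point at the image center.
rates = foveated_sample_rates(1920, 1080, gaze_xy=(0.5, 0.5))
print("sample rate at gaze:", rates.max(), "in the periphery:", rates.min())
```

A volume or vector-field renderer could then allocate ray samples or streamline detail in proportion to these rates, spending full effort only near the gaze point.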

Digital Library: EI
Published Online: January  2020
Pages 375-1 - 375-9,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 1

Developing machine learning models for image classification involves tasks such as model selection, layer design, and hyperparameter tuning to improve model performance. However, deep learning models offer insufficient interpretability, making it difficult to understand how they make predictions. To facilitate model interpretation, performance analysis at the class and instance levels, combined with model visualization, is essential. We herein present an interactive visual analytics system that provides a wide range of performance evaluations of different machine learning models for image classification. The proposed system aims to overcome these challenges by providing visual performance analysis at different levels and by visualizing misclassified instances. The system, which comprises five views, including ranking, projection, matrix, and instance list views, enables the comparison and analysis of different models through user interaction. Several use cases of the proposed system are described, and its application to the MNIST dataset is explained. Our demo app is available at https://chanhee13p.github.io/VisMlic/.
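The abstract does not give implementation details; the minimal sketch below only illustrates the kind of class-level and instance-level data that matrix and instance-list style views would consume: confusion matrices, per-class recall, and the indices of misclassified instances for two hypothetical models.

```python
import numpy as np

def class_level_report(y_true, y_pred, n_classes):
    """Confusion matrix, per-class recall, and misclassified instance indices.

    Illustrative preprocessing for matrix / instance-list style views;
    this is not code from the described system.
    """
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)            # accumulate (true, predicted) pairs
    recall = cm.diagonal() / np.maximum(cm.sum(axis=1), 1)
    misclassified = np.flatnonzero(y_true != y_pred)
    return cm, recall, misclassified

# Compare two hypothetical models on the same test labels.
y_true = np.array([0, 1, 2, 2, 1, 0])
pred_a = np.array([0, 1, 2, 1, 1, 0])
pred_b = np.array([0, 2, 2, 2, 1, 1])
for name, pred in [("model_a", pred_a), ("model_b", pred_b)]:
    cm, recall, wrong = class_level_report(y_true, pred, n_classes=3)
    print(name, "per-class recall:", recall.round(2), "errors at:", wrong)
```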

Digital Library: EI
Published Online: January  2020
Pages 376-1 - 376-13,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 1

We introduce a new platform-portable hash table and collision-resolution approach, HashFight, for use in visualization and data analysis algorithms. Designed entirely in terms of data-parallel primitives (DPPs), HashFight is atomics-free and consists of a single code base that can be invoked across a diverse range of architectures. To evaluate its hashing performance, we compare the single-node insert and query throughput of HashFight to that of two best-in-class GPU and CPU hash table implementations, using several experimental configurations and factors. Overall, HashFight maintains competitive performance across both modern and older generation GPU and CPU devices, which differ in computational and memory abilities. In particular, HashFight achieves stable performance across all hash table sizes, and has leading query throughput for the largest sets of queries, while remaining within a factor of 1.5X of the comparator GPU implementation on all smaller query sets. Moreover, HashFight performs better than the comparator CPU implementation across all configurations. Our findings reveal that our platform-agnostic implementation can perform as well as optimized, platform-specific implementations, which demonstrates the portable performance of our DPP-based design.
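The abstract gives only the high-level idea (an atomics-free, scatter/gather-style data-parallel insertion), so the following is a rough sketch of how such a round-based "fight" might look, with NumPy scatter and gather standing in for GPU data-parallel primitives; the rehashing scheme and table layout here are assumptions, not the paper's design.

```python
import numpy as np

def hashfight_insert(keys, table_size, max_rounds=32):
    """Atomics-free, round-based insertion in the spirit of data-parallel hashing.

    Each round, all still-active keys scatter their index into empty slots
    (an unsynchronized 'fight' in which one arbitrary write per slot survives),
    then gather to see whether they won. Losers retry with a rehashed slot in
    the next round. The hash and rehash below are illustrative choices.
    """
    table = np.full(table_size, -1, dtype=np.int64)    # -1 marks an empty slot
    active = np.arange(len(keys), dtype=np.int64)
    for rnd in range(max_rounds):
        if active.size == 0:
            break
        # Hash the still-active keys; the round-dependent XOR spreads retries.
        slots = ((keys[active] ^ (rnd * 40503)) * 2654435761) % table_size
        empty = table[slots] == -1
        # Scatter: unsynchronized writes into empty slots; one write per slot wins.
        table[slots[empty]] = active[empty]
        # Gather: a key won the fight if its own index is now stored at its slot.
        won = table[slots] == active
        active = active[~won]
    return table, active    # 'active' holds any keys that never claimed a slot

keys = np.random.randint(0, 1_000_000, size=10_000, dtype=np.int64)
table, unplaced = hashfight_insert(keys, table_size=20_000)
print("keys left unplaced after the fight:", unplaced.size)
```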

Digital Library: EI
Published Online: January  2020
Pages 387-1 - 387-11,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 1

Code repositories are a common way to archive software source code files. Understanding a code repository's content and history is important but can be difficult due to the complexity of code repositories. Most available tools are designed for users who actively maintain a code repository. In contrast, external developers need to assess the suitability of using a software library, e.g., whether its code repository has a healthy level of maintenance, and how much risk they face if they depend on that code in their own project. In this paper, we identify six risks associated with using a software library, we derive seven requirements for tools to assess these risks, and we contribute two dashboard designs derived from these requirements. The first dashboard is designed to assess a software library's usage suitability via its code repository, and the second dashboard visually compares usage-suitability information about multiple software libraries' code repositories. Using four popular libraries' code repositories, we show that these dashboards are effective for understanding and comparing key aspects of software library usage suitability. We further compare our dashboard to a typical code repository user interface and show that our dashboard is more succinct and requires less work.
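The specific risks and requirements are defined in the paper; purely as a hypothetical example of the kind of repository signals such a dashboard might aggregate, the sketch below computes a few simple maintenance-health indicators from (author, timestamp) commit records.

```python
from collections import Counter
from datetime import datetime, timedelta

def maintenance_indicators(commits, now=None, window_days=365):
    """Toy maintenance-health signals from (author, timestamp) commit records.

    Recent commit count, active contributors, and days since the last commit
    are hypothetical dashboard inputs, not the metrics defined in the paper.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    recent = [(a, t) for a, t in commits if t >= cutoff]
    authors = Counter(a for a, _ in recent)
    last = max((t for _, t in commits), default=None)
    return {
        "commits_last_year": len(recent),
        "active_contributors": len(authors),
        "days_since_last_commit": (now - last).days if last else None,
    }

commits = [("alice", datetime(2019, 11, 2)), ("bob", datetime(2019, 6, 20)),
           ("alice", datetime(2018, 1, 5))]
print(maintenance_indicators(commits, now=datetime(2020, 1, 1)))
```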

Digital Library: EI
Published Online: January  2020
Pages 388-1 - 388-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 1

When users are presented with many search results, finding information or patterns within the data poses a challenge. This paper presents the design, implementation, and evaluation of a visualization enabling users to browse through voluminous information and comprehend the data. Implemented with the JavaScript library Data Driven Documents (D3), the visualization represents the search results as clusters of similar documents grouped into bubbles, with their contents depicted as word clouds. Highly interactive features such as touch gestures and intuitive menu actions allow for expeditious exploration of the search results. Other features include drag-and-drop functionality for articles among bubbles, merging of nodes, and refining the search by selecting specific terms or articles to receive more similar results. A user study consisting of a survey questionnaire demonstrated that, in comparison to a standard text browser for viewing search results, the visualization performs comparably or better on most metrics.
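The described system is implemented in JavaScript with D3, and its clustering details are not given in the abstract; the Python sketch below only illustrates the underlying grouping step, clustering result documents by textual similarity and extracting the top-weighted terms a word cloud for each bubble might display. The cluster count and parameters are illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_results(docs, n_clusters=3, top_terms=5):
    """Group search results by textual similarity and pick word-cloud terms.

    A minimal stand-in for the grouping behind a bubble/word-cloud view;
    not the clustering used by the described system.
    """
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    terms = vec.get_feature_names_out()
    clusters = {}
    for c in range(n_clusters):
        # Rank terms by their summed TF-IDF weight within the cluster.
        weights = X[labels == c].sum(axis=0).A1
        clusters[c] = [terms[i] for i in weights.argsort()[::-1][:top_terms]]
    return labels, clusters
```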

Digital Library: EI
Published Online: January  2020
Pages 389-1 - 389-5,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 1

A novel human-computer interface is introduced, based on tongue and lip movements and using video data from a commercially available camera. The size and direction of the movements are extracted and can be used to trigger cursor actions or other relevant activities. The movement detection is based on convolutional neural networks. The applicability of the proposed solution is shown on the ASSISLT system [1], which aims to support speech therapy for adults and children with inborn and acquired motor speech disorders. The system focuses on individual treatment using exercises that improve tongue motion and thus articulation. It offers an adjustable set of exercises whose proper performance is motivated using augmented reality. Automatic evaluation of the performance of the therapeutic movements allows the therapist to objectively follow the progress of the treatment.
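As a hypothetical illustration of the movement-extraction step (the CNN detector itself is part of the described system, not shown here), the sketch below converts per-frame tongue-tip positions into movement sizes and coarse directions; the dead zone and four-way direction mapping are assumed choices.

```python
import numpy as np

def movement_from_landmarks(positions, dead_zone=2.0):
    """Map per-frame tongue-tip positions (pixels) to movement size and direction.

    'positions' would come from a CNN-based detector as in the described system;
    the dead zone and the four-way mapping here are illustrative, not the paper's.
    """
    p = np.asarray(positions, dtype=float)
    deltas = np.diff(p, axis=0)                    # frame-to-frame displacement
    sizes = np.linalg.norm(deltas, axis=1)
    directions = []
    for (dx, dy), size in zip(deltas, sizes):
        if size < dead_zone:                       # ignore jitter below the dead zone
            directions.append("rest")
        elif abs(dx) >= abs(dy):
            directions.append("right" if dx > 0 else "left")
        else:
            directions.append("down" if dy > 0 else "up")
    return sizes, directions

sizes, dirs = movement_from_landmarks([(100, 80), (101, 80), (110, 82), (110, 95)])
print(list(zip(np.round(sizes, 1), dirs)))
```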

Digital Library: EI
Published Online: January  2020
