Page 010101-1,  © Society for Imaging Science and Technology 2016
Digital Library: JIST
Published Online: January  2016
Pages 010401-1 - 010401-10,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

Trilinear interpolation is a method of multivariate interpolation on a three-dimensional regular grid. It approximates the value of an intermediate point from data on the lattice points, and is therefore frequently used for display characterization with 3D lookup tables (3D LUTs). However, large color errors usually arise from the nonlinear relationship between the source RGB space and the destination CIELAB space. In this article, display characterization is improved by modifying the traditional trilinear interpolation model. First, the Yule–Nielsen n-factor is applied to the destination functions to reduce the nonlinearity between the source and destination color spaces. Afterward, different calibrating curves are developed to calculate effective values from the source RGB values. The input/source RGB values are usually called nominal values, and the effective values can be regarded as optimized RGB values that improve the agreement between the predicted and measured destination CIELAB values. In the experiment, a Toshiba M5 liquid crystal display is characterized using the modified trilinear interpolation model, and the forward and inverse characterization errors of different methods are calculated and compared. The evaluation results demonstrate that both the average and the maximum color errors decrease significantly when calibrating curve III (one of the three types of curves developed) is employed in combination with the optimal n-factor. Thus, the method proposed in this article for developing effective calibrating curves and finding optimal n-factors can be adopted during display characterization.
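
As a concrete illustration of the interpolation step described in this abstract, the sketch below implements plain trilinear interpolation on a 3D LUT and shows one way an n-factor-style power transform could be applied to the interpolated destination values. The lattice data, the n-factor value of 2.0, and the signed-power handling are illustrative assumptions, not the article's measured LUT or its calibrating curves.

```python
import numpy as np

def trilinear(lut, rgb):
    """Interpolate lut (N x N x N x 3) at a normalized RGB point in [0, 1]^3."""
    n = lut.shape[0] - 1
    p = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * n
    i0 = np.minimum(np.floor(p).astype(int), n - 1)   # lower corner of the cell
    f = p - i0                                         # fractional position in the cell
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i0[0] + dr, i0[1] + dg, i0[2] + db]
    return out

def signed_power(x, p):
    """Power function that preserves sign (CIELAB a*, b* can be negative)."""
    return np.sign(x) * np.abs(x) ** p

def trilinear_yn(lut, rgb, n_factor=2.0):
    """Interpolate n-th roots of the destination values, then undo the root,
    in the spirit of the Yule-Nielsen n-factor described in the abstract."""
    return signed_power(trilinear(signed_power(lut, 1.0 / n_factor), rgb), n_factor)

# Example: a 9x9x9 lattice whose "measurements" are simply the lattice RGB itself.
grid = np.linspace(0, 1, 9)
identity_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(trilinear(identity_lut, (0.3, 0.7, 0.2)))   # ~[0.3, 0.7, 0.2]
```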

Digital Library: JIST
Published Online: January  2016
Pages 010402-1 - 010402-9,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

An automatic system to extract terrestrial objects from aerial imagery has many applications in a wide range of areas. In general, however, this task has been performed manually by human experts, making it very costly and time consuming. There have been many attempts to automate the task, but many existing works rely on class-specific features and classifiers. In this article, the authors propose a convolutional neural network (CNN)-based building and road extraction system. It takes raw pixel values of aerial imagery as input and outputs predicted three-channel label images (building–road–background). With CNNs, both the feature extractors and the classifiers are constructed automatically. The authors propose a new technique to train a single CNN efficiently to extract multiple kinds of objects simultaneously. Finally, they show that the proposed technique improves prediction performance and surpasses state-of-the-art results on a publicly available aerial imagery dataset.
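
For readers who want a concrete picture of the input/output described here, the following is a minimal PyTorch sketch of a fully convolutional network that maps raw RGB patches to a per-pixel three-class (background/building/road) prediction trained with a single loss. The layer sizes, class indices, and dummy data are illustrative assumptions and do not reproduce the authors' architecture or training setup.

```python
import torch
import torch.nn as nn

class PatchLabeler(nn.Module):
    """Toy fully convolutional network: RGB in, per-pixel class scores out."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolution produces one score per class at every pixel.
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A single network is trained on all classes at once via per-pixel
# cross-entropy against an integer label image (0 = background, 1 = building, 2 = road).
model = PatchLabeler()
images = torch.rand(4, 3, 64, 64)             # dummy batch of RGB patches
labels = torch.randint(0, 3, (4, 64, 64))     # dummy per-pixel class labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```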

Digital Library: JIST
Published Online: January  2016
Pages 010403-1 - 010403-5,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

The human visual system produces the sensation of stereopsis with the help of depth cues. Disparity and blur (defocus) are widely accepted as the most important depth cues. The two cues are also known to have important effects on the visual discomfort caused by viewing stereoscopic content. However, the relationship between the combination of the two cues and visual comfort has rarely been investigated, especially when the effects of proximity cues, e.g., looming and motion parallax, are considered. In this study, various stereoscopic videos were compared with their corresponding planar videos. Each stereoscopic video contained a set of depth cues, and the levels of the cues varied from video to video. Subjects were asked to judge the relative visual comfort (RVC) for each pair, where RVC denotes the visual comfort of the first stimulus in the pair relative to the second. The results showed that both disparity and blur have significant effects on RVC. The effects of disparity did not vary significantly with blur and proximity cues, and the effects of blur did not vary significantly with disparity and proximity cues. Based on these findings, the authors built a regression model that estimates RVC from depth cues.
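
As a toy illustration of the final step (estimating RVC from depth-cue levels), the Python sketch below fits a linear model by ordinary least squares. The cue levels and comfort scores are synthetic placeholders; the article's actual model form and coefficients are not reproduced here.

```python
import numpy as np

# Synthetic placeholder data: one row per video pair.
disparity = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 0.3, 0.7, 0.9])   # disparity level
blur      = np.array([0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.2, 0.8])   # blur level
rvc       = np.array([0.9, 0.6, 0.5, 0.1, 0.0, 0.4, 0.3, -0.2])  # judged comfort

# Fit RVC ~ b0 + b1 * disparity + b2 * blur by least squares.
X = np.column_stack([np.ones_like(disparity), disparity, blur])
coef, *_ = np.linalg.lstsq(X, rvc, rcond=None)
predicted = X @ coef
print("intercept, disparity weight, blur weight:", coef)
```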

Digital Library: JIST
Published Online: January  2016
Pages 010404-1 - 010404-11,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

Simulation and modeling of turbulent flow, and of turbulent reacting flow in particular, involve solving for and analyzing time-dependent and spatially dense tensor quantities, such as turbulent stress tensors. Interactive visual exploration of these tensor quantities can effectively steer the computational modeling of combustion systems. In this article, the authors analyze the challenges of dense symmetric-tensor visualization as applied to turbulent combustion calculation, most notably the size and density of the datasets. Together with domain experts, they analyze the feasibility of using several established tensor visualization techniques in this application domain. They further examine and propose visual descriptors for volume rendering of the data. Among these novel descriptors, one is a density-gradient descriptor that yields Schlieren-style images, and another is a classification descriptor inspired by machine-learning techniques. The result is a hybrid visual analysis tool for the debugging, benchmarking, and verification of models and solutions in turbulent combustion. The authors demonstrate the analysis tool on two example configurations, report feedback from combustion researchers, and summarize the design lessons learned.
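
To make the density-gradient descriptor mentioned above a bit more tangible: Schlieren photography responds to gradients of refractive index, and hence of density, so mapping the gradient magnitude of a density field through a gray or opacity transfer function yields a Schlieren-like rendering. The sketch below computes that per-voxel descriptor for a synthetic volume; the random field stands in for real combustion data, and the transfer-function step is only indicated in a comment.

```python
import numpy as np

# Placeholder density volume; a real dataset would come from the combustion solver.
density = np.random.default_rng(0).random((64, 64, 64))

# Per-voxel density-gradient descriptor (Schlieren-style).
gx, gy, gz = np.gradient(density)
schlieren = np.sqrt(gx**2 + gy**2 + gz**2)
schlieren /= schlieren.max()   # normalize to [0, 1] for a transfer function

# A volume renderer would map `schlieren` through a gray/opacity transfer
# function, so regions of steep density change stand out, as in Schlieren imagery.
```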

Digital Library: JIST
Published Online: January  2016
Pages 010405-1 - 010405-13,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

Studying the behavior of individual members in communities of dynamic networks can help neuroscientists understand how interactions between neurons in brain networks change over time. Visualizing those temporal features is challenging, especially for networks embedded within spatial structures, such as brain networks. In this article, the authors present the design of SwordPlots, an interactive multi-view visualization system that assists neuroscientists in exploring dynamic brain networks from multiple perspectives. The visualization helps neuroscientists understand how the functional behavior of the brain changes over time, how such behavior relates to the spatial structure of the brain, and how communities of neurons with similar functionality evolve over time. To evaluate the application, the authors asked neuroscientists to use SwordPlots to examine four different mouse brain datasets. The feedback indicates that the visualization design enables neuroscientists to gain new insights into the properties of dynamic brain networks.

Digital Library: JIST
Published Online: January  2016
Pages 010406-1 - 010406-8,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

Visualization is an important task in data analytics, as it allows researchers to see patterns in the data instead of reading through extensive raw data. The ability to interact with the visualizations is essential, since it lets users explore data intuitively and find meaning and patterns more efficiently. Interactivity, however, becomes progressively more difficult as the size of the dataset increases. This project begins by leveraging existing web-based data visualization technologies and extends their functionality through parallel processing. The methodology uses state-of-the-art techniques, such as Node.js, to split the visualization rendering and the user interactivity controls across a client–server infrastructure without rebuilding the visualization technologies. The approach minimizes data transfer by performing the rendering step on the server, while allowing high-performance computing systems to be used to render the visualizations more quickly. To improve the scaling of the system with larger datasets, parallel processing and visualization optimization techniques are used. This work uses parameter space data generated from mindmodeling.org to showcase the authors' methodology for handling large-scale datasets while retaining interactivity and user friendliness.
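
The client–server split described above can be summarized in a short sketch. The Python server below stands in for the project's Node.js infrastructure, which is not reproduced here: it accepts posted view parameters, performs the heavy rendering step near the data, and returns only the encoded image, so interaction events rather than raw datasets cross the network. The render_view stub and the port are placeholders.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_view(params: dict) -> bytes:
    """Placeholder: a real server would rasterize the large dataset here,
    possibly on a high-performance computing backend, and return image bytes."""
    return b"PNGBYTES"

class RenderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        image = render_view(params)            # heavy work stays server-side
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()
        self.wfile.write(image)                # only finished pixels cross the wire

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RenderHandler).serve_forever()
```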

Digital Library: JIST
Published Online: January  2016
Pages 010407-1 - 010407-16,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

This article presents a system for automatic language identification of text regions in heterogeneous and complex documents. The system can process documents with mixed printed and handwritten text and various layouts. To handle this problem, the authors propose a system that performs the following sub-tasks: writing type identification (printed/handwritten), script identification, and language identification. The methods for writing type recognition and script discrimination are based on connected-component analysis, while the language identification approach relies on statistical text analysis, which requires a recognition engine. The authors evaluate the system on a new public dataset and present detailed results for the three tasks. Their system outperforms the Google plug-in evaluated on ground-truth transcriptions of the same dataset.
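
The three cascaded sub-tasks listed above can be read as a simple pipeline, sketched below in Python. Every stage is a stub: the real system uses connected-component features for the first two decisions and a recognition engine plus statistical text analysis for the last, and all names, classes, and the toy word list here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TextRegion:
    pixels: object          # cropped image of one text region
    transcript: str = ""    # filled in by a recognition engine

def identify_writing_type(region: TextRegion) -> str:
    # Real system: statistics of connected components (regularity, alignment).
    return "printed"

def identify_script(region: TextRegion) -> str:
    # Real system: connected-component analysis discriminating scripts.
    return "latin"

def identify_language(region: TextRegion, script: str) -> str:
    # Real system: run a recognition engine for `script`, then apply a
    # statistical text model to the transcript; this toy check is a stand-in.
    words = set(region.transcript.lower().split())
    return "french" if words & {"le", "la", "et"} else "english"

def process(region: TextRegion) -> dict:
    writing = identify_writing_type(region)
    script = identify_script(region)
    return {"writing": writing, "script": script,
            "language": identify_language(region, script)}

print(process(TextRegion(pixels=None, transcript="Le chat et la souris")))
```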

Digital Library: JIST
Published Online: January  2016
Pages 010408-1 - 010408-10,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

In this article the authors investigated a visualization tool (uVis) for end-user developers to see how end users actually use it. The tool was an early version, and the investigation helped the authors improve it. The investigation showed that users appreciated the simple formula language, the coordinated panels, and the drag-and-drop mechanism. The most important thing for them, however, was the immediate response when they changed something, for instance part of a formula: the entire visualization was updated immediately, without having to switch from a development view to a production view.

With uVis, developers construct a visualization from simple visual components such as boxes, curvePoints, and textboxes. All component properties, such as Top and BackColor, can be complex formulas similar to spreadsheet formulas. The operands in a formula can refer to relational data in a database, to other visual objects, and to dialog data provided by the user. A special Rows property can bind to a database query and make the component replicate itself for each row in the query result. In this way, traditional as well as novel visualizations can be constructed.
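
This is not uVis code, but the Rows/formula mechanism described above can be mimicked in a few lines of Python: each property of a template component is a formula over the bound row, and the Rows query makes the component replicate once per row. The table, property names, and formulas below are made up for illustration.

```python
import sqlite3

# Toy relational data standing in for the user's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients(id INTEGER, age INTEGER)")
conn.executemany("INSERT INTO patients VALUES (?, ?)", [(1, 34), (2, 51), (3, 67)])

# A "component template": Rows is a data binding, the rest are formulas.
box_template = {
    "Rows": "SELECT id, age FROM patients",
    "Top": lambda row: 20 * row["id"],
    "Width": lambda row: 2 * row["age"],
    "BackColor": lambda row: "red" if row["age"] > 60 else "gray",
}

def replicate(template, conn):
    """Create one component instance per row of the Rows query."""
    cur = conn.execute(template["Rows"])
    cols = [c[0] for c in cur.description]
    boxes = []
    for values in cur:
        row = dict(zip(cols, values))
        boxes.append({name: f(row) for name, f in template.items() if callable(f)})
    return boxes

for box in replicate(box_template, conn):
    print(box)   # one box per patient row, properties computed from the formulas
```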

The most serious usability problems were data binding and failing to notice errors (errors were shown in an error list, but not in the formula that contained them). There were many other usability problems; removing them would speed up learning and make the tool more successful.

Digital Library: JIST
Published Online: January  2016
Pages 010409-1 - 010409-13,  © Society for Imaging Science and Technology 2016
Volume 60
Issue 1
Abstract

Models that researchers often use for the dehazing task are based on the Koschmieder law. In this article, we use the STRESS (Spatio-Temporal Retinex-inspired Envelope with Stochastic Sampling) model for the dehazing task. We demonstrate theoretically and empirically how the parameters of the STRESS framework can be set for dehazing. We then propose a new algorithm for haze removal, based on the STRESS framework, which combines edge detection and a Hidden Markov Model (HMM) to solve the problem. Experiments based on quantitative metrics and psychophysical tests show that our approach restores more visibility than most state-of-the-art approaches.
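
For context on the Koschmieder law mentioned above: it models a hazy observation as I(x) = J(x) * t(x) + A * (1 - t(x)), where J is the scene radiance, t the transmission, and A the atmospheric light, so the scene can be recovered as J = (I - A) / t + A once A and t are estimated. The Python sketch below shows only this inversion step with placeholder inputs; it does not implement the article's STRESS-based estimation.

```python
import numpy as np

def dehaze_koschmieder(hazy, airlight, transmission, t_min=0.1):
    """Invert the Koschmieder law: J = (I - A) / t + A, with t clipped away from 0."""
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)

# Placeholder inputs; a real method would estimate A and t from the hazy image.
hazy = np.random.default_rng(1).random((4, 4, 3))   # hazy RGB image in [0, 1]
airlight = np.array([0.9, 0.9, 0.95])               # assumed atmospheric light A
transmission = np.full((4, 4), 0.6)                 # assumed transmission map t
restored = dehaze_koschmieder(hazy, airlight, transmission)
```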

Digital Library: JIST
Published Online: January  2016