The light-field display (LfD) radiance image is a raster description of a light field in which every pixel represents a unique ray within a 3D volume. The radiance image can be projected through an array of micro-lenses to form a perspective-correct 3D aerial image visible to all viewers within the LfD's projection frustum. The synthetic LfD radiance image is comparable to the radiance image captured by a plenoptic (light-field) camera, but is rendered from a 3D model or scene. Synthetic radiance image rasterization is an example of extreme multi-view rendering, as the 3D scene must be rendered from many viewpoints (thousands to millions) into small viewports for each update of the light-field display. GPUs and their accompanying APIs (OpenGL, DirectX, Vulkan), however, generally expect to render a 3D scene from one viewpoint into a single large viewport/framebuffer. LfD radiance image rendering is therefore extremely time-consuming and compute-intensive. This paper reviews the novel, full-parallax BowTie Radiance Image Rasterization algorithm, which can be embedded within an LfD to accelerate light-field radiance image rendering for real-time update.
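The pixel-to-ray correspondence described above can be illustrated with a minimal sketch. The layout below is an assumption for illustration only (square "hogels" of pixels, one per micro-lens, each spanning a symmetric frustum); the actual BowTie algorithm and display geometry are not reproduced here.

```python
import numpy as np

def pixel_to_ray(px, py, hogel_size, fov_deg):
    """Map a radiance-image pixel to the unique ray it represents.

    Hypothetical layout: the radiance image is tiled into square
    hogels of hogel_size x hogel_size pixels, one per micro-lens.
    Returns the lens (i, j) index and a unit ray direction, assuming
    each hogel spans a symmetric frustum of fov_deg degrees.
    """
    # Which micro-lens (hogel) does this pixel belong to?
    i, j = px // hogel_size, py // hogel_size
    # Sub-pixel position within the hogel, remapped to [-1, 1].
    u = (px % hogel_size + 0.5) / hogel_size * 2.0 - 1.0
    v = (py % hogel_size + 0.5) / hogel_size * 2.0 - 1.0
    # Angular deflection through the lens for this sub-pixel.
    half_fov = np.radians(fov_deg / 2.0)
    d = np.array([np.tan(half_fov) * u, np.tan(half_fov) * v, 1.0])
    return (i, j), d / np.linalg.norm(d)
```

Enumerating this mapping over every pixel makes the scale of the problem concrete: a modest 4K radiance image with 16x16 hogels already implies rendering from over 32,000 distinct viewpoints.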
Digital Imaging and Communications in Medicine (DICOM) is the international standard for transferring, storing, retrieving, printing, processing, and displaying medical imaging information. It provides a standardized method to store medical images from many types of imaging devices. CT and MRI scans, typically composed of 2D slice images in DICOM format, can be inspected and analyzed with DICOM-compatible imaging software. Additionally, the DICOM format provides the information needed to assemble cross-sections into 3D volumetric datasets. Few DICOM viewers are available for mobile platforms (smartphones and tablets), and most of them are 2D-based with limited functionality and user interaction. This paper reports on our efforts to design and implement a volumetric 3D DICOM viewer for mobile devices with real-time rendering, interaction, a full transfer function editor, and server access capabilities. 3D DICOM image sets, either loaded from the device or downloaded from a remote server, can be rendered at up to 60 fps on Android devices. By connecting to our server, users can a) get pre-computed image quality metrics and organ segmentation results, and b) share their experience and synchronize views with other users on different platforms.
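The assembly of DICOM cross-sections into a volume can be sketched as below. This is a simplified stand-in, not the viewer's implementation: the `(z_position, slice)` pairs model the spatial-position tag (e.g. ImagePositionPatient) that a real DICOM reader such as pydicom would supply, since DICOM files carry their position but not a guaranteed file order.

```python
import numpy as np

def assemble_volume(slices):
    """Stack 2D slices into a 3D volume, ordered by slice position.

    `slices` is a list of (z_position_mm, 2d_array) pairs. Slices may
    arrive in any order; sorting by position reconstructs the anatomy
    correctly. Also returns the inter-slice spacing, which a renderer
    needs for correct aspect ratio along the stacking axis.
    """
    ordered = sorted(slices, key=lambda s: s[0])
    volume = np.stack([img for _, img in ordered], axis=0)
    spacing = ordered[1][0] - ordered[0][0] if len(ordered) > 1 else 0.0
    return volume, spacing
```

A volume renderer then samples this 3D array along view rays, with the transfer function mapping scalar intensities to color and opacity.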
Computational complexity is a limiting factor for visualizing large-scale scientific data. Most approaches to rendering large datasets focus on novel algorithms that leverage cutting-edge graphics hardware to provide users with an interactive experience. In this paper, we instead demonstrate foveated imaging, which allows interactive exploration on low-cost hardware by tracking a participant's gaze to drive the rendering quality of an image. Foveated imaging exploits the fact that the spatial resolution of the human visual system decreases dramatically away from the central point of gaze, allowing computational resources to be reserved for areas of importance. We demonstrate this approach using face tracking to identify the participant's gaze point for both vector and volumetric datasets, and evaluate our results by comparing against traditional techniques. In our evaluation, we found a significant increase in computational performance using our foveated imaging approach while maintaining high image quality in regions of visual attention.
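The core idea, degrading rendering quality with distance from the gaze point, can be sketched as a per-pixel quality map. The radius and level count below are illustrative parameters, not values from the paper, and the specific falloff (one level per foveal radius) is an assumption.

```python
import numpy as np

def foveation_level(width, height, gaze, fovea_radius, levels):
    """Assign a rendering quality level to each pixel.

    Level 0 = full quality inside the foveal radius around the gaze
    point (x, y); quality drops one level per additional radius
    outward, capped at `levels - 1`. The renderer can then sample
    coarsely (or skip work entirely) at high levels.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze[0], ys - gaze[1])
    return np.minimum(dist // fovea_radius, levels - 1).astype(int)
```

Because peripheral pixels dominate the image area, even a few quality levels concentrate most of the rendering budget on the small foveal region.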
Phase space tessellation techniques for N-body dark matter simulations yield density fields of very high quality. However, due to the vast number of elements and self-intersections in the resulting tetrahedral meshes, interactive visualization methods for this approach have so far either employed simplified versions of the volume rendering integral or suffered from rendering artifacts. This paper presents a volume rendering approach for phase space tessellations that combines state-of-the-art order-independent transparency methods to manage the extreme depth complexity of this mesh type. We propose several performance optimizations, including a view-dependent multiresolution representation of the data and a tile-based rendering strategy, to enable high image quality at interactive frame rates for this complex data structure, and demonstrate the advantages of our approach for different types of dark matter simulations.
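The resolve step at the heart of order-independent transparency can be sketched as follows. This is a generic illustration of an A-buffer-style per-pixel fragment resolve, not the paper's specific method; the scalar color and early-out threshold are simplifications.

```python
def resolve_fragments(fragments):
    """Composite one pixel's transparent fragments front to back.

    `fragments` is an unordered list of (depth, color, alpha) tuples,
    mimicking a per-pixel fragment list as built by A-buffer-style
    order-independent transparency. Sorting by depth and applying the
    front-to-back "over" operator yields a correct result regardless
    of the order in which the mesh elements were rasterized.
    """
    color, transmittance = 0.0, 1.0  # scalar color for brevity
    for depth, c, a in sorted(fragments, key=lambda f: f[0]):
        color += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-3:  # early out once nearly opaque
            break
    return color, transmittance
```

The difficulty this paper addresses is scale: self-intersecting tetrahedral meshes can pile hundreds or thousands of fragments onto a single pixel, which is what motivates the multiresolution and tile-based optimizations.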