In this paper, we introduce FastPoints, a state-of-the-art point cloud renderer for the Unity game development platform. FastPoints loads standard unprocessed point cloud formats through a non-programmatic, drag-and-drop workflow and creates an out-of-core data structure for large clouds without requiring an explicit preprocessing step; instead, the software immediately renders a decimated point cloud and constructs a shallow octree online, during which time the Unity editor remains fully interactive.
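The abstract gives no implementation details; the following Python sketch merely illustrates the ingest strategy it describes, i.e., showing a decimated preview at once while a shallow octree is built in the background. The decimation ratio, tree depth, and all function names are illustrative assumptions, not part of FastPoints.

```python
# Minimal sketch (not the FastPoints implementation) of the ingest strategy the
# abstract describes: render a decimated preview immediately, then build a
# shallow octree on a worker thread so the editor stays responsive.
import threading
import numpy as np

def decimate(points: np.ndarray, ratio: float = 0.01) -> np.ndarray:
    """Uniform random subsample used as the immediate preview."""
    n = max(1, int(len(points) * ratio))
    idx = np.random.choice(len(points), size=n, replace=False)
    return points[idx]

def build_shallow_octree(points: np.ndarray, depth: int = 3) -> dict:
    """Partition points into the leaf cells of a fixed-depth octree.

    Returns a dict mapping a cell index triple at level `depth` to its points.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    cells = np.floor((points - lo) / (hi - lo + 1e-9) * (2 ** depth)).astype(int)
    tree: dict = {}
    for cell, p in zip(map(tuple, cells), points):
        tree.setdefault(cell, []).append(p)
    return tree

def ingest(points: np.ndarray):
    """Return the preview right away; finish the octree on a background thread."""
    preview = decimate(points)            # available for rendering immediately
    tree: dict = {}
    worker = threading.Thread(
        target=lambda: tree.update(build_shallow_octree(points)))
    worker.start()                        # main loop / editor remains interactive
    return preview, worker, tree

if __name__ == "__main__":
    cloud = np.random.rand(200_000, 3).astype(np.float32)
    preview, worker, tree = ingest(cloud)
    print(f"preview points available immediately: {len(preview)}")
    worker.join()
    print(f"octree leaf cells built online: {len(tree)}")
```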
The digital representation of three-dimensional objects with different materials has become common not only in the game and movie industries, but also in design software, e-commerce, and other applications. Although the rendered images often appear realistic, a closer look reveals that their color accuracy is often insufficient for critical applications. Storing the angle-dependent color properties of metallic coatings and other gonioapparent materials demands large amounts of data. In addition, rendering sparkle, gloss, and other visual texture phenomena is still a subject of active research. Current approaches are computationally demanding and require manual, ad-hoc setting of many model parameters. In this paper, we describe a new approach to solving these problems. We adopt a multi-spectral, physics-based approach to make the BRDF representation more efficient. We also account for the common loss of color accuracy caused by the varying technical specifications of displays, and we correct for the influence of ambient lighting. The rendering framework presented here is shown to be capable of rendering sparkle and gloss as well, based on objective measurement of these properties. This removes the subjective phase of manually fine-tuning model parameters that is characteristic of many current rendering approaches. A feasibility test with the new spectral rendering pipeline shows that it is indeed able to produce realistic renderings of color, sparkle, gloss, and other texture aspects. The computation time is small enough to make the rendering real-time on a 2017 iPad, i.e., with a low memory footprint and without high demands on the graphics card or data storage.
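The abstract does not spell out its pipeline; as background only, the sketch below shows the spectral-to-display conversion step that any multi-spectral pipeline of this kind relies on: integrating a reflectance spectrum against color-matching functions and encoding the result for an sRGB display. The Gaussian color-matching functions and the toy reflectance spectrum are crude placeholders of my own, not the paper's data; only the XYZ-to-sRGB matrix and transfer function are standard.

```python
# Sketch of spectral reflectance -> display RGB. The CIE 1931 colour-matching
# functions are replaced by crude Gaussian placeholders so the script is
# self-contained; a real pipeline would load the tabulated CIE data.
import numpy as np

WL = np.arange(380.0, 781.0, 5.0)  # wavelengths in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((WL - mu) / sigma) ** 2)

# Placeholder colour-matching functions (shape only roughly resembles CIE 1931).
CMF = np.stack([
    1.06 * gauss(600, 38) + 0.36 * gauss(445, 22),   # x-bar
    1.00 * gauss(555, 42),                            # y-bar
    1.78 * gauss(447, 28),                            # z-bar
])

# XYZ -> linear sRGB, standard matrix.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def spectral_to_srgb(reflectance, illuminant):
    """Integrate reflectance * illuminant against the CMFs, then encode to sRGB."""
    stimulus = reflectance * illuminant
    xyz = CMF @ stimulus                    # Riemann sum over wavelength
    xyz /= (CMF[1] @ illuminant)            # normalise so a perfect white has Y = 1
    rgb_lin = np.clip(XYZ_TO_SRGB @ xyz, 0.0, 1.0)
    # sRGB transfer function (gamma encoding for the display).
    return np.where(rgb_lin <= 0.0031308,
                    12.92 * rgb_lin,
                    1.055 * rgb_lin ** (1 / 2.4) - 0.055)

if __name__ == "__main__":
    illuminant = np.ones_like(WL)                  # flat, equal-energy light
    orange_paint = 0.05 + 0.85 * gauss(610, 45)    # toy reflectance spectrum
    print("sRGB:", np.round(spectral_to_srgb(orange_paint, illuminant), 3))
```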
Producing high-quality virtual reality content from real sensed data is a challenging task due to several factors, such as the calibration of multiple cameras and the rendering of virtual views. In this paper, we present a pipeline that maximizes the performance of virtual view rendering from imagery captured by a camera equipped with fisheye lens optics. While such optics offer a wide field of view, they also introduce specific distortions, which have to be taken into account when rendering virtual views for a target application (e.g., head-mounted displays). We integrate a generic camera model into a fast rendering pipeline in which intrinsic and extrinsic camera parameters, along with resolution, can be tuned to meet device or user requirements. We specifically target a CPU-based implementation with quality on par with GPU-based rendering approaches. Using the adopted generic camera model, we numerically tabulate the required backward projection mapping and store it in a look-up table. This approach trades memory for the computational cost of the operations needed to calculate the mapping values. Finally, we complement our method with an interpolator that handles occlusions efficiently. Experimental results demonstrate the viability, robustness, and accuracy of the proposed pipeline.
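The paper tabulates a backward projection mapping for a generic camera model; the sketch below only illustrates the general idea of such a look-up table. For self-containment it assumes an equidistant fisheye model (r = f * theta) and nearest-neighbour sampling, whereas the paper uses a generic camera model and a dedicated occlusion-aware interpolator; all names and parameters here are hypothetical.

```python
# Illustrative sketch (not the paper's pipeline): precompute a backward-projection
# look-up table mapping every pixel of a rectilinear virtual view to a source
# pixel in a fisheye image, then warp by sampling through the table each frame.
import numpy as np

def build_backward_lut(out_w, out_h, out_fov_deg, fish_w, fish_h, fish_f):
    """Return a (out_h, out_w, 2) array of (x, y) source coordinates in the fisheye image."""
    fx = (out_w / 2) / np.tan(np.radians(out_fov_deg) / 2)   # virtual pinhole focal length
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    # Ray direction for each virtual-view pixel (pinhole model).
    dirs = np.stack([xs, ys, np.full_like(xs, fx, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    theta = np.arccos(np.clip(dirs[..., 2], -1.0, 1.0))      # angle from optical axis
    phi = np.arctan2(dirs[..., 1], dirs[..., 0])
    r = fish_f * theta                                       # equidistant fisheye projection
    return np.stack([fish_w / 2 + r * np.cos(phi),
                     fish_h / 2 + r * np.sin(phi)], axis=-1)

def render_virtual_view(fisheye_img, lut):
    """Warp the fisheye image through the LUT (nearest-neighbour sampling)."""
    x = np.clip(np.round(lut[..., 0]).astype(int), 0, fisheye_img.shape[1] - 1)
    y = np.clip(np.round(lut[..., 1]).astype(int), 0, fisheye_img.shape[0] - 1)
    return fisheye_img[y, x]

if __name__ == "__main__":
    fisheye = np.random.randint(0, 255, (1080, 1080, 3), dtype=np.uint8)   # stand-in image
    lut = build_backward_lut(out_w=640, out_h=480, out_fov_deg=90,
                             fish_w=1080, fish_h=1080, fish_f=300.0)       # built once, reused per frame
    view = render_virtual_view(fisheye, lut)
    print(view.shape)  # (480, 640, 3)
```

Building the table once and reusing it per frame is what makes the memory-versus-computation trade-off mentioned in the abstract: the per-pixel projection math is paid at initialisation, and rendering reduces to table lookups and interpolation.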
With the advent of more sophisticated color and surface treatments in 3D printing, a more robust system for previewing these features is required. This work reviews current systems and proposes a framework for integrating a more accurate preview into 3D modelling systems.