
As 3D imaging for cultural heritage continues to evolve, it is important to step back and assess both the objective and the subjective attributes of image quality. The delivery and interchange of 3D content today is reminiscent of the early days of the analog-to-digital photography transition, when practitioners struggled to maintain quality for online and print representations. Traditional 2D photographic documentation techniques have matured thanks to decades of collective photographic knowledge and the development of international standards that support global archiving and interchange. Because of this maturation, still photography techniques and existing standards play a key role in shaping 3D standards for delivery, archiving, and interchange. This paper outlines specific techniques for leveraging ISO 19264-1 objective image quality analysis for 3D color rendition validation, and methods for translating important aesthetic photographic camera and lighting techniques from physical studio sets to rendered 3D scenes. Creating high-fidelity still reference photography of collection objects as a benchmark for assessing 3D image quality in renders and online representations has helped, and will continue to help, bridge the current gaps between 2D and 3D imaging practice. The accessible techniques outlined in this paper have vastly improved the rendition of online 3D objects and will be presented in a companion workshop.
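
To make the idea of objective color rendition validation concrete, here is a minimal sketch of the kind of check such an analysis involves: comparing CIELAB values sampled from chart patches in a 3D render against reference values, using the simple ΔE*ab (CIE76) difference. The patch names and numbers below are purely illustrative and are not taken from the paper or from ISO 19264-1 itself.

```python
import numpy as np

def delta_e_ab(lab_rendered, lab_reference):
    """CIE76 color difference between two sets of CIELAB values (N x 3 arrays)."""
    return np.linalg.norm(np.asarray(lab_rendered) - np.asarray(lab_reference), axis=1)

# Illustrative reference Lab values for a few chart patches (hypothetical numbers).
reference = np.array([
    [96.0,  0.0,  2.0],   # white patch
    [50.0,  0.0,  0.0],   # mid gray
    [40.0, 55.0, 30.0],   # red patch
])

# Lab values sampled from the same patches in a 3D render of the chart.
rendered = np.array([
    [94.8,  0.5,  3.1],
    [51.2, -0.3,  0.4],
    [42.5, 52.0, 28.0],
])

de = delta_e_ab(rendered, reference)
print("Per-patch dE*ab:", np.round(de, 2), "mean:", round(de.mean(), 2))
```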

In this paper, we introduce FastPoints, a state-of-the-art point cloud renderer for the Unity game development platform. Our program accepts standard unprocessed point cloud formats through non-programmatic, drag-and-drop import and creates an out-of-core data structure for large clouds without requiring an explicit preprocessing step; instead, the software renders a decimated point cloud immediately and constructs a shallow octree online, during which time the Unity editor remains fully interactive.
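
The following is a minimal, language-agnostic sketch of the general idea of showing a decimated cloud immediately while a shallow, fixed-depth octree is filled in the background. It is written in Python for brevity and is not the FastPoints implementation (which targets Unity); the class names, depth, and decimation target are illustrative assumptions.

```python
import numpy as np

class ShallowOctree:
    """Fixed-depth octree over an axis-aligned bounding box; a sketch of the
    out-of-core layout idea, not the FastPoints implementation."""

    def __init__(self, bbox_min, bbox_max, depth=3):
        self.bbox_min = np.asarray(bbox_min, dtype=float)
        self.size = np.asarray(bbox_max, dtype=float) - self.bbox_min
        self.depth = depth
        self.cells = {}  # leaf index (ix, iy, iz) -> list of points

    def insert(self, points):
        # Map each point to its leaf cell at the fixed depth.
        res = 2 ** self.depth
        idx = np.clip(((points - self.bbox_min) / self.size * res).astype(int), 0, res - 1)
        for p, (ix, iy, iz) in zip(points, idx):
            self.cells.setdefault((ix, iy, iz), []).append(p)

def decimate(points, target=100_000):
    """Uniform random subsample shown on screen while the octree is still being built."""
    if len(points) <= target:
        return points
    keep = np.random.choice(len(points), target, replace=False)
    return points[keep]

# Usage: display the decimated cloud right away, then fill the octree in chunks
# (in an interactive editor this would run on a background thread).
cloud = np.random.rand(1_000_000, 3)
preview = decimate(cloud)
tree = ShallowOctree(cloud.min(axis=0), cloud.max(axis=0), depth=3)
for chunk in np.array_split(cloud, 10):
    tree.insert(chunk)
```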

The digital representation of three-dimensional objects with different materials has become common not only in the game and movie industries, but also in design software, e-commerce, and other applications. Although the rendered images often seem realistic, a closer look reveals that their color accuracy is often insufficient for critical applications. Storing the angle-dependent color properties of metallic coatings and other gonioapparent materials demands large amounts of data. In addition, rendering sparkle, gloss, and other visual texture phenomena is still a subject of active research. Current approaches are computationally very demanding and require manual, ad hoc setting of many model parameters. In this paper, we describe a new approach to solve these problems. We adopt a multi-spectral, physics-based approach to make the BRDF representation more efficient. We also account for the common loss in color accuracy due to the varying technical specifications of displays, and we correct for the influence of ambient lighting. The rendering framework presented here is shown to be capable of rendering sparkle and gloss as well, based on objective measurements of these properties. This eliminates the subjective phase of manual fine-tuning of model parameters that is characteristic of many current rendering approaches. A feasibility test with the new spectral rendering pipeline shows that it is indeed able to produce realistic renderings of color, sparkle, gloss, and other texture aspects. The computation time is small enough to make the rendering real-time on a 2017 iPad, i.e., with a low memory footprint and without high demands on the graphics card or data storage.
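
As a rough illustration of what a spectral rendering step feeding an sRGB display looks like, the sketch below integrates a spectral stimulus against color matching functions to obtain XYZ and then encodes linear sRGB. It is not the paper's pipeline: the Gaussian stand-ins for the CIE 1931 CMFs are crude illustrative approximations (a real implementation would load tabulated CMF data), and the paper's display and ambient-light corrections are not modeled here.

```python
import numpy as np

# Wavelength sampling for the spectral pipeline (nm).
wl = np.arange(400, 701, 10)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Crude single-Gaussian stand-ins for the CIE 1931 color matching functions.
xbar = 1.06 * gauss(wl, 600, 38) + 0.36 * gauss(wl, 445, 22)
ybar = 1.00 * gauss(wl, 555, 42)
zbar = 1.78 * gauss(wl, 450, 25)

def spectrum_to_srgb(reflectance, illuminant):
    """Integrate a spectral stimulus against the CMFs and encode for an sRGB display."""
    stimulus = reflectance * illuminant
    k = 100.0 / np.trapz(ybar * illuminant, wl)   # normalize so a perfect white has Y = 100
    X = k * np.trapz(stimulus * xbar, wl)
    Y = k * np.trapz(stimulus * ybar, wl)
    Z = k * np.trapz(stimulus * zbar, wl)
    xyz = np.array([X, Y, Z]) / 100.0
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])   # XYZ (D65) -> linear sRGB
    rgb = np.clip(m @ xyz, 0.0, 1.0)
    # Per-channel sRGB transfer function; an actual display may deviate from this.
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)

# Example: a flat 50% reflectance surface under an idealized equal-energy illuminant.
print(spectrum_to_srgb(np.full_like(wl, 0.5, dtype=float), np.ones_like(wl, dtype=float)))
```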

Production of high-quality virtual reality content from real sensed data is a challenging task due to several factors, such as the calibration of multiple cameras and the rendering of virtual views. In this paper, we present a pipeline that maximizes the performance of virtual view rendering from imagery captured by a camera equipped with fisheye lens optics. While such optics offer a wide field of view, they also introduce specific distortions, which have to be taken into account when rendering virtual views for a target application (e.g., head-mounted displays). We integrate a generic camera model into a fast rendering pipeline in which we can tune intrinsic and extrinsic camera parameters along with resolution to meet device or user requirements. We specifically target a CPU-based implementation with quality on par with GPU-based rendering approaches. Using the adopted generic camera model, we numerically tabulate the required backward projection mapping and store it in a look-up table. This approach offers a tradeoff between memory and computational complexity in terms of the operations needed to calculate the mapping values. Finally, we complement our method with an interpolator that handles occlusions efficiently. Experimental results demonstrate the viability, robustness, and accuracy of the proposed pipeline.
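
A minimal sketch of the tabulated backward mapping idea follows, assuming a simple equidistant fisheye model (r = f·θ) in place of the paper's generic camera model. All parameters (focal lengths, principal point, field of view) are illustrative, nearest-neighbor sampling stands in for the occlusion-aware interpolator, and rotation/translation between the views is omitted.

```python
import numpy as np

def build_backward_lut(out_w, out_h, out_fov_deg, fish_f, fish_cx, fish_cy):
    """Precompute, for every pixel of a virtual perspective view, the source pixel in a
    fisheye image under an equidistant model (r = f * theta). Returned as float LUTs."""
    # Ray directions for the virtual pinhole view.
    f_out = 0.5 * out_w / np.tan(np.radians(out_fov_deg) / 2.0)
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2.0 + 0.5,
                       np.arange(out_h) - out_h / 2.0 + 0.5)
    dirs = np.stack([u, v, np.full_like(u, f_out)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Angle from the optical axis and azimuth, then the equidistant fisheye projection.
    theta = np.arccos(np.clip(dirs[..., 2], -1.0, 1.0))
    phi = np.arctan2(dirs[..., 1], dirs[..., 0])
    r = fish_f * theta
    lut_x = fish_cx + r * np.cos(phi)
    lut_y = fish_cy + r * np.sin(phi)
    return lut_x.astype(np.float32), lut_y.astype(np.float32)

def remap_nearest(src, lut_x, lut_y):
    """Sample the fisheye image through the LUT (nearest neighbor for brevity)."""
    h, w = src.shape[:2]
    xi = np.clip(np.rint(lut_x).astype(int), 0, w - 1)
    yi = np.clip(np.rint(lut_y).astype(int), 0, h - 1)
    return src[yi, xi]

# Usage: a 90-degree virtual view rendered from a synthetic fisheye frame.
fisheye = np.random.rand(1080, 1920, 3).astype(np.float32)
lx, ly = build_backward_lut(640, 480, 90.0, fish_f=500.0, fish_cx=960.0, fish_cy=540.0)
virtual_view = remap_nearest(fisheye, lx, ly)
```

Storing the mapping once per configuration means each rendered frame only performs table lookups and interpolation, which is what makes a purely CPU-based implementation practical.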

With the advent of more sophisticated color and surface treatments in 3D printing, a more robust system for previewing these features is required. This work reviews current systems and proposes a framework for integrating a more accurate preview into 3D modelling systems.