With the release of the Apple iPhone 12 Pro in 2020, several features were integrated that make it attractive as a recording device for scene-related computer graphics pipelines. The captured Apple RAW images have a much higher dynamic range than standard 8-bit images. Since a scene-related workflow naturally operates with an extended dynamic range (HDR), the Apple RAW recordings integrate well. Another feature is the Dolby Vision HDR recording, which is primarily adapted to the display of the capture device. However, these recordings can also be used in the CG workflow, since at least the basic HLG transfer function is embedded. The iPhone 12 Pro's two laser scanners can produce complex 3D models and textures for the CG pipeline: one scanner on the back is primarily intended for capturing the surroundings for AR purposes, and another on the front serves facial recognition. In addition, external software can read out the scanning data for integration into 3D applications. To correctly integrate the iPhone 12 Pro Apple RAW data into a scene-related workflow, two command-line-based software solutions can be used, among others: dcraw and rawtoaces. dcraw offers the possibility to export RAW images directly to ACES2065-1; unfortunately, the modifiers for the four RAW color channels that address the different white points are unavailable. Experimental test series are therefore performed under controlled studio conditions to retrieve these modifier values. Subsequently, the RAW-derived images are imported into the computer graphics pipelines of various CG applications (SideFX Houdini, Foundry Nuke, Autodesk Maya) with the help of OpenColorIO (OCIO) and ACES. Finally, it is determined whether they can improve the overall color quality. Dolby Vision content can be captured using the native Camera app on an iPhone 12.
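The retrieved modifier values would act as per-channel multipliers on the four RAW color channels (R, G1, B, G2) before conversion to ACES2065-1. A minimal sketch of that scaling step, with purely illustrative placeholder values since the measured modifiers are the subject of the experimental test series:

```python
# Sketch: applying per-channel white-point modifiers to the four RAW
# color channels (R, G1, B, G2) before conversion to ACES2065-1.
# The multiplier values below are hypothetical placeholders, NOT the
# measured results of the test series described in this paper.

def apply_channel_modifiers(raw_channels, modifiers):
    """Scale each of the four RAW channels by its white-point modifier.

    raw_channels: dict mapping channel name -> list of linear samples
    modifiers:    dict mapping channel name -> float multiplier
    """
    return {
        name: [value * modifiers[name] for value in samples]
        for name, samples in raw_channels.items()
    }

# Hypothetical multipliers for a studio white point (placeholders only).
modifiers = {"R": 2.1, "G1": 1.0, "B": 1.5, "G2": 1.0}
raw = {"R": [0.10, 0.20], "G1": [0.30, 0.40],
       "B": [0.05, 0.10], "G2": [0.30, 0.40]}
balanced = apply_channel_modifiers(raw, modifiers)
```

In a dcraw-based pipeline, such multipliers would typically be supplied on the command line rather than applied in post, which is why retrieving the correct values per white point matters.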
It captures HDR video using Dolby Vision Profile 8.4, which contains a cross-compatible HLG Rec. 2020 base layer and Dolby Vision dynamic metadata. When the Dolby Vision iPhone video is exported without the corresponding metadata, only the HLG base layer is passed on. It is investigated whether iPhone 12 videos transferred this way can increase the quality of the computer graphics pipeline. The 3D Scanner App software controls the two integrated laser scanners and provides a large number of export formats. Therefore, integrating the OBJ 3D data into industry-standard software like Maya and Houdini is unproblematic. Unfortunately, the models and the corresponding UV maps are more or less only machine-readable, so manually improving the 3D geometry (filling holes, refining the geometry, setting up a new topology) is cumbersome and time-consuming. It is investigated whether standard techniques, such as using the ZRemesher in ZBrush, applying texture and UV projection in Maya, and writing VEX snippets in Houdini, can prepare these models and textures for manual editing.
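Because only the HLG base layer survives the metadata-free export, bringing such footage into a scene-linear ACES pipeline involves undoing the HLG encoding. A minimal sketch of the Rec. 2100 HLG inverse OETF, with both signal and scene-linear output normalized to [0, 1]; this shows only the transfer-function step, not the full Rec. 2020-to-ACES primary conversion:

```python
import math

# Rec. 2100 HLG inverse OETF constants
A = 0.17883277
B = 1.0 - 4.0 * A                 # 0.28466892
C = 0.5 - A * math.log(4.0 * A)   # 0.55991073

def hlg_inverse_oetf(signal):
    """Map a normalized HLG signal value in [0, 1] back to
    normalized scene-linear light in [0, 1]."""
    if signal <= 0.5:
        return (signal * signal) / 3.0
    return (math.exp((signal - C) / A) + B) / 12.0
```

In practice an OCIO configuration would perform this conversion (together with the gamut transform to ACES primaries) as a single color-space transform; the function above only makes the per-sample math explicit.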
According to our recent paper [1], the concept of creating a still-image panorama with the additional inclusion of video footage at up to 30K resolution has proven successful in various application examples. However, certain aspects of the production pipeline need optimization, especially the color workflow and the spatial placement of the video content. This paper compares two workflows to overcome these problems. In particular, the following two methods are described in detail: 1) improving the current workflow with the Canon EOS 5D Mark IV camera as the central device, and 2) establishing a new workflow using the new possibilities of the Apple iPhone 12 Pro Max. The following aspects are the subject of our investigation: a) The fundamental idea is to use ACES as the central color management system. It is investigated whether the direct import from RAW to ACEScg via dcraw and rawtoaces shows advantages. In addition, the conversion from Dolby Vision to ACES for video processing is investigated, and the result is evaluated. Furthermore, the influence of stitching programs (e.g., PTGui) on the color workflow is observed and optimized. b) The second part of the paper deals with the spatial integration of the videos into the still panoramas. Due to the different crop factors, specific focal lengths must be applied when using the Canon EOS 5D Mark IV; this distorts the image and video material differently and makes it difficult to place the video footage in the panorama. We investigate whether applying a lens-distortion-removal algorithm improves the results. Furthermore, the performance and capabilities of the Apple iPhone 12 Pro Max are also evaluated in this respect. Finally, the recorded resolution of detailed vegetation and foliage in the video footage is compared. The paper summarizes the results of the newly proposed workflow and indicates necessary further investigation.
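The distortion-removal step referred to above can be illustrated with a simple radial (Brown-Conrady) model, solved by fixed-point iteration. The coefficients and the single-point formulation are hypothetical placeholders for illustration, not a measured Canon lens profile or the specific algorithm evaluated in the paper:

```python
# Sketch: radial lens-distortion removal for one normalized image point,
# as a stand-in for the distortion-removal step applied before placing
# video footage in the panorama. k1, k2 are hypothetical placeholders,
# NOT a measured lens profile.

def undistort_point(x, y, k1, k2):
    """Recover undistorted normalized coordinates (xu, yu) from a
    distorted point (x, y) under x_d = x_u * (1 + k1*r^2 + k2*r^4),
    using fixed-point iteration on the radial factor."""
    xu, yu = x, y
    for _ in range(10):  # converges quickly for mild distortion
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / factor, y / factor
    return xu, yu
```

Stitching tools typically estimate such coefficients per lens and apply the correction image-wide; the per-point version above only makes the underlying model explicit.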
[1] Hasche, Eberhard; Benning, Dominik; Karaschewski, Oliver; Carstens, Florian; Creutzburg, Reiner: Creating high-resolution 360-degree single-line 25K video content for modern conference rooms using film compositing techniques. In: Electronic Imaging, Mobile Devices and Multimedia: Technologies, Algorithms & Applications 2020, pp. 206-1-206-14, https://doi.org/10.2352/ISSN.2470-1173.2020.3.MOBMU-206