According to our recent paper [1], creating a still-image panorama that incorporates video footage at resolutions up to 30K has proven successful in various application examples. However, certain aspects of the production pipeline still need optimization, especially the color workflow and the spatial placement of the video content. This paper compares two workflows to overcome these problems. In particular, the following two methods are described in detail: 1) improving the current workflow with the Canon EOS 5D Mark IV camera as the central device, and 2) establishing a new workflow that uses the new capabilities of the Apple iPhone 12 Pro Max. Our investigation covers the following aspects: a) The fundamental idea is to use ACES (Academy Color Encoding System) as the central color management system. We investigate whether the direct import from RAW to ACEScg via dcraw and rawtoaces offers advantages. In addition, we examine and evaluate the conversion from Dolby Vision to ACES for video processing, and we observe and optimize the influence of stitching programs (e.g., PTGui) on the color workflow. b) The second part of the paper deals with the spatial integration of the videos into the still panoramas. Because of differing crop factors, specific focal lengths must be used with the Canon EOS 5D Mark IV; these distort the image and video material differently and make it difficult to place the video footage in the panorama. We investigate whether applying a lens-distortion-removal algorithm improves the results, and we also evaluate the performance and capabilities of the Apple iPhone 12 Pro Max in this respect. Finally, we compare the recorded resolution of detailed vegetation and foliage in the video footage. The paper summarizes the results of the newly proposed workflow and indicates where further investigation is necessary.
[1] Hasche, Eberhard; Benning, Dominik; Karaschewski, Oliver; Carstens, Florian; Creutzburg, Reiner: Creating high-resolution 360-degree single-line 25K video content for modern conference rooms using film compositing techniques. In: Electronic Imaging, Mobile Devices and Multimedia: Technologies, Algorithms & Applications 2020, pp. 206-1–206-14, https://doi.org/10.2352/ISSN.2470-1173.2020.3.MOBMU-206
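The RAW-to-ACEScg import mentioned above splits into two steps: dcraw and rawtoaces demosaic the sensor data and deliver scene-linear ACES2065-1 (AP0) values, and a fixed 3x3 matrix then converts those values to the ACEScg (AP1) working space. A minimal sketch of that second step, assuming AP0 input as produced by rawtoaces (the matrix is the published AP0-to-AP1 matrix from the ACES reference implementation; the helper name is ours):

```python
import numpy as np

# AP0 -> AP1 conversion matrix from the ACES reference implementation (CTL).
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

def ap0_to_acescg(rgb):
    """Convert linear ACES2065-1 (AP0) pixel values to ACEScg (AP1)."""
    return np.asarray(rgb, dtype=float) @ AP0_TO_AP1.T

# Both spaces share the ACES white point, so achromatic values are unchanged:
print(ap0_to_acescg([1.0, 1.0, 1.0]))  # ~[1.0, 1.0, 1.0]
```

Because both encodings share the same white point, neutral values pass through unchanged; only saturated colors move, which makes this step safe to apply uniformly before compositing.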
Most cameras still encode images in the small-gamut sRGB color space. This reliance on sRGB is disappointing, as modern display hardware and image-editing software can use wider-gamut color spaces. Converting a small-gamut image to a wider gamut is a challenging problem. Many devices and programs use colorimetric strategies that map colors from the small gamut to their equivalent colors in the wider gamut. This colorimetric approach avoids visual changes in the image but leaves much of the target wide-gamut space unused. Noncolorimetric approaches stretch or expand the small-gamut colors to enhance image colors, at the risk of color distortions. We take a unique approach to gamut expansion by treating it as a restoration problem. A key insight used in our approach is that cameras internally encode images in a wide-gamut color space (i.e., ProPhoto) before compressing and clipping the colors to sRGB's smaller gamut. Based on this insight, we use a software-based camera ISP to generate a dataset of 5,000 image pairs encoded in both sRGB and ProPhoto. This dataset enables us to train a neural network to perform wide-gamut color restoration. Our deep-learning strategy achieves significant improvements over existing solutions and produces color-rich images with few to no visual artifacts.
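The colorimetric mapping described above can be made concrete: linear sRGB values are taken to CIE XYZ under the D65 illuminant, chromatically adapted to ProPhoto's D50 white with a Bradford transform, and then projected onto the ProPhoto (ROMM) primaries. A small sketch under those assumptions (the matrices are the standard published values, rounded; the function name is ours, not from the paper):

```python
import numpy as np

# Linear sRGB -> XYZ (D65), IEC 61966-2-1 primaries.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

# Bradford chromatic adaptation matrix, D65 -> D50.
BRADFORD_D65_TO_D50 = np.array([
    [ 1.0478112, 0.0228866, -0.0501270],
    [ 0.0295424, 0.9904844, -0.0170491],
    [-0.0092345, 0.0150436,  0.7521316],
])

# ProPhoto (ROMM) RGB -> XYZ (D50).
PROPHOTO_TO_XYZ = np.array([
    [0.7977, 0.1352, 0.0313],
    [0.2880, 0.7119, 0.0001],
    [0.0000, 0.0000, 0.8249],
])

def srgb_linear_to_prophoto(rgb):
    """Colorimetric (appearance-preserving) linear sRGB -> ProPhoto RGB."""
    xyz_d50 = BRADFORD_D65_TO_D50 @ (SRGB_TO_XYZ @ np.asarray(rgb, dtype=float))
    return np.linalg.inv(PROPHOTO_TO_XYZ) @ xyz_d50

# White is preserved, while a fully saturated sRGB red lands well inside
# ProPhoto's gamut -- illustrating how much of the wide gamut stays unused.
print(srgb_linear_to_prophoto([1.0, 1.0, 1.0]))
print(srgb_linear_to_prophoto([1.0, 0.0, 0.0]))
```

Running the second line shows the sRGB red primary mapping to roughly (0.53, 0.10, 0.02) in ProPhoto: every colorimetrically converted sRGB image occupies only this inner region, which is exactly the limitation the restoration approach above is designed to overcome.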