ACES is a standardized color management system widely used in the film and visual effects industry to ensure consistent and accurate color reproduction throughout the production pipeline. Integrating ACES into game engines such as Unreal Engine offers significant benefits, especially for game developers who want to achieve high-quality, consistent color representation across different platforms and displays. By leveraging ACES in Unreal Engine 5, developers can achieve heightened visual fidelity, particularly for wide color gamut and high dynamic range (HDR) content. A standardized color management system supports cross-platform development, guaranteeing consistent color reproduction across devices and display technologies. Moreover, Unreal Engine 5's support for ACES facilitates seamless collaboration with other creative industries that rely on this industry-standard color pipeline. However, implementing ACES in a real-time engine presents unique challenges regarding performance optimization and compatibility with other game engines. Artists and developers may need to adapt their workflows to accommodate ACES color transforms, which affects the art pipeline and user-generated content. This paper uses ACES to investigate color input and output consistency to and from Epic Games' Unreal Engine 5 with regard to Wide Color Gamut and High Dynamic Range imagery.
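For context, real-time engines commonly approximate the full ACES RRT/ODT chain with a fitted tone curve rather than evaluating the reference transforms per pixel. The following minimal Python sketch shows Krzysztof Narkowicz's widely cited fit of the ACES tone curve; it illustrates the general approach and is not Unreal Engine 5's actual implementation.

```python
import numpy as np

def aces_filmic_approx(x):
    """Narkowicz (2016) fit of the combined ACES RRT + ODT tone curve.

    Maps linear scene-referred values to display-referred [0, 1]. This is
    the kind of approximation real-time renderers use; it is NOT the full
    ACES Output Transform and NOT Unreal Engine's exact implementation.
    """
    x = np.asarray(x, dtype=np.float64)
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)

# Tone-map a few linear HDR samples (18% gray up to a strong highlight).
print(aces_filmic_approx([0.18, 1.0, 4.0, 16.0]))
```

Such a fit is cheap enough for per-pixel evaluation, which is one reason consistency between a real-time engine's output and the reference ACES pipeline has to be verified rather than assumed.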
With the release of the Apple iPhone 12 Pro in 2020, various features were integrated that make it attractive as a recording device for scene-referred computer graphics pipelines. The captured Apple RAW images have a much higher dynamic range than standard 8-bit images. Since a scene-referred workflow naturally has an extended dynamic range (HDR), the Apple RAW recordings can be integrated well. Another feature is Dolby Vision HDR recording, which is primarily adapted to the display of the source device. However, these recordings can also be used in the CG workflow, since at least the basic HLG transfer function is integrated. The iPhone 12 Pro's two laser scanners can produce complex 3D models and textures for the CG pipeline: a scanner on the back primarily intended for capturing the surroundings for AR purposes, and another on the front for facial recognition. In addition, external software can read out the scanning data for integration into 3D applications. To correctly integrate the iPhone 12 Pro Apple RAW data into a scene-referred workflow, two command-line-based software solutions can be used, among others: dcraw and rawtoaces. Dcraw offers the possibility to export RAW images directly to ACES2065-1. Unfortunately, the modifiers for the four RAW color channels that address the different white points are unavailable. Experimental test series are performed under controlled studio conditions to retrieve these modifier values. Subsequently, these RAW-derived images are imported into the computer graphics pipelines of various CG software applications (SideFX Houdini, The Foundry Nuke, Autodesk Maya) with the help of OpenColorIO (OCIO) and ACES. Finally, it is determined whether they can improve the overall color quality. Dolby Vision content can be captured using the native Camera app on an iPhone 12. It captures HDR video using Dolby Vision Profile 8.4, which contains a cross-compatible HLG Rec.2020 base layer and Dolby Vision dynamic metadata. Only the HLG base layer is passed on when exporting the Dolby Vision iPhone video without the corresponding metadata. It is investigated whether the iPhone 12 videos transferred this way can increase the quality of the computer graphics pipeline. The 3D Scanner App software controls the two integrated laser scanners and provides a large number of export formats. Therefore, integrating the OBJ 3D data into industry-standard software like Maya and Houdini is unproblematic. Unfortunately, the models and the corresponding UV maps are more or less only machine-readable, so manually improving the 3D geometry (filling holes, refining the geometry, setting up a new topology) is cumbersome and time-consuming. It is investigated whether standard techniques like using the ZRemesher in ZBrush, applying texture and UV projection in Maya, and VEX snippets in Houdini can prepare these models and textures for manual editing.
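As an illustration of the HLG base-layer handling described above, the following Python sketch implements the inverse of the BT.2100 HLG OETF, which maps the encoded signal back to relative scene light. A complete conversion into ACES would additionally require a Rec.2020-to-AP0 primaries conversion and an exposure scaling; both are omitted here for brevity.

```python
import numpy as np

# BT.2100 HLG constants
A = 0.17883277
B = 1.0 - 4.0 * A              # 0.28466892
C = 0.5 - A * np.log(4.0 * A)  # 0.55991073

def hlg_inverse_oetf(e):
    """Map an HLG-encoded signal in [0, 1] back to linear scene light.

    Inverse of the BT.2100 HLG OETF. A full HLG-to-ACES conversion would
    also convert Rec.2020 primaries to AP0 and apply an exposure scaling;
    both steps are omitted in this sketch.
    """
    e = np.asarray(e, dtype=np.float64)
    return np.where(e <= 0.5, (e * e) / 3.0, (np.exp((e - C) / A) + B) / 12.0)

# Signal 0.5 maps to 1/12 of peak scene light; signal 1.0 maps to ~1.0.
print(hlg_inverse_oetf(np.array([0.0, 0.5, 0.75, 1.0])))
```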
According to our recent paper [1], the concept of creating a still-image panorama with the additional inclusion of video footage up to 30K resolution has proven successful in various application examples. However, certain aspects of the production pipeline need optimization, especially the color workflow and the spatial placement of the video content. This paper compares two workflows to overcome these problems. In particular, the following two methods are described in detail: 1) improving the current workflow with the Canon EOS 5D Mark IV camera as the central device, and 2) establishing a new workflow using the new possibilities of the Apple iPhone 12 Pro Max. The following aspects are the subject of our investigation: a) The fundamental idea is to use ACES as the central color management system. It is investigated whether the direct import from RAW to ACEScg via dcraw and rawtoaces shows advantages. In addition, the conversion from Dolby Vision to ACES for video processing is investigated and the result evaluated. Furthermore, the influence of stitching programs (e.g., PTGui) on the color workflow is observed and optimized. b) The second part of the paper deals with the spatial integration of the videos into the still panoramas. Due to the different crop factors, specific focal lengths must be applied when using the Canon EOS 5D Mark IV; this distorts the image and video material differently and makes it difficult to place the video footage in the panorama. We investigate whether the use of a lens distortion removal algorithm improves the results. Furthermore, the performance and capabilities of the Apple iPhone 12 Pro Max are evaluated in this regard. Finally, the recorded resolution of detailed vegetation and foliage in the video footage is compared. The paper summarizes the results of the newly proposed workflow and indicates necessary further investigation. [1] Hasche, Eberhard; Benning, Dominik; Karaschewski, Oliver; Carstens, Florian; Creutzburg, Reiner: Creating high-resolution 360-degree single-line 25K video content for modern conference rooms using film compositing techniques. In: Electronic Imaging, Mobile Devices and Multimedia: Technologies, Algorithms & Applications 2020, pp. 206-1–206-14, https://doi.org/10.2352/ISSN.2470-1173.2020.3.MOBMU-206
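To illustrate the RAW-to-ACEScg import path investigated in a), the Python sketch below applies the 3x3 matrix from ACES2065-1 (AP0), the typical output of rawtoaces, to ACEScg (AP1). The matrix values are quoted from the ACES reference implementation; verify them against the current specification before production use.

```python
import numpy as np

# 3x3 matrix from ACES2065-1 (AP0) to ACEScg (AP1); both spaces share the
# ACES D60 white point, so neutral values pass through unchanged.
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

def ap0_to_acescg(rgb):
    """Convert linear ACES2065-1 pixels with shape (..., 3) to ACEScg."""
    return np.asarray(rgb, dtype=np.float64) @ AP0_TO_AP1.T

# Neutral values are preserved; saturated values shift between the gamuts.
print(ap0_to_acescg([[0.18, 0.18, 0.18], [0.5, 0.1, 0.05]]))
```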
In modern moving-image production pipelines, it is unavoidable to move footage through different color spaces. Unfortunately, these color spaces exhibit color gamuts of various sizes. The most common problem is converting the cameras' wide-gamut color spaces to the smaller gamuts of the display devices (cinema projector, broadcast monitor, computer display). It is therefore necessary to scale down the scene-referred footage to the gamut of the display using tone mapping functions [34]. In cinema production pipelines, ACES is widely used as the predominant color system. The all-color-encompassing ACES AP0 primaries are defined inside the system in a general way. However, when implementing visual effects and performing a color grade, the more practical ACES AP1 primaries are used. When recording highly saturated bright colors, color values often fall outside the target color space. This results in negative color values, which are hard to handle inside a color pipeline. "Users of ACES are experiencing problems with clipping of colors and the resulting artifacts (loss of texture, intensification of color fringes). This clipping occurs at two stages in the pipeline:
- Conversion from camera raw RGB or from the manufacturer's encoding space into ACES AP0
- Conversion from ACES AP0 into the working color space ACES AP1" [1]
The ACES community established a Gamut Mapping Virtual Working Group (VWG) to address these problems. The group's scope is to propose a suitable gamut mapping/compression algorithm. This algorithm should perform well with wide-gamut, high dynamic range, scene-referred content. Furthermore, it should be robust and invertible. This paper tests the behavior of the published GamutCompressor when applied to in- and out-of-gamut imagery and provides suggestions for its implementation in applications. The tests are executed in The Foundry's Nuke [2].
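For orientation, the following Python sketch outlines the compression operator published by the Gamut Mapping VWG (later standardized as the ACES 1.3 Reference Gamut Compression): per-channel distances from the achromatic axis are left untouched below a threshold and compressed with a power curve above it, so that the distance limit maps exactly onto the gamut boundary. The threshold, limit, and power values below are the published defaults as we recall them; confirm them against the official implementation.

```python
import numpy as np

# Per-channel defaults of the ACES 1.3 Reference Gamut Compression
# (quoted from the published spec; verify against the official repo).
THR = np.array([0.815, 0.803, 0.880])  # compression starts at this distance
LIM = np.array([1.147, 1.264, 1.312])  # distance mapped onto the gamut hull
PWR = 1.2                              # exponent of the compression curve

def gamut_compress(rgb):
    """Pull out-of-gamut AP1 pixels back toward the achromatic axis."""
    rgb = np.asarray(rgb, dtype=np.float64)
    ach = np.max(rgb, axis=-1, keepdims=True)          # achromatic value
    d = np.where(ach != 0.0, (ach - rgb) / np.abs(ach), 0.0)
    # Scale factor chosen so that d == LIM maps exactly to 1.0.
    scl = (LIM - THR) / (((1.0 - THR) / (LIM - THR)) ** -PWR - 1.0) ** (1.0 / PWR)
    nd = np.maximum((d - THR) / scl, 0.0)
    cd = np.where(d < THR, d, THR + scl * nd / (1.0 + nd ** PWR) ** (1.0 / PWR))
    return ach - cd * np.abs(ach)

# A pixel with a negative green component is brought back into gamut.
print(gamut_compress([1.0, -0.1, 0.05]))
```

Because the curve is identity below the threshold and strictly monotonic above it, the operator is invertible, which is one of the VWG's stated requirements.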
Look Modification Transforms (LMTs) are a very powerful component of the Academy Color Encoding System (ACES) and offer extraordinary flexibility in ACES-based workflows. In ACES, the look of a project can be defined by a Look Transform, previously referred to as a Look Modification Transform (LMT), applied before the Output Transform. The advantage of this approach is that, technically, the Look Transform should not have to change regardless of which Output Transform is chosen [1]. The Academy Technical Bulletin TB-2014-010 describes the application of a Look Transform using the Academy Common LUT Format (CLF). Since CLF is not yet widely supported, any look applied using a LUT or grading operation that spans an entire scene or show and is used in series with per-shot adjustments can be considered a Look Transform. Therefore, the import from and export to other programs and color pipelines, such as 8-bit image processing programs as well as video and film cameras, are of particular importance [1]. This paper discusses the possibilities of importing and exporting Rec.709-coded imagery into The Foundry's Nuke 12 ACEScg workspace using the OCIO ACES 1.1 configuration. We present the implemented Rec.709 transforms and possible alternatives. We also discuss the behavior of these transforms with respect to the transfer function (gamma) and the color conversion of primaries and their complementary colors.
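As a minimal example of the import path discussed here, the following Python sketch converts a Rec.709-coded pixel to ACEScg through an ACES OCIO configuration. It assumes the OCIO v2 Python bindings and the color space names of the ACES 1.1 config; the config path is a placeholder. The choice between the "Display" and "Camera" Rec.709 utility spaces is precisely the transfer-function interpretation question this paper examines.

```python
import PyOpenColorIO as OCIO  # assumes the OCIO v2 Python bindings

# Placeholder path: point this at your copy of the ACES 1.1 OCIO config.
config = OCIO.Config.CreateFromFile("config.ocio")

# Color space names follow the ACES 1.1 config conventions. Note that
# "Utility - Rec.709 - Display" treats the footage as display-referred
# (BT.1886 EOTF), while "Utility - Rec.709 - Camera" assumes the BT.709
# camera OETF; the two yield different scene-linear values.
processor = config.getProcessor("Utility - Rec.709 - Display", "ACES - ACEScg")
cpu = processor.getDefaultCPUProcessor()

print(cpu.applyRGB([0.5, 0.25, 0.1]))  # one Rec.709 pixel -> ACEScg
```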
ACES is the Academy Color Encoding System established by the Academy of Motion Picture Arts and Sciences (A.M.P.A.S.). Since its introduction (version 1.0 in December 2014), it has been widely used in the film industry. The interaction of four modules makes the system flexible and leaves room for custom developments and modifications. Nevertheless, improvements are possible for various practical applications. This paper analyzes some of the problems frequently encountered in practice in order to identify possible solutions. These include improvements in importing still images, white point conversion problems, and test lighting. The results should be applicable in practice and, above all, take into account the workflow with commercially available software programs. The goal of this paper is to record the spectral distribution of a GretagMacbeth ColorChecker using a spectrometer and also to photograph it with different cameras, such as the RED Scarlet M-X, Blackmagic URSA Mini Pro, and Canon EOS 5D Mark III, under the same lighting conditions. The recorded imagery is then converted to the ACES2065-1 color space. The positions of the ColorChecker patches in the CIE Yxy color space are then compared to the positions of the patches captured by the spectral device. Using several built-in converters, the goal is to match the positions of the spectral data as closely as possible.
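To make the comparison concrete, the sketch below converts linear ACES2065-1 patch values to CIE xyY chromaticity coordinates using the AP0-to-XYZ matrix from the ACES specification, so that camera-derived and spectrometer-derived patch positions can be plotted side by side.

```python
import numpy as np

# ACES2065-1 (AP0, D60 white) to CIE XYZ, from the ACES specification.
AP0_TO_XYZ = np.array([
    [0.9525523959, 0.0000000000,  0.0000936786],
    [0.3439664498, 0.7281660966, -0.0721325464],
    [0.0000000000, 0.0000000000,  1.0088251844],
])

def aces_to_xyY(rgb):
    """Project linear ACES2065-1 patch values to CIE xyY coordinates."""
    XYZ = np.asarray(rgb, dtype=np.float64) @ AP0_TO_XYZ.T
    X, Y, Z = XYZ[..., 0], XYZ[..., 1], XYZ[..., 2]
    s = X + Y + Z
    return np.stack([X / s, Y / s, Y], axis=-1)  # (x, y, Y)

# A neutral patch lands on the ACES D60 white point (~0.3217, 0.3377).
print(aces_to_xyY([0.18, 0.18, 0.18]))
```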