The challenges of film restoration demand versatile tools, making machine learning (ML), through the training of custom models, an ideal solution. This research demonstrates that custom models can effectively restore color in deteriorated films, even without direct references, and recover spatial features using techniques such as gauge and analog video reference recovery. A key advantage of this approach is its ability to address restoration tasks that are difficult or impossible with traditional methods, which rely on spatial and temporal filters. While general-purpose video generation models such as Runway, Sora, and Pika Labs have advanced significantly, they often fall short in film restoration because of limited temporal consistency, artifact generation, and a lack of precise control. Custom ML models address these shortcomings by providing targeted restoration and overcoming the inherent limitations of conventional filtering techniques. Results from these locally trained models are promising; however, developing highly specific models tailored to individual restoration scenarios remains crucial for greater efficiency.
Since the advent of the Digital Intermediate (DI) and the Cineon system, motion picture film preservation and restoration practices have undergone an enormous change, driven by the possibility of digitizing and digitally restoring film materials. Today, film materials are scanned mostly on commercial film scanners, which encode the frames according to the Academy Color Encoding System (ACES) and apply proprietary LUTs for negative-to-positive conversion, image enhancement, and color correction. The processing performed by scanner systems is not always openly documented. The variety of digitization hardware and software leads to different approaches and workflows in motion picture film preservation and restoration, resulting in inconsistency among archives and laboratories. This work presents an overview of the main approaches and systems used to digitize and encode motion picture film frames, in order to explain these systems’ potentials and limits.
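The negative-to-positive conversion mentioned above can be illustrated with a simplified sketch. Real scanner pipelines (e.g. the ADX/ACES film transforms) use measured per-stock characteristic curves and proprietary LUTs; the straight-line density model below, and the `d_min` and `gamma` values in it, are placeholder assumptions chosen only to show the shape of the operation.

```python
import numpy as np

def negative_to_positive(density, d_min=0.2, gamma=0.6):
    """Invert scanned negative densities to relative scene-linear exposure.

    Illustrative only: assumes the straight-line portion of the film's
    characteristic curve, D = d_min + gamma * log10(E), and solves for E.
    d_min (base density) and gamma are hypothetical, uncalibrated values.
    """
    density = np.asarray(density, dtype=float)
    # Subtract the film-base density, then invert the log response:
    # E = 10 ** ((D - d_min) / gamma)
    return 10.0 ** ((density - d_min) / gamma)

# A denser area of the negative (a brighter scene area) maps to a
# higher relative exposure value in the positive image.
frame = np.array([[0.2, 0.8], [1.4, 2.0]])  # scanned densities
positive = negative_to_positive(frame)
```

Because each scanner vendor embeds a different (often undisclosed) version of this transform, two archives scanning the same element can obtain visibly different results, which is the inconsistency this overview examines.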