This paper presents a collaborative project between the Imaging Department and the Paintings Conservation Department of the Metropolitan Museum of Art that used 3D imaging technology to restore missing and broken elements of an intricately carved late 18th-century giltwood frame.
Advancements in the accurate digitization of 3D objects through photogrammetry are ongoing in the cultural heritage space, for the purposes of digital archiving and worldwide access. This paper outlines and documents several user-driven enhancements to the photogrammetry pipeline that improve the fidelity of digitizations. In particular, we introduce Kintsugi 3D, a new platform for capturing empirically based specularity of 3D models, and visually compare traditional photogrammetry results with this new technique. Kintsugi 3D is a free and open-source package that can, among other things, generate a set of textures for a 3D model, including normal and specularity maps, based empirically on ground-truth observations from a flash-on-camera image set. It is hoped that the ongoing development of Kintsugi 3D will improve public access for institutions interested in sharing high-fidelity photogrammetry.
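As a rough illustration of this kind of empirical texture estimation (a minimal sketch, not Kintsugi 3D's actual implementation), per-texel reflectance fitting from a flash-on-camera image set could look like the following, where the Blinn-Phong lobe and the fixed shininess value are simplifying assumptions:

```python
import numpy as np

def fit_specular_texel(radiance, light_dirs, view_dirs, normal, shininess=50.0):
    """Least-squares fit of diffuse + specular reflectance for one texel.

    radiance   : (N,) observed intensities under N flash-lit photographs
    light_dirs : (N, 3) unit vectors from the surface toward the flash
    view_dirs  : (N, 3) unit vectors from the surface toward the camera
    normal     : (3,) surface normal for this texel

    With a flash mounted on the camera, light and view directions nearly
    coincide, so every observation samples close to the mirror direction.
    """
    # Diffuse basis: Lambertian n.l term for each observation.
    n_dot_l = np.clip(light_dirs @ normal, 0.0, None)
    # Specular basis: Blinn-Phong half-vector lobe (n.h)^shininess.
    half = light_dirs + view_dirs
    half /= np.linalg.norm(half, axis=1, keepdims=True)
    n_dot_h = np.clip(half @ normal, 0.0, None)
    basis = np.stack([n_dot_l, n_dot_h ** shininess], axis=1)  # (N, 2)
    # Solve radiance ~= kd * n.l + ks * (n.h)^m for kd and ks.
    (kd, ks), *_ = np.linalg.lstsq(basis, radiance, rcond=None)
    return kd, ks  # estimated diffuse albedo and specular reflectance
```

Repeating such a fit per texel yields the diffuse and specularity maps; a normal map can analogously be estimated from the same observations.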
Simplification of 3D meshes is a fundamental part of most 3D workflows, in which the amount of data is reduced to be more manageable for a user. The unprocessed data includes many redundancies and small errors from the 3D acquisition process, which can often be removed safely without jeopardizing its function. Several algorithmic approaches are in use across applications of 3D data, each with its own benefits and drawbacks; at present there is no standardized algorithm for cultural heritage. This investigation statistically evaluates how geometric primitive shapes behave under different simplification approaches and assesses what information might be lost in an HBIM (Heritage Building Information Modeling) or change-monitoring process for cultural heritage when each approach is applied to more complex manifolds.
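As an illustration of how such an evaluation might be set up, the following sketch uses Open3D (our choice for the example, not mandated by the paper) to apply two common simplification strategies to a primitive sphere and to measure surface deviation via sampled point distances, a crude proxy for the Hausdorff distance:

```python
import numpy as np
import open3d as o3d

# Primitive test shape: an ideal sphere standing in for "geometric primitives".
mesh = o3d.geometry.TriangleMesh.create_sphere(radius=1.0, resolution=100)
mesh.compute_vertex_normals()

# Two common simplification strategies with quite different behavior.
quadric = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
clustered = mesh.simplify_vertex_clustering(
    voxel_size=0.05,
    contraction=o3d.geometry.SimplificationContraction.Average)

def deviation_stats(original, simplified, n_samples=50_000):
    """Sample both surfaces and measure point-to-point deviation."""
    src = original.sample_points_uniformly(number_of_points=n_samples)
    dst = simplified.sample_points_uniformly(number_of_points=n_samples)
    d = np.asarray(src.compute_point_cloud_distance(dst))
    return d.mean(), d.max()

for name, simp in [("quadric", quadric), ("clustering", clustered)]:
    mean_d, max_d = deviation_stats(mesh, simp)
    print(f"{name}: {len(simp.triangles)} triangles, "
          f"mean deviation {mean_d:.5f}, max {max_d:.5f}")
```

Collecting such statistics over many primitives and parameter settings is one way to quantify what each simplification approach preserves or discards before applying it to heritage data.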
This paper addresses the concerns of the digital heritage field by setting out a series of recommendations for establishing a long-term preservation and dissemination workflow for 3D objects, which are increasingly prevalent but still lack a standardized process. We build our approach on interdisciplinary collaborations together with a comprehensive literature review. We provide a set of heuristics consisting of six components: data acquisition; data preservation; data description; data curation and processing; data dissemination; and data interoperability, analysis, and exploration. Each component is supplemented with suggestions for standards and tools that are either already common in 3D practice or show high potential but still seek consensus, such as the efforts carried out by the International Image Interoperability Framework (IIIF), toward formalizing a 3D environment fit for the Humanities. We then present a conceptual high-level 3D workflow that relies heavily on standards adhering to the Linked Open Usable Data (LOUD) design principles.
We describe a novel method for monocular view synthesis. The goal of our work is to create a visually pleasing set of horizontally spaced views from a single image, with applications in view synthesis for virtual reality and glasses-free 3D displays. Previous methods produce realistic results on images with a clear distinction between a foreground object and the background; we aim to create novel views in more general, crowded scenes in which there is no such distinction. Our main contribution is a computationally efficient method for realistic occlusion inpainting and blending, especially in complex scenes. Our method can be applied effectively to any image, as we show both qualitatively and quantitatively on a large dataset of stereo images; it performs natural disocclusion inpainting and maintains the shape and edge quality of foreground objects.
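To make the warping-and-inpainting idea concrete, here is a minimal NumPy sketch of disparity-based forward warping with a deliberately naive left-to-right disocclusion fill; the paper's actual inpainting and blending are considerably more sophisticated:

```python
import numpy as np

def synthesize_view(image, disparity, shift):
    """Forward-warp an image horizontally by a scaled per-pixel disparity.

    image     : (H, W, 3) float array
    disparity : (H, W) disparity map (larger values = closer to the camera)
    shift     : baseline scale; its sign selects a view left or right of input
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Paint far-to-near so nearer pixels overwrite occluded background.
    order = np.argsort(disparity, axis=None)
    ys, xs = np.unravel_index(order, disparity.shape)
    xt = np.round(xs + shift * disparity[ys, xs]).astype(int)
    valid = (xt >= 0) & (xt < w)
    out[ys[valid], xt[valid]] = image[ys[valid], xs[valid]]
    filled[ys[valid], xt[valid]] = True
    # Naive disocclusion fill: propagate the nearest valid pixel from the
    # left, a crude stand-in for learned or blended inpainting.
    for y in range(h):
        last = None
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            elif last is not None:
                out[y, x] = last
    return out
```

Sweeping `shift` over a small symmetric range produces the horizontally spaced view set; the quality of the disocclusion fill is precisely what distinguishes methods in this space.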
Digital archaeology is a rapidly evolving field, adapting new technologies to interpret diverse data sources. This paper details the superimposition of 2D maps and 3D data in an interactive 3D space, and their selective subtraction by a 3D brush system. The subject of study is the archaeological landscape of the medieval city of Angkor in Cambodia, an area of approximately 3500 square kilometers. By cutting through the superimposed layers of LIDAR point clouds, 2D mapping of the archaeological features, and the 3D reconstructions of the living city of Angkor, the brush system reveals both correspondences and discontinuities through interactive examination.
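As a rough sketch of the operation underlying such a brush (hypothetical code, not the project's implementation), subtracting a spherical brush volume from one superimposed layer might look like:

```python
import numpy as np

def subtract_brush(points, center, radius, mask=None):
    """Hide points of one layer that fall inside a spherical 3D brush stroke.

    points : (N, 3) coordinates of a layer (e.g., a lidar point cloud)
    center : (3,) brush position in the shared 3D space
    radius : brush radius
    mask   : (N,) running visibility mask carried across strokes
    """
    if mask is None:
        mask = np.ones(len(points), dtype=bool)
    inside = np.linalg.norm(points - center, axis=1) < radius
    return mask & ~inside  # accumulated visibility after this stroke

# Each superimposed layer (point cloud, rasterized 2D map, 3D reconstruction)
# keeps its own mask, so cutting through one layer reveals those beneath it.
```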
We propose a method for non-destructively embedding clearly readable information inside a 3D-fabricated object through a process of re-magnetization. Its strengths are that the 3D object is finished (ready to use) after only a printing process, and that the embedded information can be rewritten many times. In this paper, we investigate the effect of the depth (position inside the object) of the storage cell, which is printed in a ferromagnetic filament, on the clarity of the embedded information. Our purpose is to find the conditions that best balance obtaining high magnetic strength with protecting the embedded information. With these advantages, the method could lead to the production of high-quality household 4D objects, such as personal interactive 3D objects, in the near future.
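As an illustrative sketch only (the sensor grid and thresholding scheme here are hypothetical, not the authors' setup), recovering bits from a scanned field map and quantifying their "clarity" as a signal margin might look like:

```python
import numpy as np

def decode_cells(field_map, threshold=None):
    """Recover embedded bits from a scanned magnetic field map.

    field_map : (rows, cols) field strength measured above the storage cells,
                e.g., from a Hall-sensor scan over the object's surface
    threshold : field level separating magnetized (1) from erased (0) cells;
                defaults to the midpoint between strongest and weakest cell
    """
    if threshold is None:
        threshold = (field_map.max() + field_map.min()) / 2.0
    bits = (field_map > threshold).astype(int)
    # Clarity margin: deeper cells weaken the field at the surface, shrinking
    # the gap between 1-cells and 0-cells and making decoding less reliable.
    ones = field_map[bits == 1]
    zeros = field_map[bits == 0]
    margin = ones.min() - zeros.max() if ones.size and zeros.size else float("nan")
    return bits, margin
```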
In the early phases of the pandemic lockdown, our team was eager to share our collection in new ways. Using an existing 3D asset and recent advancements in AR technology, we were able to augment a 3D model of a collection object with the voice of a curator, adding context and value. This experience leveraged the unique capabilities of USDZ, the packaging extension of Pixar's open USD format. This paper documents the workflow behind creating the AR experience as well as other applications of the USD/USDZ format for cultural heritage. It also provides valuable information about developments, limitations, and misconceptions surrounding WebXR, glTF, and USDZ.
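A minimal sketch of how such an asset could be assembled with the USD Python bindings; the file names and prim paths are hypothetical, and attaching narration via the UsdMedia.SpatialAudio schema is one plausible route rather than the exact workflow the paper documents:

```python
from pxr import Usd, UsdGeom, UsdMedia

# Hypothetical inputs: an existing museum-object model and a curator recording.
stage = Usd.Stage.CreateNew("object_tour.usda")
root = UsdGeom.Xform.Define(stage, "/Object")
stage.SetDefaultPrim(root.GetPrim())

# Reference the existing 3D asset rather than copying its geometry.
model = stage.DefinePrim("/Object/Model")
model.GetReferences().AddReference("collection_object.usd")

# Audio prim carrying the curator's narration (UsdMedia schema).
audio = UsdMedia.SpatialAudio.Define(stage, "/Object/Narration")
audio.CreateFilePathAttr("curator_voice.m4a")
audio.CreateAuralModeAttr(UsdMedia.Tokens.nonSpatial)  # hear it everywhere

stage.GetRootLayer().Save()
# Package for AR Quick Look on iOS, e.g. with the usdzip tool:
#   usdzip object_tour.usdz object_tour.usda collection_object.usd curator_voice.m4a
```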
Photogrammetric three-dimensional (3D) reconstruction is an image-processing technique used to develop digital 3D models from a series of two-dimensional images. It is commonly applied to optical photography, though it can also be applied to microscopic imaging techniques such as scanning electron microscopy (SEM). The authors propose a method for applying photogrammetry to SEM micrographs in order to develop 3D models suitable for volumetric analysis. SEM operating parameters for image acquisition are explored and their effects discussed. The study considered a variety of microscopic samples, differing in size, geometry, and composition, and found that optimal operating parameters vary with sample geometry. Evaluation of the reconstructed 3D models suggests that model quality strongly determines the accuracy of the obtainable volumetric measurements. In particular, the authors report on volumetric results from a laser ablation pit and discuss considerations for data acquisition routines.
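For the volumetric analysis step, the volume enclosed by a watertight, consistently oriented mesh can be computed as a sum of signed tetrahedron volumes via the divergence theorem; a short self-contained sketch:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Signed volume of a closed triangle mesh via the divergence theorem.

    vertices : (V, 3) array of vertex positions
    faces    : (F, 3) integer array of triangle vertex indices, consistently
               oriented so that face normals point outward
    """
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    # Each triangle and the origin span a tetrahedron of signed volume
    # a.(b x c)/6; these sum to the enclosed volume for a watertight mesh.
    return np.einsum("ij,ij->i", a, np.cross(b, c)).sum() / 6.0

# For an ablation pit, one would first close the pit opening (or difference
# the reconstruction against a reference surface) before measuring volume.
```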