We introduce a technique for improving photographs using inverse lighting, a new process based on algorithms developed in computer graphics for computing the reflection of light in 3D space. From a photograph and a 3D surface model of the object pictured, inverse lighting estimates the directional distribution of the incident light. We then use this information to process the photograph digitally to alter the lighting on the object. Inverse lighting is a specific example of the general idea of inverse rendering: using the methods of computer graphics, which normally render images from scene information, to instead infer scene information from images. Our system uses physically based rendering technology to construct a linear least-squares system that we solve to find the lighting. As an application, the results are then used to simulate a change in the incident light in the photograph. We describe an implementation that uses 3D models from a laser range scanner and photographs from a high-resolution color CCD camera, and we demonstrate the system on a simple test object and a human face.
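The abstract's core step — building a linear least-squares system whose solution is the incident lighting — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a purely Lambertian surface and a small set of hypothetical directional basis lights, so each matrix entry is simply the clamped cosine between a pixel's surface normal and a basis light direction.

```python
import numpy as np

def lighting_matrix(normals, light_dirs):
    """A[i, j] = contribution of basis light j to pixel i, assuming a
    Lambertian surface: max(0, n_i . d_j)."""
    return np.maximum(normals @ light_dirs.T, 0.0)

def estimate_lighting(normals, pixel_values, light_dirs):
    """Solve the linear least-squares system A x ~= b for the unknown
    intensities x of the basis lights."""
    A = lighting_matrix(normals, light_dirs)
    x, *_ = np.linalg.lstsq(A, pixel_values, rcond=None)
    return x

# Synthetic check: render pixels under known light intensities,
# then recover those intensities from the pixels alone.
rng = np.random.default_rng(0)
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dirs = np.eye(3)              # three axis-aligned basis lights (assumed)
true_x = np.array([0.8, 0.3, 0.5])  # their intensities, unknown to the solver
pixels = lighting_matrix(normals, light_dirs) @ true_x
est_x = estimate_lighting(normals, pixels, light_dirs)
```

Once the lighting coefficients are recovered, relighting amounts to rendering the same geometry under a different set of coefficients and compositing the change into the photograph.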
Journal Title : Color and Imaging Conference
Publisher Name : Society of Imaging Science and Technology
Publisher Location : 7003 Kilworth Lane, Springfield, VA 22151, USA
Stephen R. Marschner, Donald P. Greenberg, "Inverse Lighting for Photography," in Proc. IS&T 5th Color and Imaging Conf., 1997, pp. 262-265. https://doi.org/10.2352/CIC.1997.5.1.art00052