Neural Radiance Fields (NeRF) have attracted particular attention for their exceptional capability to generate virtual views from a sparse set of input images. However, their applicability is constrained by the substantial number of images required for training. This work introduces a data augmentation methodology for training NeRF with external depth information. The approach generates new virtual views at different positions with MPEG's reference view synthesizer (RVS) to enlarge the pool of training images for NeRF. Results demonstrate a substantial improvement in output quality when the generated views are used for training compared to when they are omitted.
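RVS itself is an external MPEG reference tool, so the sketch below only illustrates the core depth-based reprojection underlying such view synthesis: each source pixel is back-projected with its depth, moved to a novel camera, and splatted with a z-buffer. All names (`warp_to_novel_view`, `T_src_to_tgt`) are our own, and the nearest-wins splat is a simplification of RVS's actual blending; the generated images would then simply be appended to NeRF's training set.

```python
import numpy as np

def warp_to_novel_view(src_img, src_depth, K, T_src_to_tgt):
    """Forward-warp a source image to a novel viewpoint using its depth map.

    src_img:      (H, W, 3) color image
    src_depth:    (H, W) per-pixel depth in the source camera
    K:            (3, 3) shared camera intrinsics
    T_src_to_tgt: (4, 4) rigid transform from source to target camera
    """
    H, W = src_depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW

    # Back-project pixels into 3D points in the source camera frame.
    pts_src = np.linalg.inv(K) @ (pix * src_depth.reshape(1, -1))
    pts_src_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])

    # Transform the points into the target camera and project them.
    pts_tgt = (T_src_to_tgt @ pts_src_h)[:3]
    proj = K @ pts_tgt
    z = proj[2]
    z_safe = np.where(z > 1e-6, z, 1.0)          # avoid divide-by-zero
    uv = np.rint(proj[:2] / z_safe).astype(int)

    # Z-buffered splat: where pixels collide, the nearest surface wins.
    out = np.zeros_like(src_img)
    zbuf = np.full((H, W), np.inf)
    src_flat = src_img.reshape(-1, 3)
    valid = (z > 1e-6) & (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H)
    for i in np.flatnonzero(valid):
        x, y = uv[0, i], uv[1, i]
        if z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]
            out[y, x] = src_flat[i]
    return out
```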
In this paper we propose a solution for view synthesis of scenes containing highly non-Lambertian objects. While Image-Based Rendering methods can easily render diffuse materials given only their depth, non-Lambertian objects exhibit non-linear displacements of their features, characterized by curved lines in epipolar plane images. Hence, we propose to replace the depth maps used for rendering new viewpoints with a more complex "non-Lambertian map" describing the light field's behavior. In a 4D light field, diffuse features are displaced linearly according to their disparity, but non-Lambertian features can follow arbitrary trajectories and need to be approximated by non-Lambertian maps. We compute these maps from nine input images using Bézier or polynomial interpolation. After the map computation, a classical Image-Based Rendering method is applied to warp the input images to novel viewpoints.
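As a toy illustration of this trajectory fitting, the sketch below fits a polynomial u(s) to a single feature's track across nine viewpoints and evaluates it at a novel camera position. The synthetic data, polynomial degree, and variable names are our own assumptions; the paper's maps are dense per-pixel and may use Bézier rather than polynomial interpolation.

```python
import numpy as np

# Nine camera positions along the baseline, matching the paper's input setup.
s = np.linspace(-1.0, 1.0, 9)

# Synthetic feature track: a diffuse point would follow the straight EPI line
# u = u0 + d * s (d = disparity); a non-Lambertian feature, e.g. a specular
# highlight, bends away from it, mimicked here by a quadratic perturbation.
u0, disparity = 120.0, 8.0
u_obs = u0 + disparity * s + 3.0 * s**2

# The per-feature "non-Lambertian map": a cubic polynomial u(s) replaces
# the single scalar disparity used for diffuse Image-Based Rendering.
coeffs = np.polyfit(s, u_obs, deg=3)

# Warping to a novel viewpoint evaluates the fitted trajectory there
# instead of applying a linear disparity shift.
s_novel = 0.37
u_novel = np.polyval(coeffs, s_novel)
print(f"feature lands at u = {u_novel:.2f} in the novel view")
```

For a purely diffuse feature the fitted higher-order coefficients vanish and the map degenerates to the usual linear disparity model, which is why the same warping machinery handles both cases.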