Virtual Reality and Free Viewpoint navigation require high-quality rendered images to be realistic. Current hardware-assisted raytracing methods cannot reach the expected quality in real time and are also limited by the quality of the 3D mesh. An alternative is Depth Image Based Rendering (DIBR), where the input consists only of images and their associated depth maps, from which virtual views are synthesized for the Head Mounted Display (HMD). The MPEG Immersive Video (MIV) standard uses such a DIBR algorithm, called the Reference View Synthesizer (RVS). We first implemented a GPU version, called the Realtime accelerated View Synthesizer (RaViS), that synthesizes two virtual views in real time for the HMD. In the present paper, we explore the differences between desktop and embedded GPU platforms, porting RaViS to an embedded HMD without the need for a separate, discrete desktop GPU. The proposed solution gives a first insight into DIBR view synthesis techniques in embedded HMDs using OpenGL and Vulkan, cross-platform 3D rendering APIs with support for embedded devices.
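For readers unfamiliar with DIBR, the sketch below illustrates the core per-pixel warp that such synthesizers build on: each source pixel is unprojected using its depth value, transformed into the virtual camera's frame, and reprojected into the target image, with a z-test keeping the nearest surface. This is a minimal CPU illustration under the assumption of pinhole cameras; it is not the RVS or RaViS implementation (which perform the warp on the GPU with more elaborate reprojection, blending, and inpainting), and all names here (`warpView`, `Intrinsics`, `Pose`) are hypothetical.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical pinhole intrinsics: focal lengths and principal point (pixels).
struct Intrinsics { float fx, fy, cx, cy; };

// Rigid transform from source-camera to target-camera coordinates:
// X_tgt = R * X_src + t.
struct Pose { float R[3][3]; float t[3]; };

// Forward-warp one source view (color + per-pixel depth, in meters) into the
// target view. A z-buffer keeps the nearest surface when several source
// pixels land on the same target pixel.
void warpView(const std::vector<float>& srcDepth,    // w*h depths
              const std::vector<uint32_t>& srcColor, // w*h packed RGBA
              int w, int h,
              const Intrinsics& Ks, const Intrinsics& Kt,
              const Pose& srcToTgt,
              std::vector<uint32_t>& tgtColor,       // w*h, pre-cleared
              std::vector<float>& zbuf)              // w*h, initialized to +inf
{
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            float z = srcDepth[v * w + u];
            if (z <= 0.f) continue;                  // invalid depth sample
            // Unproject pixel (u,v) to a 3D point in source camera space.
            float X = (u - Ks.cx) / Ks.fx * z;
            float Y = (v - Ks.cy) / Ks.fy * z;
            // Transform the point into target camera space.
            const auto& R = srcToTgt.R;
            const auto& t = srcToTgt.t;
            float Xt = R[0][0]*X + R[0][1]*Y + R[0][2]*z + t[0];
            float Yt = R[1][0]*X + R[1][1]*Y + R[1][2]*z + t[1];
            float Zt = R[2][0]*X + R[2][1]*Y + R[2][2]*z + t[2];
            if (Zt <= 0.f) continue;                 // behind the target camera
            // Project onto the target image plane.
            int ut = (int)std::lround(Kt.fx * Xt / Zt + Kt.cx);
            int vt = (int)std::lround(Kt.fy * Yt / Zt + Kt.cy);
            if (ut < 0 || ut >= w || vt < 0 || vt >= h) continue;
            // Keep the closest contribution (simple z-test).
            int idx = vt * w + ut;
            if (Zt < zbuf[idx]) {
                zbuf[idx] = Zt;
                tgtColor[idx] = srcColor[v * w + u];
            }
        }
    }
}
```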
Laurie Van Bogaert, Daniele Bonatto, Sarah Fernandes Pinto Fachada, Gauthier Lafruit, "Novel view synthesis in embedded virtual reality devices," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Engineering Reality of Virtual Reality, 2022, pp. 269-1 - 269-6, https://doi.org/10.2352/EI.2022.34.12.ERVR-269