Article
Volume: 34 | Article ID: ERVR-269
Novel view synthesis in embedded virtual reality devices
DOI: 10.2352/EI.2022.34.12.ERVR-269 | Published Online: January 2022
Abstract

Virtual Reality and Free Viewpoint navigation require high-quality rendered images to be realistic. Current hardware-assisted raytracing methods cannot reach the expected quality in real-time and are also limited by the quality of the 3D mesh. An alternative is Depth Image Based Rendering (DIBR), where the input consists only of images and their associated depth maps, from which virtual views are synthesized for the Head-Mounted Display (HMD). The MPEG Immersive Video (MIV) standard uses such a DIBR algorithm, called the Reference View Synthesizer (RVS). We first implemented a GPU version, called the Realtime accelerated View Synthesizer (RaViS), that synthesizes two virtual views in real-time for the HMD. In the present paper, we explore the differences between desktop and embedded GPU platforms, porting RaViS to an embedded HMD without the need for a separate, discrete desktop GPU. The proposed solution gives a first insight into DIBR view synthesis techniques in embedded HMDs using OpenGL and Vulkan, cross-platform 3D rendering APIs with support for embedded devices.
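To make the DIBR idea concrete, the following is a minimal sketch of the classical forward-warping step that such synthesizers build on: each source pixel is unprojected using its depth value, rigidly transformed into the virtual camera's frame, and reprojected with a depth test. This is an illustrative assumption of the general technique, not the actual RVS/RaViS implementation (which runs on the GPU and handles de-occlusion and blending); all names, the pinhole model, and the nearest-pixel splat are simplifications.

```cpp
// Hypothetical DIBR forward-warping sketch (grayscale, single source view).
#include <cmath>
#include <vector>

struct Pinhole {
    double fx, fy, cx, cy;  // pinhole intrinsics, in pixels
};

// Warp a source view (color + per-pixel depth in metres) into a virtual view.
// R (row-major 3x3) and t map source-camera coordinates to virtual-camera
// coordinates: p_virtual = R * p_source + t.
// dstColor is pre-filled with a background value; dstDepth with +infinity.
void warpDIBR(const std::vector<float>& srcColor,
              const std::vector<float>& srcDepth,
              int w, int h,
              const Pinhole& src, const Pinhole& dst,
              const double R[9], const double t[3],
              std::vector<float>& dstColor,
              std::vector<float>& dstDepth)
{
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            const double z = srcDepth[v * w + u];
            if (z <= 0.0) continue;            // invalid depth sample
            // Unproject pixel (u, v) into the source camera's 3D frame.
            const double X = (u - src.cx) / src.fx * z;
            const double Y = (v - src.cy) / src.fy * z;
            // Rigid transform into the virtual camera's frame.
            const double Xv = R[0]*X + R[1]*Y + R[2]*z + t[0];
            const double Yv = R[3]*X + R[4]*Y + R[5]*z + t[1];
            const double Zv = R[6]*X + R[7]*Y + R[8]*z + t[2];
            if (Zv <= 0.0) continue;           // behind the virtual camera
            // Reproject to virtual image coordinates (nearest-pixel splat).
            const int ud = (int)std::lround(dst.fx * Xv / Zv + dst.cx);
            const int vd = (int)std::lround(dst.fy * Yv / Zv + dst.cy);
            if (ud < 0 || ud >= w || vd < 0 || vd >= h) continue;
            // Z-buffer test: keep the sample closest to the virtual camera.
            if (Zv < dstDepth[vd * w + ud]) {
                dstDepth[vd * w + ud] = (float)Zv;
                dstColor[vd * w + ud] = srcColor[v * w + u];
            }
        }
    }
}
```

In a real-time synthesizer this per-pixel loop is what moves to the GPU, e.g. as an OpenGL or Vulkan shader pass, which is the porting effort the paper investigates for embedded HMDs.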

Views 59
Downloads 19
  Cite this article 

Laurie Van Bogaert, Daniele Bonatto, Sarah Fernandes Pinto Fachada, Gauthier Lafruit, "Novel view synthesis in embedded virtual reality devices," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Engineering Reality of Virtual Reality, 2022, pp. 269-1 - 269-6, https://doi.org/10.2352/EI.2022.34.12.ERVR-269

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2022
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T, 7003 Kilworth Lane, Springfield, VA 22151 USA