Volume: 34 | Article ID: IMAGE-255
VR facial expression tracking via action unit intensity regression model
DOI: 10.2352/EI.2022.34.8.IMAGE-255 | Published Online: January 2022
Abstract

Virtual Reality (VR) Head-Mounted Displays (HMDs), also known as VR headsets, are powerful devices that enable interaction between people and a computer-generated virtual 3D world. For an immersive VR experience, realistic facial animation of the participant is crucial. However, facial expression tracking has been one of the major challenges of facial animation. Existing face tracking methods often rely on a statistical model of the entire face, which is not feasible when the face is partially occluded, as is inevitable with HMDs. In this paper, we provide an overview of the current state of VR facial expression tracking and discuss bottlenecks for VR expression re-targeting. We introduce a baseline method for expression tracking from single-view, partially occluded facial infrared (IR) images captured by the HP Reverb G2 VR headset camera. The experiment shows good visual prediction results for mouth-region expressions from a single person.
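The core technique named in the abstract is regressing continuous action unit (AU) intensities from partially occluded IR images. As a rough illustration only, the following is a minimal PyTorch sketch of what such a regressor might look like; the backbone, the 128x128 input resolution, and the choice of eight mouth-region AUs are assumptions for illustration, not the architecture described in the paper.

```python
# Minimal sketch of an AU intensity regressor (illustrative assumptions:
# small CNN backbone, 128x128 single-channel IR input, 8 mouth-region AUs).
import torch
import torch.nn as nn

class AUIntensityRegressor(nn.Module):
    """Regress continuous AU intensities from a single-channel IR image."""
    def __init__(self, num_aus: int = 8):
        super().__init__()
        # Convolutional encoder for 1-channel (IR) input.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), # 32 -> 16
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                                # 16 -> 1
        )
        # Head maps pooled features to one intensity per AU; the sigmoid keeps
        # predictions in [0, 1], rescalable to the 0-5 FACS intensity range.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, num_aus),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

# Usage with placeholder data: a batch of IR mouth crops and AU labels.
model = AUIntensityRegressor()
ir_batch = torch.rand(4, 1, 128, 128)                    # fake IR images
target = torch.rand(4, 8)                                # fake AU intensities
loss = nn.functional.mse_loss(model(ir_batch), target)   # regression loss
loss.backward()
```

Framing the task as regression (rather than discrete expression classification) yields per-AU intensity values that can drive a blendshape rig directly for expression re-targeting.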

  Cite this article 

Xiaoyu Ji, Justin Yang, Jishang Wei, Yvonne Huang, Qian Lin, Jan P. Allebach, Fengqing Zhu, "VR facial expression tracking via action unit intensity regression model," in Electronic Imaging, 2022, pp. 255-1 - 255-7, https://doi.org/10.2352/EI.2022.34.8.IMAGE-255

  Copyright statement 
Copyright © 2022, Society for Imaging Science and Technology
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T, 7003 Kilworth Lane, Springfield, VA 22151, USA