Volume: 35 | Article ID: IMAGE-272
Light-weight recurrent network for real-time video super-resolution
DOI: 10.2352/EI.2023.35.7.IMAGE-272 | Published Online: January 2023
Abstract

Real-time video super-resolution (VSR) is a promising way to improve video quality for video conferencing and media playback, applications that demand low latency and short inference times. Although state-of-the-art VSR methods with well-designed architectures have been proposed, many of them cannot be converted into real-time VSR models because of their high computational complexity and memory footprint. In this work, we propose a light-weight recurrent network for this task, in which motion-compensation offsets are estimated by an optical flow estimation network, features extracted from the previous high-resolution output are aligned to the current target frame, and a hidden state propagates long-term information. We show that the proposed method is efficient for real-time video super-resolution. We also carefully study the effect of including an optical flow estimation module in a light-weight recurrent VSR model and compare two ways of training the models. We further compare four different motion estimation networks that have been used in light-weight VSR approaches and demonstrate the importance of reducing information loss in motion estimation.
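
To make the recurrent structure described above concrete, the sketch below shows a minimal PyTorch cell of this general kind: a small optical-flow network estimates motion offsets, the previous high-resolution output and the hidden state are warped to the current frame, and a fusion block updates the hidden state before upsampling. The module names (TinyFlowNet, RecurrentVSRCell, flow_warp), layer widths, and the x4 scale factor are illustrative assumptions, not the architecture reported in the paper.

# Minimal sketch of a flow-aligned recurrent VSR cell; sizes and layer counts are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def flow_warp(x, flow):
    """Warp a feature map x (N, C, H, W) with optical flow (N, 2, H, W) via bilinear sampling."""
    n, _, h, w = x.shape
    # Base sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(torch.arange(h, device=x.device),
                            torch.arange(w, device=x.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float()       # (2, H, W)
    grid = grid.unsqueeze(0) + flow                   # add per-pixel offsets
    # Normalize to [-1, 1] for grid_sample
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=3)       # (N, H, W, 2)
    return F.grid_sample(x, grid, align_corners=True)


class TinyFlowNet(nn.Module):
    """Hypothetical light-weight flow estimator: predicts a 2-channel offset map."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2, 3, padding=1))

    def forward(self, prev_lr, curr_lr):
        return self.net(torch.cat((prev_lr, curr_lr), dim=1))


class RecurrentVSRCell(nn.Module):
    """One recurrent step: align previous information to the current frame,
    fuse it with the hidden state, and reconstruct the HR frame."""
    def __init__(self, hidden_ch=32, scale=4):
        super().__init__()
        self.scale = scale
        self.flow_net = TinyFlowNet()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 + 3 + hidden_ch, hidden_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.upsample = nn.Sequential(
            nn.Conv2d(hidden_ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, curr_lr, prev_lr, prev_hr, hidden):
        flow = self.flow_net(prev_lr, curr_lr)        # motion-compensation offsets
        # Align the previous HR output (brought back to LR size) and the hidden state
        prev_hr_lr = F.interpolate(prev_hr, scale_factor=1 / self.scale,
                                   mode="bilinear", align_corners=False)
        aligned_prev = flow_warp(prev_hr_lr, flow)
        aligned_hidden = flow_warp(hidden, flow)
        hidden = self.fuse(torch.cat((curr_lr, aligned_prev, aligned_hidden), dim=1))
        residual = F.interpolate(curr_lr, scale_factor=self.scale,
                                 mode="bilinear", align_corners=False)
        return residual + self.upsample(hidden), hidden


# Usage (illustrative): unroll the cell over a clip, carrying the HR output and hidden state.
cell = RecurrentVSRCell()
lr_frames = torch.rand(1, 5, 3, 64, 64)               # (N, T, C, H, W) toy input
prev_lr = lr_frames[:, 0]
prev_hr = F.interpolate(prev_lr, scale_factor=4, mode="bilinear", align_corners=False)
hidden = torch.zeros(1, 32, 64, 64)
for t in range(lr_frames.shape[1]):
    curr_lr = lr_frames[:, t]
    prev_hr, hidden = cell(curr_lr, prev_lr, prev_hr, hidden)
    prev_lr = curr_lr

At inference time this kind of cell processes frames strictly in order, so only the previous LR frame, the previous HR output, and the hidden state need to be kept in memory, which is what makes the recurrent formulation attractive for real-time use.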

  Cite this article 

Tianqi Wang, Jan P. Allebach, Qian Lin, "Light-weight recurrent network for real-time video super-resolution," in Electronic Imaging, 2023, pp. 272-1 - 272-6, https://doi.org/10.2352/EI.2023.35.7.IMAGE-272

  Copyright statement 
Copyright © 2023, Society for Imaging Science and Technology
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T, 7003 Kilworth Lane, Springfield, VA 22151, USA