First-person videos (FPVs) captured by body-mounted cameras are usually too shaky to watch comfortably. Many approaches, both software- and hardware-based, have been proposed for stabilization, and most are designed to maximize video stability. However, according to our previous work [1], FPVs need to be stabilized carefully so that their first-person motion information (FPMI) is preserved. To stabilize FPVs appropriately, we propose a new video stability estimator for FPVs, Viewing Experience under the "Central bias + Uniform" model (VECU), building on [1]. We first discuss stability estimators and their role in applications. Based on this discussion and our application target, we design a subjective test that uses real-scene videos with synthetic camera motions to refine the human perception model proposed in [1]. The proposed VECU estimator measures absolute stability; experimental results show that it has a good interval scale and outperforms existing stability estimators in predicting subjective scores.
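To illustrate the "Central bias + Uniform" idea, the sketch below blends a centered Gaussian attention map with a uniform map and uses it to weight per-pixel inter-frame motion before scoring stability. This is a minimal illustration of the weighting scheme only, not the VECU formula from the paper; the function names and the parameters alpha and sigma_frac are hypothetical.

    import numpy as np

    def central_bias_uniform_weights(h, w, alpha=0.5, sigma_frac=0.25):
        # Hypothetical blend: alpha * centered Gaussian + (1 - alpha) * uniform.
        ys, xs = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        sigma = sigma_frac * min(h, w)
        gauss = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
        gauss /= gauss.sum()
        uniform = np.full((h, w), 1.0 / (h * w))
        return alpha * gauss + (1.0 - alpha) * uniform

    def stability_score(motion_magnitudes, alpha=0.5):
        # motion_magnitudes: list of HxW arrays of per-pixel inter-frame
        # motion magnitude. Lower attention-weighted motion -> higher score.
        h, w = motion_magnitudes[0].shape
        weights = central_bias_uniform_weights(h, w, alpha)
        weighted = [float((np.abs(m) * weights).sum()) for m in motion_magnitudes]
        return 1.0 / (1.0 + np.mean(weighted))  # placeholder score mapping

The central-bias component reflects the tendency of viewers to attend to the frame center, so motion there is penalized more heavily than motion at the periphery.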
First-person videos (FPVs) recorded by wearable cameras have characteristics that differ from those of mobile videos: their frames are subject to blur, rotation, shear, and fisheye distortions. We design a subjective test that uses actual captured images with real distortions, synthetic distortions, or a combination of both. The results indicate that the perception of shear is less content-dependent than that of rotation, while for fisheye distortion both personal preference and content dependence affect the subjective scores. The performance of seven no-reference (NR) quality estimators (QEs) and our own QE, local visual information (LVI) [1], is evaluated against the subjective results. We propose two mapping functions, one for rotation and one for shear, that improve the ability of LVI and four NR QEs to predict the subjective scores.
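The paper's two mapping functions are specific to rotation and shear and are not reproduced here; the sketch below only shows the generic way a monotonic mapping between raw QE outputs and subjective scores is fitted, using the five-parameter logistic that is common in video-quality studies. All names and initial values are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic_map(q, b1, b2, b3, b4, b5):
        # Five-parameter logistic commonly used to map raw QE outputs
        # onto the subjective-score scale (not the paper's mappings).
        return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

    def fit_mapping(qe_scores, mos):
        # Fit on (raw QE score, mean opinion score) pairs; returns a
        # callable that maps new QE outputs to predicted scores.
        qe_scores, mos = np.asarray(qe_scores, float), np.asarray(mos, float)
        p0 = [np.ptp(mos), 1.0, np.mean(qe_scores), 0.0, np.mean(mos)]
        params, _ = curve_fit(logistic_map, qe_scores, mos, p0=p0, maxfev=20000)
        return lambda q: logistic_map(np.asarray(q, float), *params)

After fitting, the correlation (e.g., Pearson or Spearman) between the mapped scores and the subjective scores is the usual measure of how well a QE predicts human judgments.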