First-person videos (FPVs) recorded by wearable cameras have different characteristics from mobile videos. Video frames in FPVs are subject to blur, rotation, shear, and fisheye distortions. We design a subjective test that uses captured images with real distortions, synthetic distortions, or a combination of both. Results indicate that shear is less content-dependent than rotation. For fisheye distortion, both personal preference and content dependence affect the subjective results. The performance of 7 no-reference (NR) quality estimators (QEs) and of our QE, local visual information (LVI) [1], is evaluated against the subjective results. We propose two mapping functions for rotation and shear that improve the ability of LVI and 4 of the NR QEs to accurately predict the subjective scores.
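The abstract does not specify the form of the proposed rotation and shear mapping functions, so the sketch below is only illustrative: it fits a standard 4-parameter logistic mapping (a common choice when aligning raw QE outputs with subjective scores) and reports the Pearson correlation after mapping. The function name `logistic_map`, the toy data, and the 1-5 score scale are all assumptions, not the paper's actual method.

```python
# Hypothetical stand-in for the paper's mapping functions: a 4-parameter
# logistic fit from raw QE outputs to the subjective-score scale.
import numpy as np
from scipy.optimize import curve_fit

def logistic_map(q, b1, b2, b3, b4):
    """Monotonic mapping from a raw QE output q to the subjective-score scale."""
    return b1 + (b2 - b1) / (1.0 + np.exp(-(q - b3) / b4))

# Toy data standing in for per-image QE outputs and mean opinion scores (MOS).
rng = np.random.default_rng(0)
qe_scores = rng.uniform(0.0, 1.0, 40)                    # raw QE outputs
mos = 1.0 + 4.0 * qe_scores + rng.normal(0.0, 0.2, 40)   # MOS on an assumed 1-5 scale

# Fit the mapping, then measure prediction accuracy via Pearson correlation.
p0 = [mos.min(), mos.max(), float(qe_scores.mean()), 0.5]
params, _ = curve_fit(logistic_map, qe_scores, mos, p0=p0, maxfev=10000)
mos_pred = logistic_map(qe_scores, *params)
pearson = np.corrcoef(mos_pred, mos)[0, 1]
print(f"Pearson correlation after mapping: {pearson:.3f}")
```

In an evaluation of this kind, one such mapping would typically be fitted per distortion type (here, one for rotation and one for shear) before comparing each QE's mapped predictions against the subjective scores.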