User expectation is one of the main factors driving user satisfaction for video streaming services and online social media platforms. Depending on the context, users may have different expectations of video quality. Measuring the Quality of Experience (QoE) while taking user expectations into account provides online social media platforms with increased efficiency and users with higher satisfaction. In this work, we explore the relation between video quality and the acceptability and annoyance of video quality in the context of online social media platforms. Moreover, we present a methodology to determine metric thresholds for the acceptability and annoyance of video quality, and we compare the estimated thresholds with those reported in previous studies.
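The threshold-estimation idea can be illustrated with a toy sketch. This is not the paper's methodology; it only shows one common way to locate an acceptability threshold, by interpolating the metric value at which a target fraction of users votes a condition "acceptable". All names and data below are hypothetical.

```python
# Hypothetical sketch: estimate the metric value at which a target
# fraction of users rates the quality as "acceptable", by linear
# interpolation between measured acceptance ratios.

def acceptability_threshold(scores, accept_ratios, target=0.5):
    """Return the metric score where the acceptance ratio crosses `target`.

    scores        -- metric values per test condition, ascending
    accept_ratios -- fraction of users accepting each condition
    """
    points = list(zip(scores, accept_ratios))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= target <= y1:  # crossing found between x0 and x1
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    return None  # target ratio never crossed

# Made-up data: metric scores vs. share of "acceptable" votes
scores = [20, 40, 60, 80]
ratios = [0.10, 0.35, 0.70, 0.95]
print(acceptability_threshold(scores, ratios))
```

With more conditions and per-user votes, a logistic fit would typically replace the piecewise-linear interpolation, but the idea of reading off the score at a fixed acceptance level is the same.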
Changes in the footballing world’s approach to technology and innovation contributed to the decision by the International Football Association Board (IFAB) to introduce Video Assistant Referees (VAR). The change meant that, under strict protocols, referees could use video replays to review decisions in the event of a “clear and obvious error” or a “serious missed incident”. This created a need for the Fédération Internationale de Football Association (FIFA) to develop methods for quality control of VAR systems, which was done in collaboration with RISE Research Institutes of Sweden AB. One of the important aspects is video quality. The novelty of this study is a user study specifically targeting video experts, i.e., measuring the quality perceived by professionals whose main occupation is video production. An experiment was performed involving 25 video experts. In addition, six video quality models were benchmarked against the user data and evaluated to show which of them provides the best predictions of perceived quality for this application. The Video Quality Metric for variable frame delay (VQM_VFD) had the best performance for both formats, followed by Video Multimethod Assessment Fusion (VMAF) and the VQM General model.
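Benchmarking objective models against subjective data, as done in this study, usually means correlating model predictions with mean opinion scores. The sketch below is illustrative only (not the paper's evaluation code), using made-up scores and Pearson linear correlation as the criterion.

```python
# Illustrative sketch: compare objective quality models by the Pearson
# correlation of their predictions with subjective mean opinion scores.
import math

def pearson(x, y):
    """Pearson linear correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up example: subjective MOS vs. predictions from two models
mos       = [1.8, 2.6, 3.4, 4.1, 4.6]
model_a   = [1.9, 2.5, 3.5, 4.0, 4.7]  # hypothetical predictions
model_b   = [2.2, 2.4, 3.1, 4.3, 4.4]  # hypothetical predictions

for name, pred in [("model_a", model_a), ("model_b", model_b)]:
    print(name, round(pearson(mos, pred), 3))
```

In practice, benchmarks of this kind also report Spearman rank correlation and RMSE after a monotonic mapping, since Pearson correlation alone rewards linearity rather than prediction accuracy.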
Omnidirectional video (ODV) streaming has become widespread. Since the data size of ODV is extremely large, tile-based streaming has been developed to compress the data. In this coding technology, high-quality tiles encoded at a higher bitrate for the user’s viewing direction and low-quality tiles encoded at a lower bitrate for the whole environment are sent to the client, and a player decodes these tiles. As a result, quality degrades due to coding, streaming, and buffering at the client. Therefore, to provide high-quality tile-based ODV streaming services, quality of experience needs to be monitored by comprehensively evaluating these quality degradations. By taking into account the quality degradation caused by the low-quality tiles, we extend the ITU-T Recommendation P.1203 model, which can be used for monitoring the quality of 2D video streaming services, to tile-based ODV streaming services. Our model is demonstrated to estimate quality with sufficiently high accuracy.
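The core intuition of accounting for low-quality tiles can be sketched as a viewport-weighted average. This toy example is not the P.1203 extension itself; it only shows how per-tile quality might be combined using the fraction of viewing time each tile spends in the viewport (all values are made up).

```python
# Toy sketch (not the actual P.1203 extension): aggregate per-tile
# quality into one score, weighting each tile by the fraction of
# viewing time it falls inside the user's viewport.

def viewport_weighted_quality(tile_quality, viewport_share):
    """tile_quality   -- per-tile quality scores (e.g. on a 1-5 scale)
    viewport_share -- time share each tile was visible; normalized
                      internally so the shares sum to 1.
    """
    total = sum(viewport_share)
    return sum(q * w / total for q, w in zip(tile_quality, viewport_share))

# Example: two high-bitrate tiles dominate the viewport, two
# low-bitrate peripheral tiles drag the score down only slightly.
quality = [4.5, 4.5, 2.0, 2.0]
share   = [0.6, 0.3, 0.08, 0.02]
print(viewport_weighted_quality(quality, share))
```

A real monitoring model would additionally fold in the temporal effects the abstract mentions, such as stalling from the client's buffer and quality switches over time.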
In recent years, with the introduction of powerful HMDs such as the Oculus Rift and HTC Vive Pro, the QoE that can be achieved with VR/360° videos has increased substantially. Unfortunately, no standardized guidelines, methodologies, or protocols exist for conducting tests with human subjects to evaluate the quality of 360° videos. In this paper, we present a set of test protocols for the evaluation of the quality of 360° videos using HMDs. To this end, we review the state of the art in the assessment of 360° videos and summarize its results. We also summarize the methodological approaches taken and the results obtained in different subjective experiments at our lab under different contextual conditions. In the first two experiments, 1a and 1b, the performance of two subjective test methods, the Double-Stimulus Impairment Scale (DSIS) and Modified Absolute Category Rating (M-ACR), was compared under different contextual conditions. In experiment 2, the performance of three subjective test methods, DSIS, M-ACR, and Absolute Category Rating (ACR), was compared, this time without varying the contextual conditions. Building on the reliability and general applicability of the procedure across different tests, a methodological framework for 360° video quality assessment is presented in this paper. Besides video or media quality judgments, the procedure comprises the assessment of presence and simulator sickness, for which different methods were compared. Further, the accompanying head-rotation data can be used to analyze both content- and quality-related behavioural viewing aspects. Based on the results, the implications of different contextual settings are discussed.
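Whatever the rating method (DSIS, M-ACR, or ACR), the per-stimulus analysis typically reduces the raw votes to a mean opinion score with a confidence interval. The sketch below shows that standard computation on made-up ratings; it is not the paper's analysis code.

```python
# Hypothetical sketch: mean opinion score (MOS) with a 95% confidence
# interval per stimulus, as commonly reported for ACR/DSIS-style tests.
import math

def mos_with_ci(ratings, z=1.96):
    """Return (MOS, half-width of the ~95% confidence interval)."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    ci = z * math.sqrt(var / n)
    return mean, ci

# Example: 1-5 ratings from a panel of subjects (made-up data)
ratings = [4, 5, 4, 3, 4, 5, 4, 4]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```

For the small panels typical of HMD studies, a Student's t quantile for n-1 degrees of freedom would be more accurate than the fixed z = 1.96 used here for brevity.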