Machine learning and IIIF are popular topics in digitisation projects and the digital humanities. But are they genuinely practical, or just buzzwords? Are they exclusive technologies reserved for a few elite cultural and research institutions, or can everyday digitisation projects with less exquisite materials really benefit from them? The community around the open-source software Goobi shows what the reality of numerous digitisation projects actually looks like: what is no longer mere theory and can be used in everyday work without having to develop software yourself, and what added value can realistically be expected.
First-Person Videos (FPVs) captured by body-mounted cameras are usually too shaky to watch comfortably. Many approaches, both software-based and hardware-based, have been proposed for stabilization, most of them designed to maximize video stability. However, according to our previous work [1], FPVs need to be stabilized carefully to preserve their First-Person Motion Information (FPMI). To stabilize FPVs appropriately, we propose a new video stability estimator for FPVs, Viewing Experience under the "Central bias + Uniform" model (VECU), building on [1]. We first discuss stability estimators and their role in applications. Based on this discussion and our application target, we design a subjective test using real-scene videos with synthetic camera motions to help us improve the human perception model proposed in [1]. The proposed estimator VECU measures absolute stability, and our experimental results show that it has a good interval scale and outperforms existing stability estimators in predicting subjective scores.