Proceedings Paper
Volume: 36 | Article ID: MWSF-335
Efficient Temporally-aware DeepFake Detection using H.264 Motion Vectors
DOI: 10.2352/EI.2024.36.4.MWSF-335 | Published Online: January 2024
Abstract

Video DeepFakes are fake media created with Deep Learning (DL) that manipulate a person’s expression or identity. Most current DeepFake detection methods analyze each frame independently, ignoring inconsistencies and unnatural movements between frames. Some newer methods employ optical flow models to capture this temporal aspect, but they are computationally expensive. In contrast, we propose using the related but often ignored Motion Vectors (MVs) and Information Masks (IMs) from the H.264 video codec to detect temporal inconsistencies in DeepFakes. Our experiments show that this approach is effective and incurs minimal computational cost compared with per-frame RGB-only methods. This could lead to new, real-time temporally-aware DeepFake detection methods for video calls and streaming.
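As a practical illustration of the codec-level signal the abstract refers to, the sketch below reads H.264 motion vectors straight from the decoder using the PyAV library (a Python binding to FFmpeg). This is a minimal sketch, not the authors' pipeline: it covers only MV extraction, not the Information Masks or the detection model, and the file name and function name are hypothetical.

# Minimal sketch, assuming the PyAV library ("pip install av") as the FFmpeg
# binding; this is not necessarily the tooling used by the paper's authors.
# It shows how H.264 motion vectors can be read as decoder side data, without
# computing optical flow. "video.mp4" is a placeholder file name.
import av
import numpy as np

def h264_motion_vectors(path):
    """Yield an (N, 2) array of per-block displacements for each decoded frame."""
    with av.open(path) as container:
        stream = container.streams.video[0]
        # Ask the FFmpeg H.264 decoder to export motion vectors as frame side data.
        stream.codec_context.options = {"flags2": "+export_mvs"}
        for frame in container.decode(stream):
            mvs = frame.side_data.get("MOTION_VECTORS")
            if mvs is None:
                # Intra-coded frames (e.g. IDR frames) carry no motion vectors.
                yield np.zeros((0, 2), dtype=np.float32)
                continue
            # Each vector stores source and destination block positions in pixels.
            yield np.array(
                [(mv.dst_x - mv.src_x, mv.dst_y - mv.src_y) for mv in mvs],
                dtype=np.float32,
            )

if __name__ == "__main__":
    for i, disp in enumerate(h264_motion_vectors("video.mp4")):
        mag = np.linalg.norm(disp, axis=1).mean() if len(disp) else 0.0
        print(f"frame {i}: {len(disp)} vectors, mean displacement {mag:.2f} px")

Because the vectors are produced by the decoder as a by-product of normal playback, collecting them adds essentially no cost beyond decoding, which is the efficiency argument made in the abstract.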

  Cite this article 

Peter Grönquist, Yufan Ren, Qingyi He, Alessio Verardo, Sabine Süsstrunk, "Efficient Temporally-aware DeepFake Detection using H.264 Motion Vectors," in Electronic Imaging, 2024, pp. 335-1 - 335-9, https://doi.org/10.2352/EI.2024.36.4.MWSF-335

  Copyright statement 
Copyright 2024. This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T, 7003 Kilworth Lane, Springfield, VA 22151 USA