Video DeepFakes are fake media created with Deep Learning (DL) that manipulate a person's expression or identity. Most current DeepFake detection methods analyze each frame independently, ignoring inconsistencies and unnatural movements between frames. Some newer methods employ optical flow models to capture this temporal aspect, but they are computationally expensive. In contrast, we propose using the related but often ignored Motion Vectors (MVs) and Information Masks (IMs) from the H.264 video codec to detect temporal inconsistencies in DeepFakes. Our experiments show that this approach is effective and adds minimal computational cost compared with per-frame RGB-only methods. This could lead to new real-time, temporally-aware DeepFake detection methods for video calls and streaming.
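To make the idea concrete, here is a minimal sketch of one way codec motion vectors could expose temporal inconsistency: given the per-macroblock MV field that an H.264 decoder already computes for each frame, erratic frame-to-frame changes in that field can be summarized as a simple score. The function name, the array layout, and the scoring rule are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def mv_inconsistency_score(mv_fields):
    """Mean absolute frame-to-frame change of an H.264 motion-vector field.

    mv_fields: array of shape (T, H, W, 2) holding the (dx, dy) motion
    vector of each macroblock for T decoded frames. This layout is an
    assumption for illustration; a real pipeline would read the MVs
    exported by the decoder.

    Smooth, natural motion yields small scores; erratic, frame-to-frame
    jitter (one symptom of temporal DeepFake artifacts) yields large ones.
    """
    diffs = np.abs(np.diff(mv_fields, axis=0))  # (T-1, H, W, 2)
    return diffs.mean(axis=(1, 2, 3))           # one score per frame transition
```

Because the MVs are a by-product of decoding, computing such a score costs only a few array operations per frame, which is the source of the "minimal computational cost" claim relative to running a separate optical flow network.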
Efficient compression plays a significant role in Light Field imaging technology because of the huge amount of data required to represent a light field. Light field images are commonly compressed as pseudo-videos, using video encoders that employ different coding strategies. In this paper, different video encoder implementations, including HM, VTM, x265, xvc, VP9, and AV1, are analysed and compared in terms of coding efficiency and encoder/decoder time complexity.
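The pseudo-video approach amounts to linearizing the grid of sub-aperture views into a frame sequence that a standard video encoder can exploit for inter-frame prediction. A common choice is a serpentine (snake) scan, so that consecutive frames are always adjacent views. The sketch below illustrates only this reordering step, with assumed function names and array shapes; the actual encoding would then be done by one of the codecs listed above.

```python
import numpy as np

def serpentine_order(rows, cols):
    """(row, col) indices of the sub-aperture grid in snake-scan order,
    so each pseudo-video frame is spatially adjacent to the previous one."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order

def lightfield_to_pseudo_video(lf):
    """Flatten a light field of shape (rows, cols, H, W, C) into a
    (rows*cols, H, W, C) frame sequence ready for a video encoder."""
    rows, cols = lf.shape[0], lf.shape[1]
    return np.stack([lf[r, c] for r, c in serpentine_order(rows, cols)])
```

Keeping adjacent views adjacent in time lets the encoder's motion compensation model the small disparity between views, which is what makes video codecs effective on light field data.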