Understanding complex events in unstructured video, such as the scoring of a goal in a football game, is an extremely challenging task due to the dynamics, complexity, and variation of video sequences. In this work, we attack this problem by exploiting the capabilities of the recently developed deep learning framework. We consider independently encoding spatial and temporal information via convolutional neural networks and fusing the resulting features via regularized autoencoders. To demonstrate the capabilities of the proposed scheme, a new dataset is compiled, composed of goal and no-goal sequences. Experimental results demonstrate that extremely high classification accuracy can be achieved from a dramatically limited number of examples by leveraging pre-trained models with fine-tuned fusion of spatio-temporal features.
Grigorios Tsagkatakis, Mustafa Jaber, and Panagiotis Tsakalides, "Goal!! Event detection in sports video," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Computer Vision Applications in Sports, 2017, pp. 15-20, https://doi.org/10.2352/ISSN.2470-1173.2017.16.CVAS-344
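The abstract above outlines the overall pipeline (pre-trained CNN streams for spatial and temporal information, fused by a regularized autoencoder and followed by a goal / no-goal classifier) without fixing the exact backbones, feature dimensions, or loss weighting. The following is only a minimal sketch of that kind of architecture, assuming ImageNet-pretrained ResNet-18 streams, stacked optical-flow inputs for the temporal branch, a 128-dimensional autoencoder code, and a joint cross-entropy plus reconstruction objective; none of these specific choices are stated in the abstract.

```python
# Sketch of a two-stream, autoencoder-fused goal/no-goal classifier.
# Backbone, code size, flow stacking, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


class TwoStreamGoalClassifier(nn.Module):
    def __init__(self, code_dim=128, flow_stack=10):
        super().__init__()
        # Spatial stream: one RGB frame -> 512-d feature (ResNet-18 minus its fc head).
        spatial = models.resnet18(pretrained=True)  # downloads ImageNet weights
        self.spatial = nn.Sequential(*list(spatial.children())[:-1])
        # Temporal stream: 2*flow_stack stacked optical-flow channels -> 512-d feature.
        temporal = models.resnet18(pretrained=True)
        temporal.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        self.temporal = nn.Sequential(*list(temporal.children())[:-1])
        # Regularized autoencoder fusing the concatenated 1024-d spatio-temporal feature.
        self.encoder = nn.Sequential(nn.Linear(1024, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, 1024)
        # Binary goal / no-goal classifier operating on the fused code.
        self.classifier = nn.Linear(code_dim, 2)

    def forward(self, rgb, flow):
        f_s = self.spatial(rgb).flatten(1)      # (B, 512) appearance feature
        f_t = self.temporal(flow).flatten(1)    # (B, 512) motion feature
        feats = torch.cat([f_s, f_t], dim=1)    # (B, 1024) spatio-temporal feature
        code = self.encoder(feats)              # fused representation
        recon = self.decoder(code)              # reconstruction for the AE term
        logits = self.classifier(code)
        return logits, recon, feats


# Toy usage with random tensors standing in for a frame and a stacked-flow clip.
model = TwoStreamGoalClassifier()
rgb = torch.randn(4, 3, 224, 224)
flow = torch.randn(4, 20, 224, 224)
logits, recon, feats = model(rgb, flow)
labels = torch.tensor([1, 0, 1, 0])
# Joint objective: classification loss plus a reconstruction term; weight decay on the
# optimizer would supply the additional regularization in this sketch.
loss = nn.functional.cross_entropy(logits, labels) + nn.functional.mse_loss(recon, feats)
```

In a fine-tuning regime such as the one the abstract describes, the pretrained convolutional streams could be frozen and only the autoencoder and classifier trained, which is one plausible way to learn from a very limited number of labeled goal / no-goal examples.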