Open-source intelligence (OSINT) technologies are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations [22]. These OSINT technologies use sophisticated techniques and special tools to analyze the continually growing sources of information efficiently [17]. There is a great worldwide need for professional training and further education in this field. Having already presented the overall structure of a professional training concept for this field in a previous paper [25], this series of articles offers individual further-training modules for standard, state-of-the-art OSINT tools used worldwide. The modules presented here are suitable for a professional training program as well as for an OSINT course in a bachelor's or master's program in computer science or cybersecurity at a university. Part 1 of this series of four articles introduces the OSINT tool RiskIQ PassiveTotal [26] and explains its application possibilities using concrete examples. Part 2 explains the OSINT tool Censys [27]. This part 3 deals with Maltego [28], and part 4 compares the three tools from parts 1-3 [29].
Due to the use of 3D content in various applications, Stereo Image Quality Assessment (SIQA) has attracted increasing attention as a means of ensuring a good viewing experience for users. Several methods have thus been proposed in the literature, with deep learning-based methods showing a clear improvement. This paper introduces a new deep learning-based no-reference SIQA method using the cyclopean view hypothesis and human visual attention. First, the cyclopean image is built while considering the presence of binocular rivalry, which covers the asymmetric distortion case. Second, the saliency map is computed taking the depth information into account; it is used to extract patches from the most perceptually relevant regions. Finally, a modified version of the pre-trained VGG-19 is fine-tuned and used to predict the quality score from the selected patches. The performance of the proposed metric has been evaluated on the 3D LIVE Phase I and Phase II databases. Compared with state-of-the-art metrics, our method gives better results.
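To make the pipeline above concrete, the following is a minimal sketch in PyTorch of its three stages: cyclopean synthesis, saliency-guided patch selection, and a fine-tuned VGG-19 regression head. The rivalry weights, patch size, number of patches, and head architecture are illustrative assumptions, not the exact settings of the paper.

```python
# Minimal sketch (PyTorch) of the described pipeline; the concrete settings
# below (patch size, number of patches, regression head) are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def cyclopean_image(left, right, w_left, w_right):
    """Weighted combination of the left/right views; w_left and w_right are
    per-pixel rivalry weights (e.g., from local energy) that sum to one."""
    return w_left * left + w_right * right

def select_salient_patches(image, saliency, patch_size=32, num_patches=64):
    """Split the image into non-overlapping patches and keep those with the
    highest mean (depth-weighted) saliency."""
    _, H, W = image.shape
    patches, scores = [], []
    for y in range(0, H - patch_size + 1, patch_size):
        for x in range(0, W - patch_size + 1, patch_size):
            patches.append(image[:, y:y + patch_size, x:x + patch_size])
            scores.append(saliency[y:y + patch_size, x:x + patch_size].mean())
    top = torch.topk(torch.stack(scores), k=min(num_patches, len(scores))).indices
    return torch.stack([patches[i] for i in top.tolist()])

class VGG19QualityRegressor(nn.Module):
    """Pre-trained VGG-19 whose classifier is replaced by a one-output
    regression head; the whole network is fine-tuned on quality scores."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.features = vgg.features         # convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(512, 128), nn.ReLU(),
                                  nn.Linear(128, 1))

    def forward(self, patches):
        # One score per patch, averaged to an image-level quality score.
        return self.head(self.pool(self.features(patches))).mean()
```

At inference time, the patches selected from the cyclopean image would be passed through the network in a single batch, and the averaged output would serve as the predicted quality score.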
With the expanding use of stereoscopic imaging in 3D applications, no-reference perceptual quality evaluation has become important for providing a good viewing experience. The effect of quality distortion is related to the scene's spatial details. Taking this into account, this paper introduces a blind stereoscopic image quality measure based on a synthesized cyclopean image and deep feature extraction. The proposed method relies on Human Visual System (HVS) modeling and quality-aware indicators. First, the cyclopean image is formed, taking into account the existence of binocular rivalry/suppression, which includes the asymmetric distortion case. Second, the cyclopean image is decomposed into four equal parts. Then, four Convolutional Neural Network (CNN) models are deployed to automatically extract quality feature sets. Finally, a feature bank is created from the four parts and mapped to a quality score using a Support Vector Regression (SVR) model. The well-known 3D LIVE Phase I and Phase II databases were used to evaluate the efficiency of our technique. Compared with state-of-the-art stereoscopic image quality metrics, the proposed method shows competitive results and achieves good performance.
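As an illustration of this second pipeline, here is a minimal sketch combining PyTorch feature extraction with scikit-learn's SVR. The choice of ResNet-18 as the per-part CNN and the SVR hyperparameters are assumptions made for the example; the paper's own networks and settings may differ.

```python
# Illustrative sketch: split the cyclopean image into four equal parts,
# extract a deep feature vector per part with one CNN each, concatenate the
# feature bank, and regress it to a quality score with SVR.
# Backbone (ResNet-18) and SVR hyperparameters are assumptions.
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVR

def make_backbone():
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = torch.nn.Identity()  # keep the 512-d globally pooled features
    return net.eval()

backbones = [make_backbone() for _ in range(4)]  # one CNN per image part

def four_parts(image):
    """Decompose a CxHxW cyclopean image into its four equal quadrants."""
    _, H, W = image.shape
    return [image[:, :H // 2, :W // 2], image[:, :H // 2, W // 2:],
            image[:, H // 2:, :W // 2], image[:, H // 2:, W // 2:]]

@torch.no_grad()
def feature_bank(image):
    """Concatenate the per-part CNN features into one 4 x 512 feature bank."""
    feats = [net(part.unsqueeze(0)).squeeze(0)
             for net, part in zip(backbones, four_parts(image))]
    return torch.cat(feats).numpy()

def train_svr(cyclopean_images, mos_scores):
    """Map the feature banks of the training images to subjective scores."""
    X = np.stack([feature_bank(img) for img in cyclopean_images])
    svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
    svr.fit(X, mos_scores)
    return svr
```

At test time, the trained SVR would predict the score of an unseen image from its feature bank, e.g. `svr.predict(feature_bank(img)[None, :])`.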