Object detection in varying traffic scenes presents significant challenges in real-world applications. Thermal imagery is widely acknowledged as a beneficial complement to RGB-based detection, especially under suboptimal lighting conditions, yet harnessing the combined potential of RGB and thermal images remains a formidable task. We tackle this by adaptively fusing information across the two modalities under illumination guidance. Specifically, we propose the illumination-guided cross-modal attention transformer fusion (ICATF), a novel object detection framework that integrates features from RGB and thermal data. An illumination-guided module adapts features to the current lighting conditions, steering learning toward the most informative fusion. We further incorporate frequency-domain convolutions in the network's backbone to capture spectral context and derive more nuanced features. Finally, the differential modality features are fused for multispectral pedestrian detection using illumination-guided feature weights and a transformer fusion architecture. Experimental results on multispectral detection datasets, including FLIR-aligned, LLVIP, and KAIST, show that our method achieves state-of-the-art performance.
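To make the fusion idea concrete, the following is a minimal PyTorch sketch of illumination-guided weighting followed by transformer-based cross-modal fusion. The module names (`IlluminationGate`, `CrossModalFusion`), feature sizes, and layer counts are illustrative assumptions, not the authors' released ICATF implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationGate(nn.Module):
    """Predicts per-modality fusion weights from the RGB image (assumed design)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(hidden, 2),
        )

    def forward(self, rgb_img):
        # Softmax so the RGB and thermal weights sum to 1 per sample.
        return F.softmax(self.net(rgb_img), dim=1)

class CrossModalFusion(nn.Module):
    """Weights modality features by illumination, then fuses them with a
    transformer encoder over the concatenated token sequences."""
    def __init__(self, channels=256, heads=8):
        super().__init__()
        self.gate = IlluminationGate()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, rgb_img, rgb_feat, thermal_feat):
        # rgb_feat, thermal_feat: (B, C, H, W) backbone feature maps.
        w = self.gate(rgb_img)                                   # (B, 2)
        b, c, h, wd = rgb_feat.shape
        rgb_tok = (w[:, 0, None, None, None] * rgb_feat).flatten(2).transpose(1, 2)
        thr_tok = (w[:, 1, None, None, None] * thermal_feat).flatten(2).transpose(1, 2)
        tokens = torch.cat([rgb_tok, thr_tok], dim=1)            # (B, 2*H*W, C)
        fused = self.fusion(tokens)
        # Sum the two modality streams back into one (B, C, H, W) map.
        fused = fused[:, :h * wd] + fused[:, h * wd:]
        return fused.transpose(1, 2).reshape(b, c, h, wd)

if __name__ == "__main__":
    model = CrossModalFusion()
    rgb_img = torch.randn(2, 3, 128, 128)
    rgb_feat = torch.randn(2, 256, 16, 16)
    thermal_feat = torch.randn(2, 256, 16, 16)
    print(model(rgb_img, rgb_feat, thermal_feat).shape)  # torch.Size([2, 256, 16, 16])
```

The fused map can then feed any standard detection head; the illumination gate is what lets the network down-weight the RGB stream at night and the thermal stream in well-lit scenes.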
The ability to synthesize convincing human speech has become easier due to the availability of speech generation tools. This necessitates the development of forensic methods that can authenticate and attribute speech signals. In this paper, we examine a speech attribution task, which identifies the origin of a speech signal. Our proposed method, the Synthetic Speech Attribution Transformer (SSAT), converts speech signals into mel spectrograms and uses a self-supervised pretrained transformer for attribution. This transformer is pretrained on two large publicly available audio datasets: AudioSet and LibriSpeech. We fine-tune the pretrained transformer on three speech attribution datasets: the DARPA SemaFor Audio Attribution dataset, the ASVspoof2019 dataset, and the 2022 IEEE SP Cup dataset. SSAT achieves high closed-set accuracy on all three (99.8% on the ASVspoof2019 dataset, 96.3% on the SP Cup dataset, and 93.4% on the DARPA SemaFor Audio Attribution dataset). We also investigate the method's ability to generalize to unknown speech generation methods (the open-set scenario), where SSAT again performs well, achieving an open-set accuracy of 90.2% on the ASVspoof2019 dataset and 88.45% on the DARPA SemaFor Audio Attribution dataset. Finally, we show that our approach is robust to the compression rates typically applied by YouTube to speech signals.
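The pipeline of mel-spectrogram extraction followed by transformer classification can be sketched as below. This is a simplified stand-in, assuming a 16 kHz mono input, a frame-per-token encoder, and a hypothetical class count; the paper instead fine-tunes a self-supervised audio transformer pretrained on AudioSet and LibriSpeech rather than training from scratch.

```python
import torch
import torch.nn as nn
import torchaudio

class SpeechAttributor(nn.Module):
    def __init__(self, n_classes=10, n_mels=128, d_model=256, n_layers=4):
        super().__init__()
        # Waveform -> log-mel spectrogram front end.
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=16000, n_fft=1024, hop_length=256, n_mels=n_mels)
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        # Each spectrogram frame (n_mels bins) becomes one transformer token.
        self.proj = nn.Linear(n_mels, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, waveform):
        # waveform: (B, T) mono audio at 16 kHz.
        spec = self.to_db(self.melspec(waveform))    # (B, n_mels, frames)
        tokens = self.proj(spec.transpose(1, 2))     # (B, frames, d_model)
        cls = self.cls.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(out[:, 0])                  # one logit per candidate generator

if __name__ == "__main__":
    model = SpeechAttributor(n_classes=10)
    audio = torch.randn(2, 16000 * 4)                # two 4-second clips
    print(model(audio).shape)                        # torch.Size([2, 10])
```

In a closed-set setting the argmax over the logits names the speech generator; an open-set variant would additionally threshold the confidence to reject signals from unseen generation methods.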