Accident detection is challenging because frames contain varied anomalies, occlusions, and objects that change over time. This paper therefore detects traffic accidents by combining Object Tracking (OT) with image generation using GAN variants built on skip, residual, and attention connections. Background removal is first applied to reduce background variation in each frame. YOLO-R then detects objects, and DeepSort tracks them across frames. Finally, a distance error metric and an adversarial error, computed with a Kalman filter and the GAN respectively, are used to decide whether an accident has occurred in the video.
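To make the decision step concrete, the sketch below shows one plausible way to combine the two errors: a constant-velocity Kalman filter supplies the distance (innovation) error for a tracked object centroid, while a placeholder value stands in for the GAN's adversarial/reconstruction error. The weighted fusion, weights, and threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' implementation): fusing a Kalman-filter
# distance error with a GAN adversarial error to flag a possible accident.
# Weights and threshold are hypothetical.
import numpy as np

class ConstantVelocityKalman:
    """2-D constant-velocity Kalman filter over a track's centroid."""
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)   # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                             # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)        # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)        # we observe (x, y)
        self.Q = np.eye(4) * 0.01                             # process noise
        self.R = np.eye(2) * 1.0                              # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.H @ self.x                                # predicted centroid

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return np.linalg.norm(y)                              # distance error

def accident_score(distance_error, adversarial_error,
                   w_dist=0.5, w_adv=0.5, threshold=5.0):
    """Weighted fusion of the two errors; weights and threshold are illustrative."""
    score = w_dist * distance_error + w_adv * adversarial_error
    return score, score > threshold

if __name__ == "__main__":
    kf = ConstantVelocityKalman(x0=100.0, y0=50.0)
    # Centroids from an upstream detector/tracker (e.g. YOLO-R detections
    # associated by DeepSort); the jump in the last frame mimics an abrupt,
    # collision-like motion.
    detections = [(102, 51), (104, 52), (106, 53), (140, 80)]
    for z in detections:
        kf.predict()
        dist_err = kf.update(z)
        adv_err = 0.2  # placeholder for the GAN adversarial/reconstruction error
        score, flagged = accident_score(dist_err, adv_err)
        print(f"detection={z} distance_error={dist_err:.2f} "
              f"score={score:.2f} accident={flagged}")
```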
Autonomous robots and self-driving vehicles require agents to learn and maintain accurate maps for safe and reliable operation. We use a variant of pose-graph Simultaneous Localization and Mapping (SLAM) to integrate multiple sensors for autonomous navigation in an urban environment. Our method efficiently and accurately localizes the agent against a stack of maps generated from different sensors over different periods of time. To incorporate a priori localization data, we account for discrepancies between LiDAR observations and publicly available building geometry. We fuse data from heterogeneous sensor modalities to increase invariance to dynamic environmental factors such as weather, luminance, and occlusions. To discriminate traversable terrain, we employ a deep segmentation network whose predictions increase the confidence of a LiDAR-generated cost map. Path planning is performed with the Timed-Elastic-Band algorithm on the persistent map built through SLAM. We evaluate our method under varying environmental conditions on a large university campus and demonstrate the efficacy of the sensor and map fusion.
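As an illustration of the map-fusion idea, the sketch below blends a segmentation network's traversability probabilities into a LiDAR-generated cost map so that segmentation raises confidence in free space and reinforces obstacles. The grid size, weighting scheme, and thresholds are assumptions for exposition, not the paper's actual pipeline.

```python
# Illustrative sketch (not the paper's implementation): fusing segmentation
# traversability probabilities into a LiDAR-generated cost map.
# Grid size, weights, and thresholds are hypothetical.
import numpy as np

def fuse_costmaps(lidar_cost, seg_traversable_prob, seg_weight=0.4):
    """
    lidar_cost:           HxW array in [0, 1], 1 = lethal obstacle from LiDAR.
    seg_traversable_prob: HxW array in [0, 1], probability a cell is drivable
                          terrain according to the segmentation network.
    Returns a fused cost map where segmentation lowers cost in likely free
    space and raises it where terrain looks non-traversable.
    """
    seg_cost = 1.0 - seg_traversable_prob             # convert probability to cost
    fused = (1.0 - seg_weight) * lidar_cost + seg_weight * seg_cost
    # Never let segmentation erase a confident LiDAR obstacle.
    fused = np.maximum(fused, np.where(lidar_cost > 0.9, lidar_cost, 0.0))
    return np.clip(fused, 0.0, 1.0)

if __name__ == "__main__":
    H, W = 8, 8
    lidar_cost = np.zeros((H, W))
    lidar_cost[3, 3] = 1.0                             # confident LiDAR obstacle
    seg_prob = np.full((H, W), 0.9)                    # mostly drivable terrain
    seg_prob[3, :] = 0.1                               # segmentation flags a curb row
    fused = fuse_costmaps(lidar_cost, seg_prob)
    print(np.round(fused, 2))                          # fused cost map for a planner
```

A fused map of this kind would then serve as the cost layer that a local planner such as the Timed-Elastic-Band algorithm optimizes trajectories against.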