
To address the low accuracy of 3D models built from images captured by unmanned aerial vehicles (UAVs), the authors propose an enhanced 3D reconstruction model of a mountain in Yuanmou based on an improved Structure from Motion–Multiview Stereo (SFM–MVS) algorithm. In converting 2D image data into 3D data, the key challenges lie in feature point extraction and matching. The authors introduce an optimized Speeded Up Robust Features (SURF) algorithm that combines the SURF descriptor with a fast feature point detector. Using the Laplace operator to refine and weight the extracted feature points, and pairing them with the robust SURF descriptor, improves matching speed and accuracy simultaneously. This mitigates the problems of excessive image data and the low accuracy and efficiency of 3D reconstruction in UAV applications. Experimental results demonstrate that the proposed method extracts more feature points, matches them faster, and is significantly more accurate than both the original SURF algorithm and the traditional Scale-Invariant Feature Transform algorithm. Compared to the unoptimized SFM–MVS algorithm, the optimized SURF-based 3D reconstruction improves accuracy by approximately 44.68% and processing speed by 30%. Additionally, to evaluate UAV path planning performance on complex terrain, the authors first employ the optimized SURF-based 3D reconstruction method to precisely reconstruct the terrain map of the target area; this provides high-precision data for subsequent testing of path planning algorithms. Three classical path planning algorithms (Ant Colony Optimization, A*, and Rapidly-exploring Random Tree) are then compared on UAV path planning over complex terrain.
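
As a rough illustration of the feature pipeline summarized above, the sketch below detects keypoints with a fast corner detector, re-weights them by their Laplacian response, and describes them with SURF before ratio-test matching. The OpenCV calls, thresholds, and the Laplacian weighting heuristic are assumptions for illustration, not the authors' exact implementation; SURF in particular requires opencv-contrib-python built with the nonfree modules.

```python
import cv2
import numpy as np

def fast_surf_features(gray, keep=2000):
    """Hypothetical FAST-detect / SURF-describe pipeline (illustrative)."""
    # Detect candidate keypoints quickly with the FAST corner detector.
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps = fast.detect(gray, None)

    # Assumed re-weighting step: score each keypoint by the local
    # Laplacian magnitude and keep only the strongest responses.
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    kps = sorted(kps, key=lambda k: lap[int(k.pt[1]), int(k.pt[0])],
                 reverse=True)[:keep]

    # Describe the surviving points with the SURF descriptor
    # (needs opencv-contrib-python compiled with nonfree modules).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kps, desc = surf.compute(gray, kps)
    return kps, desc

def ratio_match(d1, d2, ratio=0.75):
    # Brute-force matching with Lowe's ratio test to reject ambiguous pairs.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    return [m for m, n in bf.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]
```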

A fully automated colorization model that integrates image segmentation features to enhance both the accuracy and diversity of colorization is proposed. The model employs a multipath architecture, with each path designed to address a specific objective in processing grayscale input images. The context path uses a pretrained ResNet50 model to identify object classes, while the spatial path determines the locations of these objects. ResNet50 is a 50-layer deep convolutional neural network (CNN) that uses skip connections to address the challenges of training deep models and is widely applied in image classification and feature extraction. The outputs from both paths are fused and fed into the colorization network to ensure precise representation of image structures and to prevent color spillover across object boundaries. The colorization network is designed to handle high-resolution inputs, enabling accurate colorization of small objects and enhancing overall color diversity. The proposed model demonstrates robust performance even when trained on small datasets. Comparative evaluations with CNN-based and diffusion-based colorization approaches show that the proposed model significantly improves colorization quality.
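
A minimal PyTorch sketch of the two-path idea follows: a pretrained ResNet50 context path and a shallow spatial path are fused before a head that predicts the ab chrominance channels from the L channel. The layer sizes, the grayscale stem replacement, and the fusion resolution are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class TwoPathColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
        # Swap the stem for a 1-channel input (loses its pretrained weights).
        backbone.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)
        self.context = nn.Sequential(*list(backbone.children())[:-2])  # -> 2048 ch, H/32
        self.spatial = nn.Sequential(                 # shallow path, keeps H/4 resolution
            nn.Conv2d(1, 64, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        self.head = nn.Sequential(                    # predicts the 2 ab channels
            nn.Conv2d(2048 + 128, 256, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(256, 2, 1))

    def forward(self, L):                             # L: (B, 1, H, W) luminance
        ctx = self.context(L)                         # class-level context at H/32
        ctx = nn.functional.interpolate(ctx, scale_factor=8)  # upsample to H/4
        sp = self.spatial(L)                          # object locations at H/4
        ab = self.head(torch.cat([ctx, sp], dim=1))   # fuse both paths
        return nn.functional.interpolate(ab, scale_factor=4)  # back to H x W
```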

A recent work proposed a methodology to effectively enhance or suppress perceived bumpiness in digital images. We hypothesized that this manipulation may also affect perceived translucency, because the two attributes rely on similar image cues. In this work, we test this hypothesis experimentally. Psychophysical experiments revealed a correlation between perceived bumpiness and perceived translucency in the processed images. This not only has implications for digitally editing the bumpiness of a given material but can also be exploited as a translucency editing tool, provided the method does not introduce artifacts. To check this, we evaluated the naturalness and quality of the processed images using a subjective user study and objective image quality metrics, respectively.

Accurate segmentation of brain tumors is essential in planning neurosurgical treatments, as it can significantly enhance their effectiveness. In this paper, the authors propose a modified Residual U-shaped network (ResUnet) based on multimodal fusion and a Generative Adversarial Network for multimodal brain tumor Magnetic Resonance Imaging segmentation. First, they propose a three-path structure for the encoding stage to address the inadequate utilization of multimodal features, which otherwise leads to suboptimal segmentation results. The structure comprises three components: the T1 path, the T1ce path, and a fusion path combining the FLAIR and T2 modalities. They then use average pooling to integrate global information from the T1 path into both the T1ce path and the fusion path, enhancing feature fusion across modalities and strengthening the robustness of the network. Subsequently, the features from the T1ce path and the fusion path are connected to the decoding stage through skip connections to improve the utilization of model features and the segmentation accuracy. Finally, the authors leverage the Deep Convolutional Generative Adversarial Network (DCGAN) to further enhance the accuracy of the network. They improve the loss function of the DCGAN by introducing an adaptive coefficient, which reduces the loss value in the early stages of model training and increases it in the later stages. Experimental results demonstrate that the proposed method effectively improves segmentation accuracy compared to related methods.
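
The adaptive coefficient can be pictured as a schedule that keeps the adversarial term small early in training and grows it later. The sigmoid ramp below is an assumed form chosen for illustration; the paper's exact coefficient may differ.

```python
import math

def adaptive_coeff(epoch, total_epochs, k=10.0):
    # Assumed schedule: a sigmoid ramp rising from ~0 toward 1 over training.
    t = epoch / total_epochs
    return 1.0 / (1.0 + math.exp(-k * (t - 0.5)))

def total_loss(seg_loss, adv_loss, epoch, total_epochs):
    # Down-weight the adversarial term early, emphasize it in later stages.
    lam = adaptive_coeff(epoch, total_epochs)
    return seg_loss + lam * adv_loss
```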

Spinal CT image segmentation is actively researched in the field of medical image processing. However, due to factors such as high variability among spinal CT slices and image artifacts, automatic and accurate spinal CT segmentation is extremely challenging. To address these issues, we propose a cascaded U-shaped framework that combines multi-scale features and attention mechanisms (MA-WNet) for the automatic segmentation of spinal CT images. Specifically, our framework combines two U-shaped networks to perform coarse and fine segmentation of spinal CT images in turn. Within each U-shaped network, we add multi-scale feature extraction modules to both the encoding and decoding phases to handle variations in spine shape across slices. Additionally, various attention mechanisms are embedded to mitigate the effects of image artifacts and irrelevant information on segmentation outcomes. Experimental results show that our proposed method achieves average segmentation Dice similarity coefficients of 94.53% and 91.38% on the CSI 2014 and VerSe 2020 datasets, respectively, indicating highly accurate segmentation performance that is valuable for potential clinical applications.
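
A minimal sketch of the coarse-to-fine cascade is shown below: the first U-shaped network produces a rough probability map that is concatenated with the input slice to condition the second, refining network. The plain nn.Module placeholders stand in for the paper's multi-scale and attention-equipped networks, which are not reproduced here.

```python
import torch
import torch.nn as nn

class Cascade(nn.Module):
    def __init__(self, unet_coarse: nn.Module, unet_fine: nn.Module):
        super().__init__()
        # Two U-shaped stages; any segmentation backbones can be plugged in.
        self.coarse, self.fine = unet_coarse, unet_fine

    def forward(self, ct):                      # ct: (B, 1, H, W) CT slice
        rough = torch.sigmoid(self.coarse(ct))  # coarse probability map
        x = torch.cat([ct, rough], dim=1)       # condition the fine stage on it
        return self.fine(x)                     # refined segmentation logits
```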

Sparse representation is a key component of shape registration, compression, and regeneration. Most existing models generate sparse representations by detecting salient points directly from input point clouds, which makes them susceptible to noise, deformations, and outliers. The authors propose a novel alternative that combines global distribution probabilities with local contextual features to learn semantic structural consistency and adaptively generate sparse structural representations for arbitrary 3D point clouds. First, they construct a 3D variational auto-encoder network to learn an optimal latent space aligned with multiple anisotropic Gaussian mixture models (GMMs), then combine the GMM parameters with contextual properties to construct enhanced point features that effectively resist noise and geometric deformations and better reveal the underlying semantic structural consistency. Second, they design a weight scoring unit that computes a contribution matrix to the semantic structure and adaptively generates sparse structural points. Finally, they enforce semantic correspondence and structural consistency so that the generated structural points are more discriminative in both the feature and distribution domains. Extensive experiments on shape benchmarks show that the proposed network outperforms state-of-the-art methods, at lower cost and with stronger performance in shape segmentation and classification.
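
The weight scoring unit can be illustrated as follows: per-point features are scored against a fixed number of structural slots, and a softmax-normalized contribution matrix forms each sparse structural point as a convex combination of input coordinates. The feature dimension and the number of structural points below are assumptions for the sketch, not the paper's settings.

```python
import torch
import torch.nn as nn

class WeightScoringUnit(nn.Module):
    def __init__(self, feat_dim=128, num_struct=32):
        super().__init__()
        # One score per structural slot for every input point.
        self.score = nn.Linear(feat_dim, num_struct)

    def forward(self, xyz, feats):
        # xyz: (B, N, 3) coordinates; feats: (B, N, C) enhanced point features
        logits = self.score(feats)                        # (B, N, M) raw scores
        contrib = torch.softmax(logits, dim=1)            # normalize over points
        # Each structural point is a convex combination of input coordinates.
        struct_pts = torch.einsum('bnm,bnc->bmc', contrib, xyz)  # (B, M, 3)
        return struct_pts, contrib
```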

Accurate traffic flow forecasting plays a crucial role in alleviating road congestion and optimizing traffic management. Although numerous effective models have been proposed to predict future traffic flow, most exhibit limitations in modeling spatiotemporal dependencies, especially in capturing multiscale spatiotemporal relationships. To address this, we propose a novel model called Spatiotemporal Augmented Interactive Learning and Temporal Attention (STAIL-TA) for traffic flow prediction, designed for dynamic and interactive adaptive modeling of the spatiotemporal features in traffic flow data. Specifically, we first design a feature augmentation layer that enhances the interaction of time-based features. Next, we introduce an interactive dynamic graph convolutional network, which uses an interactive learning strategy to capture the spatial and temporal characteristics of traffic data simultaneously. Additionally, a new dynamic graph generation method is used to build a dynamic graph convolutional block capable of capturing the spatial correlations that change dynamically within the traffic network. Finally, we construct a novel temporal attention mechanism that effectively leverages local contextual information and is specifically designed for transforming numerical sequence representations, enabling the model to better capture the dynamic temporal dependencies of traffic flow and thus facilitating long-term forecasting. Experimental results show that, compared to the strongest existing baseline, MRA-BGCN, the STAIL-TA model reduces the mean absolute error and root mean squared error on the PEMS-BAY dataset by 7.75% and 3.68% for 15-minute prediction, and by 5.59% and 2.72% for 30-minute prediction, respectively.
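
One common way to realize the dynamic graph generation step is to derive the adjacency matrix from learnable node embeddings modulated by the current input, as in the hedged sketch below; this follows the general pattern of adaptive-graph traffic models, and STAIL-TA's exact formulation may differ.

```python
import torch
import torch.nn as nn

class DynamicGraphConv(nn.Module):
    def __init__(self, num_nodes, in_dim, out_dim, emb_dim=16):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(num_nodes, emb_dim))  # static node embedding
        self.proj = nn.Linear(in_dim, emb_dim)                    # input-dependent shift
        self.theta = nn.Linear(in_dim, out_dim)                   # graph-conv weights

    def forward(self, x):
        # x: (B, N, in_dim) node signals at one time step
        node = self.emb.unsqueeze(0) + self.proj(x)        # dynamic embedding per batch
        # Adjacency from pairwise embedding similarity, row-normalized.
        adj = torch.softmax(torch.relu(node @ node.transpose(1, 2)), dim=-1)
        return torch.relu(adj @ self.theta(x))             # one graph-convolution step
```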

In modern life, with the explosive growth of video, image, and other data, using computers to automatically and efficiently classify and analyze human actions has become increasingly important. Action recognition, the problem of perceiving and understanding the behavioral state of objects in a dynamic scene, is a fundamental task in computer vision. However, analyzing a video with multiple objects or irregular shooting angles poses a significant challenge for existing action recognition algorithms. To address these problems, the authors propose a novel deep-learning-based method called SlowFast-Convolutional Block Attention Module (SlowFast-CBAM). Specifically, the training dataset is preprocessed using the YOLOX network, and individual frames of the action videos are fed separately into the slow and fast pathways. CBAM is then incorporated into both pathways to highlight features and dynamics in the surrounding environment. Subsequently, the authors establish a relationship between the convolutional attention mechanism and the SlowFast network, allowing the model to focus on the distinguishing features of objects and behaviors appearing before and after different actions, thereby enabling action detection and performer recognition. Experimental results demonstrate that this approach better emphasizes the features of action performers, leading to more accurate action labeling and improved action recognition accuracy.
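
For reference, the block below is a standard 2D CBAM (channel attention followed by spatial attention) of the kind inserted into the slow and fast pathways; it follows the published CBAM design rather than the authors' full SlowFast integration, which operates on spatiotemporal features.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(                  # shared channel-attention MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled statistics.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-pooled feature maps.
        s = torch.cat([x.mean(1, keepdim=True),
                       x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```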

Purpose: Gliomas, a common and often life-threatening type of brain tumor, pose significant challenges due to their complex pathology. The goal of this study is to introduce LU-net, a novel semantic segmentation algorithm designed to enhance the diagnosis and treatment planning of gliomas. This research seeks to address the limitations of traditional classification and detection methods by improving the accuracy and robustness of tumor boundary delineation in medical images. Methods: LU-net employs a multiscale image pyramid along with a Bayesian-inference-based multiscale probability search to capture complex tumor features. The algorithm is further strengthened by integrating a Conditional Random Field model, enabling more precise segmentation. The performance of LU-net is evaluated against existing segmentation algorithms using standard metrics such as accuracy, Intersection over Union (IoU), and Dice score. Results: The experimental results demonstrate that LU-net outperforms current segmentation algorithms in terms of both accuracy and robustness. Specifically, LU-net achieves an accuracy of 0.9953, an IoU of 0.667, and a Dice score of 0.566, effectively addressing the pathological heterogeneity and invasiveness of gliomas. These results highlight LU-net's superior ability to delineate tumor boundaries and improve diagnostic accuracy. Conclusion: LU-net sets a new benchmark in glioma lesion detection, offering a more effective approach for brain tumor segmentation. By improving the accuracy, reliability, and interpretability of brain tumor boundary delineation, LU-net enhances diagnostic and treatment strategies, benefiting patients, clinicians, and healthcare providers. Overall, this work is a significant contribution to the field of medical imaging and glioma diagnosis.
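
The multiscale pyramid can be illustrated by running a segmentation model at several scales and fusing the resulting probability maps, as in the sketch below; the scale set and averaging rule are assumptions, and LU-net's Bayesian probability search and CRF stage are not reproduced.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pyramid_predict(model, image, scales=(0.5, 1.0, 2.0)):
    # image: (B, C, H, W); returns fused class probabilities at full size.
    h, w = image.shape[-2:]
    probs = 0
    for s in scales:
        # Run the segmentation model on a rescaled copy of the input.
        x = F.interpolate(image, scale_factor=s, mode='bilinear',
                          align_corners=False)
        p = torch.softmax(model(x), dim=1)
        # Resample each prediction back to the original resolution and sum.
        probs = probs + F.interpolate(p, size=(h, w), mode='bilinear',
                                      align_corners=False)
    return probs / len(scales)   # simple average across pyramid levels
```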

The progressive fusion algorithm enhances image boundary smoothness, preserves details, and improves visual harmony. However, problems with multi-scale fusion and improper color space conversion can cause blurred details and color distortion, falling short of modern image processing standards for high-quality output. Therefore, a transparency-guided enhancement algorithm for progressively fused images based on generative adversarial learning is proposed. The method combines the wavelet transform with gradient-field fusion to enhance image details, preserve spectral features, and generate high-resolution true-color fused images. It extracts the image mean, standard deviation, and smoothness features and feeds them, along with the original image, into a generative adversarial network. The optimized design introduces global context modeling, transparency mask prediction, and a dual-discriminator structure to enhance the transparency of progressively fused images. Experimental results show that with the proposed method, the information entropy is 7.638, the blind image quality index is 24.331, the natural image quality evaluator score is 3.611, and the processing time is 0.036 s. All evaluation indices are excellent: the method effectively restores image detail and spatial color while avoiding artifacts, and the processed images exhibit high quality with complete detail preservation.
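
A hedged sketch of the wavelet-domain part of the fusion is given below: two registered source images are decomposed with a discrete wavelet transform, approximation coefficients are averaged, and the larger-magnitude detail coefficients are kept before reconstruction. It uses PyWavelets, and the coefficient rules are illustrative choices; the gradient-field and adversarial stages are not reproduced.

```python
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet='db2', level=2):
    # a, b: same-size grayscale arrays of two registered source images.
    ca = pywt.wavedec2(a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(float), wavelet, level=level)
    # Average the coarse approximation bands.
    fused = [(ca[0] + cb[0]) / 2.0]
    # For each level's (horizontal, vertical, diagonal) detail triple,
    # keep the coefficient with the larger magnitude.
    for da, db_ in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db_)))
    return pywt.waverec2(fused, wavelet)
```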