
Zero-shot learning (ZSL) aims to classify unseen classes using semantic information transferred from seen classes. However, existing methods often struggle with visual variations within the same attribute, leading to noisy attribute features. We propose CRAE (Class Representation and Attribute Embedding), a novel ZSL method that combines class representation learning and attribute embedding learning for improved robustness and accuracy. CRAE introduces an adaptive softmax activation to normalize attribute feature maps, reducing noise and enhancing discriminability. It also employs attribute-level contrastive learning with hard sample selection, together with class-level contrastive learning, to improve classification performance. Experimental results on CUB, SUN, and AWA2 demonstrate that CRAE outperforms state-of-the-art methods, confirming its effectiveness for zero-shot image classification.
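The abstract does not give the exact form of CRAE's adaptive softmax activation; a minimal NumPy sketch of one plausible variant (a temperature-scaled softmax over spatial locations, where the temperature `tau` is the assumed adaptive parameter) illustrates how such a normalization turns a raw attribute response map into a peaked spatial distribution that suppresses noisy low-response locations:

```python
import numpy as np

def adaptive_softmax_map(feat, tau=1.0):
    """Normalize an attribute feature map with a temperature-scaled softmax
    over spatial locations (hypothetical form of CRAE's adaptive softmax).

    feat: (H, W) raw attribute response map.
    tau:  temperature; a smaller tau sharpens the map, further suppressing
          weak (likely noisy) responses relative to the attribute peak.
    """
    z = feat.reshape(-1) / tau
    z = z - z.max()                    # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax over all H*W locations
    return p.reshape(feat.shape)

# toy example: a 4x4 response map with one strong simulated attribute peak
rng = np.random.default_rng(0)
m = rng.normal(size=(4, 4))
m[1, 2] = 5.0                          # simulated attribute location
a = adaptive_softmax_map(m, tau=0.5)   # a valid spatial distribution
```

The normalized map sums to one and preserves the location of the strongest response, so downstream attribute embeddings attend to the same region while the contribution of background activations shrinks.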

Breast cancer pathological images are considered the “gold standard” for clinical diagnosis of breast cancer, but manual diagnosis suffers from inherent drawbacks such as low efficiency and high subjectivity. Computer-aided diagnosis (CAD) systems can provide objective decision support for clinicians by deeply mining multi-level features such as tissue architecture and cytology from pathological images. However, current CAD systems are still challenged by complex background noise and inconsistency in cross-scale feature representation, which hinder the extraction of critical features. Therefore, this paper proposes a key feature dynamic enhancement network (KFDE) for breast cancer pathological image classification, in which the channel-spatial feature enhancement module (CSFE) and the multi-scale feature dynamic fusion module (MFDF) serve as the two core components. The CSFE module effectively suppresses background noise and highlights lesion regions through local channel variance analysis and an energy entropy-driven spatial focusing mechanism. The MFDF module employs a heterogeneous multi-branch convolutional architecture to adaptively fuse cross-scale features, addressing the information fragmentation caused by magnification variation. Experiments on the BreakHis dataset demonstrate that KFDE achieves significant performance improvements, with a benign/malignant classification accuracy of 99.74% and an eight-class subtype classification accuracy of 96.35%, outperforming existing mainstream models.
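The CSFE module's two ingredients — channel reweighting from local variance and spatial focusing from an energy-based map — can be sketched in NumPy. This is an assumed simplification, not the paper's formulation: channel weights come from a sigmoid of standardized per-channel variance (flat background channels get small weights), and the spatial map is the normalized squared-activation energy, so peaked high-energy regions (candidate lesions) are emphasized:

```python
import numpy as np

def csfe_sketch(x, eps=1e-8):
    """Simplified channel-spatial enhancement in the spirit of CSFE
    (assumed form; the paper's exact mechanism may differ).

    x: (C, H, W) feature tensor.
    """
    C, H, W = x.shape
    # channel weights: higher variance = more structure, less flat background
    var = x.reshape(C, -1).var(axis=1)
    cw = 1.0 / (1.0 + np.exp(-(var - var.mean()) / (var.std() + eps)))
    # spatial weights: normalized activation energy, peak scaled to 1
    energy = (x ** 2).sum(axis=0)
    p = energy / (energy.sum() + eps)
    sw = p / (p.max() + eps)
    # broadcast both weightings over the feature tensor
    return x * cw[:, None, None] * sw[None, :, :]

# toy example: channel 0 has a sharp "lesion" peak, channel 1 is flat noise
x = np.full((2, 8, 8), 0.1)
x[0, 3, 3] = 4.0
y = csfe_sketch(x)
```

With this toy input, the response at the peak location survives while uniform background activations are strongly attenuated, which is the qualitative behavior the abstract attributes to CSFE.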

Mammography is one of the most commonly used tools for early screening of breast cancer. Developing computer-aided diagnosis (CAD) systems based on mammographic images to assist doctors in making efficient and accurate diagnoses holds significant research value. Mass segmentation in mammograms is a core component of breast cancer CAD systems and an essential step in further qualitative analysis of breast cancer. However, significant challenges persist in mass segmentation from whole mammograms, including mislocalization caused by the small proportion of the image occupied by mass regions and difficulty in delineating boundaries caused by the blurred edges of mass areas. To address these challenges, this paper proposes a local attention and detail-enhanced network (LADE-Net) for mass segmentation in whole mammograms. LADE-Net employs an asymmetric encoder-decoder architecture and introduces a lightweight local attention (LA) module aimed at early and precise localization of breast mass regions. Importantly, we design a new detail-enhanced fusion residual block (DEFRB) to refine and enhance the learning of edge features in breast masses. We evaluated the performance of LADE-Net on two publicly available datasets (INbreast and CBIS-DDSM). Compared to previous works, LADE-Net achieved superior performance.
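The abstract describes the LA module only at a high level; a toy NumPy sketch of one possible lightweight local-attention gate (an assumption for illustration, not LADE-Net's actual design) shows the general idea — each position is gated by a sigmoid of its local mean response, so coherent high-response neighborhoods (candidate mass regions) are emphasized over isolated noisy pixels:

```python
import numpy as np

def local_attention_gate(x, k=3):
    """Toy local-attention gating (hypothetical form of an LA-style module).

    x: (H, W) single-channel feature map.
    k: odd local window size.
    Returns x weighted by a sigmoid of its k x k local mean, so positions
    supported by a whole high-response neighborhood are kept, while isolated
    spikes surrounded by background are damped.
    """
    H, W = x.shape
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")   # edge-replicate border for the window
    local_mean = np.empty_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            local_mean[i, j] = xp[i:i + k, j:j + k].mean()
    attn = 1.0 / (1.0 + np.exp(-local_mean))  # sigmoid gate in (0, 1)
    return x * attn

# on a constant map every gate value is identical: sigmoid(1) times the input
gated = local_attention_gate(np.ones((5, 5)))
```

A real module would learn the gating from data rather than use a fixed local mean, but the sketch captures why a local (rather than global) attention window suits small mass regions: the gate depends only on a neighborhood, so a small mass is not drowned out by image-wide statistics.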