
In recent years, multi-modal knowledge graphs (MMKGs) have emerged to enrich the representation of real-world entities with structural, textual, and visual features. However, the inherent heterogeneity among modalities poses significant challenges for entity alignment across KGs. In this study, we introduce GMFDE, a multi-modal entity alignment framework that integrates a gated residual fusion mechanism with a knowledge distillation strategy. The fusion module adaptively balances and refines modality-specific features, while the distillation component lets unimodal encoders learn complementary information from the fused multi-modal representation, promoting cross-modal consistency. Extensive experiments on both bilingual and cross-KG datasets demonstrate that GMFDE outperforms leading existing methods, and that its advantage is most pronounced in settings with limited alignment seeds.
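To make the two components concrete, the following is a minimal PyTorch sketch of what a gated residual fusion layer and a fused-to-unimodal distillation loss could look like. All layer names, the gating design, and the KL-based distillation objective are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualFusion(nn.Module):
    """Hypothetical sketch: fuse modality-specific embeddings with a
    learned softmax gate, then refine with a residual projection."""
    def __init__(self, dim: int, num_modalities: int = 3):
        super().__init__()
        self.gate = nn.Linear(dim * num_modalities, num_modalities)
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats):
        # feats: list of (batch, dim) tensors, one per modality
        stacked = torch.stack(feats, dim=1)                   # (B, M, D)
        weights = torch.softmax(
            self.gate(torch.cat(feats, dim=-1)), dim=-1)      # (B, M)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (B, D)
        # residual refinement over the averaged raw features
        return fused + self.proj(stacked.mean(dim=1))

def distillation_loss(student, teacher, tau: float = 2.0):
    """Assumed objective: KL divergence pulling a unimodal (student)
    embedding toward the fused (teacher) embedding as soft targets."""
    p = F.log_softmax(student / tau, dim=-1)
    q = F.softmax(teacher.detach() / tau, dim=-1)  # teacher is frozen
    return F.kl_div(p, q, reduction="batchmean") * tau * tau

# Toy usage with random structural, textual, and visual embeddings.
B, D = 4, 16
struct, text, vis = (torch.randn(B, D) for _ in range(3))
fusion = GatedResidualFusion(D)
fused = fusion([struct, text, vis])
loss = distillation_loss(text, fused)
```

Detaching the fused teacher embedding means the distillation gradient updates only the unimodal encoder, which matches the stated goal of letting each encoder absorb complementary information from the fused representation rather than the reverse.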