AMMF-Net: An End-to-End Adaptive Multimodal Medical Image Fusion Method
Conference: BIBE 2025 - The 8th International Conference on Biological Information and Biomedical Engineering
08/11/2025 - 08/13/2025 at Guiyang, China
Proceedings: BIBE 2025
Pages: 6
Language: English
Type: PDF
Authors:
Luo, Dening; Liu, Xinran; Li, Lisha; Jin, Xiaozhao
Abstract:
We propose AMMF-Net, a novel end-to-end network for adaptive multimodal medical image fusion. Built on an encoder-decoder framework, AMMF-Net incorporates a modality-specific channel attention mechanism that adaptively balances complementary features from different modalities (e.g., CT and MRI). This design enables the network to automatically learn optimal fusion strategies while preserving both structure and fine-grained detail in the fused output. A multi-component loss function combining pixel-wise, structural, and perceptual supervision guides the training process. Extensive experiments on public medical datasets demonstrate that AMMF-Net outperforms traditional methods (e.g., weighted averaging, wavelet, PCA) and deep feature-based approaches (e.g., VGG16Fusion), achieving state-of-the-art results with MSE as low as 0.0067, PSNR up to 21.732, SSIM up to 0.925, and MI up to 1.807. The superior quantitative and visual performance confirms the model’s effectiveness and clinical potential. Code is available at [link].
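The modality-specific channel attention described above can be pictured as a squeeze-and-excitation style gate applied separately to each modality's feature maps before fusion. The abstract does not specify the exact layer shapes or fusion rule, so the global average pooling, the two-layer bottleneck, and fusion by addition in this NumPy sketch are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def channel_attention(feat, w1, b1, w2, b2):
    """Squeeze-and-excitation style channel gate (illustrative sketch).

    feat : (C, H, W) feature maps from one modality's encoder branch.
    w1, b1 : bottleneck layer, shapes (R, C) and (R,).
    w2, b2 : expansion layer, shapes (C, R) and (C,).
    """
    s = feat.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ s + b1, 0.0)           # excitation: ReLU bottleneck -> (R,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # per-channel sigmoid gates in (0, 1)
    return feat * g[:, None, None]             # reweight each channel

def adaptive_fuse(feat_ct, feat_mri, params_ct, params_mri):
    """Gate each modality with its own (modality-specific) attention
    parameters, then fuse; additive fusion is an assumption here."""
    return (channel_attention(feat_ct, *params_ct)
            + channel_attention(feat_mri, *params_mri))
```

Because each branch carries its own attention parameters, the network can learn to emphasize, say, bone structure from CT channels and soft-tissue detail from MRI channels, rather than applying one shared weighting to both inputs.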

