AMMF-Net: An End-to-End Adaptive Multimodal Medical Image Fusion Method
Conference: BIBE 2025 - The 8th International Conference on Biological Information and Biomedical Engineering
August 11-13, 2025, Guiyang, China
Proceedings: BIBE 2025
Pages: 6 | Language: English | Type: PDF
Authors:
Luo, Dening; Liu, Xinran; Li, Lisha; Jin, Xiaozhao
Abstract:
We propose AMMF-Net, a novel end-to-end network for adaptive multimodal medical image fusion. Built on an encoder-decoder framework, AMMF-Net incorporates a modality-specific channel attention mechanism that adaptively balances complementary features from different modalities (e.g., CT and MRI). This design enables the network to automatically learn optimal fusion strategies while preserving both global structure and fine-grained detail in the fused output. A multi-component loss function combining pixel-wise, structural, and perceptual supervision guides the training process. Extensive experiments on public medical datasets demonstrate that AMMF-Net outperforms traditional methods (e.g., weighted averaging, wavelet, PCA) and deep feature-based approaches (e.g., VGG16Fusion), achieving state-of-the-art results with MSE as low as 0.0067, PSNR up to 21.732, SSIM up to 0.925, and MI up to 1.807. The superior quantitative and visual performance confirms the model's effectiveness and clinical potential. Code is available at [link].
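To illustrate the core idea of modality-specific channel attention followed by adaptive fusion, here is a minimal NumPy sketch. The abstract does not specify the attention internals, so this assumes a squeeze-and-excitation style design (global average pooling, a small bottleneck MLP, sigmoid gating); the weight shapes, function names, and fusion-by-addition are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    """Reweight channels of one modality's feature map.

    feats: (C, H, W) encoder features for one modality (e.g., CT or MRI)
    w1:    (R, C) bottleneck weights (R < C), w2: (C, R) expansion weights
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    squeezed = feats.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gate in (0, 1)
    hidden = np.maximum(0.0, w1 @ squeezed)
    gate = sigmoid(w2 @ hidden)
    # Scale each channel by its learned importance
    return feats * gate[:, None, None]

def adaptive_fuse(feat_ct, feat_mri, params_ct, params_mri):
    """Apply modality-specific attention, then merge complementary features.

    Summation is one simple merge rule; the actual fusion strategy in
    AMMF-Net is learned end-to-end and may differ.
    """
    attended_ct = channel_attention(feat_ct, *params_ct)
    attended_mri = channel_attention(feat_mri, *params_mri)
    return attended_ct + attended_mri
```

In a trained network, `w1`/`w2` would be learned per modality, so each branch can emphasize different channels (e.g., bone edges from CT, soft-tissue contrast from MRI) before the decoder reconstructs the fused image.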

