ViCAM-DFL: Visual Explanation-Driven Defenses against Model Poisoning in Decentralized Federated Learning-Enabled CyberEdge Networks
Conference: European Wireless 2025 - 30th European Wireless Conference
27.10.2025-29.10.2025 in Sophia Antipolis, France
Proceedings: European Wireless 2025
Pages: 6 | Language: English | Type: PDF
Authors:
Zheng, Jingjing; Gao, Yu; Li, Kai; Wu, Bochun; Ni, Wei; Dressler, Falko
Abstract:
In recent years, model poisoning attacks have emerged as a threat to the resilience of decentralized federated learning (DFL), as they corrupt model updates and compromise the integrity of collaborative training. To defend DFL against model poisoning attacks based on graph neural networks, this paper proposes a specialized defense framework, visual explanation class activation mapping for DFL (ViCAM-DFL). ViCAM-DFL transforms high-dimensional local model updates into low-dimensional, visually interpretable heat maps that reveal adversarial manipulations. These heat maps are further refined by an integrated auto-encoder, which amplifies subtle features to enhance separability and improve detection accuracy. Experimental evaluations on non-i.i.d. CIFAR-100 datasets demonstrate that ViCAM-DFL achieves substantial improvements in detecting adversarial manipulations. The framework consistently delivers optimal results on key evaluation metrics, including Recall, Precision, Accuracy, F1 Score, and AUC (all reaching 1.0), while maintaining a False Positive Rate (FPR) of 0.0, outperforming baseline methods. Furthermore, ViCAM-DFL exhibits strong robustness and generalizability across different deep learning architectures, e.g., ResNet-50 and RegNetY-800MF, confirming its adaptability and effectiveness in diverse DFL settings.
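The detection pipeline described in the abstract, projecting high-dimensional updates to low-dimensional heat maps and scoring them with an auto-encoder, can be sketched roughly as follows. This is an illustrative approximation under stated assumptions, not the paper's implementation: the chunk-averaging projection and the rank-k PCA (a linear auto-encoder) stand in for the actual class-activation mapping and trained auto-encoder, and all function names (`update_to_heatmap`, `fit_detector`, `anomaly_scores`) are hypothetical.

```python
import numpy as np

def update_to_heatmap(update, side=8):
    """Compress a flat model-update vector into a side x side "heat map"
    by averaging contiguous chunks. Hypothetical stand-in for the
    paper's class-activation-mapping projection."""
    chunks = np.array_split(np.asarray(update), side * side)
    return np.array([c.mean() for c in chunks]).reshape(side, side)

def fit_detector(benign_heatmaps, k=2):
    """Fit a rank-k PCA on heat maps from updates assumed benign.
    PCA acts here as a linear auto-encoder: encode = project onto the
    top-k principal directions, decode = project back."""
    X = np.stack([h.ravel() for h in benign_heatmaps])
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # mean and top-k principal directions

def anomaly_scores(heatmaps, mu, P):
    """Reconstruction error of each heat map under the fitted detector;
    a poisoned update should reconstruct poorly and score high."""
    X = np.stack([h.ravel() for h in heatmaps])
    Xc = X - mu
    recon = Xc @ P.T @ P  # encode then decode
    return np.linalg.norm(Xc - recon, axis=1)
```

In use, each client would score its neighbors' incoming updates and flag those whose reconstruction error far exceeds the typical benign level, e.g. some multiple of the median score.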

