Knowledge Distillation For Deep Learning Based Automatic Modulation Classification

Conference: EEI 2022 - 4th International Conference on Electronic Engineering and Informatics
06/24/2022 - 06/26/2022 at Guiyang, China

Proceedings: EEI 2022

Pages: 5
Language: English
Type: PDF

Authors:
Wang, Shuai; Liu, Chunwu; Ba, Junhao (College of Intelligent Science, National University of Defense Technology, China)

Abstract:
Many critical signal processing applications benefit from automatic modulation classification. Deep learning models have recently been applied to modulation recognition, outperforming classic feature-based machine learning techniques. However, automatic modulation classification remains challenging: deep learning requires ever deeper network structures to attain high accuracy, yet the deployment phase imposes computing-resource and real-time constraints. To address this deployment issue, this research employs a knowledge distillation strategy, first training a large network (the teacher) and then distilling it into a small network (the student) — the so-called student-teacher model. Experiments show that, at a suitable distillation temperature, the distillation-trained simple network achieves higher accuracy than the same simple network trained directly.
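The abstract does not give the training objective, but the standard knowledge distillation loss it alludes to combines a temperature-softened cross-entropy against the teacher's outputs with an ordinary cross-entropy against the ground-truth labels. The following is a minimal NumPy sketch of that objective; the function names, the temperature `T=4.0`, and the mixing weight `alpha=0.5` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: cross-entropy between the softened teacher and
    # student distributions, rescaled by T^2 as in Hinton et al.'s
    # formulation so gradient magnitudes stay comparable across temperatures.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1)) * T * T

    # Hard-target term: standard cross-entropy with the true modulation labels.
    p = softmax(student_logits)
    hard = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: 2 samples, 4 modulation classes (values are arbitrary).
student = np.array([[2.0, 0.5, 0.1, -1.0], [0.2, 1.5, 0.3, 0.0]])
teacher = np.array([[3.0, 1.0, 0.2, -2.0], [0.1, 2.5, 0.4, -0.5]])
labels = np.array([0, 1])
loss = distillation_loss(student, teacher, labels)
```

Raising the temperature spreads probability mass over the incorrect classes, exposing the teacher's "dark knowledge" about class similarity — which is why the paper reports that accuracy depends on choosing a reasonable distillation temperature.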