Learning to Learn to Demodulate with Uncertainty Quantification via Bayesian Meta-Learning

Conference: WSA 2021 - 25th International ITG Workshop on Smart Antennas
November 10–12, 2021, French Riviera, France

Proceedings: ITG-Fb. 300: WSA 2021

Pages: 6
Language: English
Type: PDF

Authors:
Cohen, Kfir M.; Park, Sangwoo; Simeone, Osvaldo (KCLIP, CTR, Department of Engineering, King’s College London, UK)
Shamai, Shlomo (Viterbi Electrical Engineering Department, Technion, Israel Institute of Technology, Haifa, Israel)

Abstract:
Meta-learning, or learning to learn, offers a principled framework for few-shot learning. It leverages data from multiple related learning tasks to infer an inductive bias that enables fast adaptation on a new task. The application of meta-learning was recently proposed for learning how to demodulate from few pilots. The idea is to use pilots received and stored for offline use from multiple devices in order to meta-learn an adaptation procedure with the aim of speeding up online training on new devices. Standard frequentist learning, which can yield relatively accurate “hard” classification decisions, is known to be poorly calibrated, particularly in the small-data regime. Poor calibration implies that the soft scores output by the demodulator are inaccurate estimates of the true probability of correct demodulation. In this work, we introduce the use of Bayesian meta-learning via variational inference for the purpose of obtaining well-calibrated few-pilot demodulators. In a Bayesian framework, each neural network weight is represented by a distribution, capturing epistemic uncertainty. Bayesian meta-learning optimizes over the prior distribution of the weights. The resulting Bayesian ensembles offer better-calibrated soft decisions, at the computational cost of running multiple instances of the neural network for demodulation. Numerical results for single-input single-output Rayleigh fading channels with transmitter non-linearities are provided that compare symbol error rate and expected calibration error for both frequentist and Bayesian meta-learning, illustrating that the latter is both more accurate and better calibrated.
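The abstract compares demodulators by expected calibration error (ECE), which measures the gap between a model's confidence and its empirical accuracy. As a rough illustration of this metric only (not of the paper's method), the sketch below computes a standard binned ECE estimate in NumPy; the bin count and interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the population-weighted average gap, over confidence
    bins, between mean confidence and empirical accuracy.

    confidences: array of top-class soft scores in [0, 1]
    correct:     array of 0/1 indicators of correct classification
    n_bins:      number of equal-width confidence bins (illustrative choice)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Assign each prediction to one bin (half-open, last bin closed at 1)
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of samples in bin
    return ece

# A perfectly calibrated example: 80% confidence, 80% empirical accuracy.
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(expected_calibration_error(conf, corr))  # -> 0.0
```

An overconfident demodulator (high soft scores but frequent symbol errors) would produce a large ECE under this estimate, which is the behavior the paper attributes to frequentist learning in the small-pilot regime.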