Expected Q-learning for Self-Organizing Resource Allocation in LTE-U with Downlink-Uplink Decoupling

Conference: European Wireless 2017 - 23rd European Wireless Conference
05/17/2017 - 05/19/2017 at Dresden, Germany

Proceedings: European Wireless 2017

Pages: 6
Language: English
Type: PDF


Authors:
Hu, Ye (Beijing University of Posts and Telecommunications, P.R. China)
MacKenzie, Richard (BT Technology, Service and Operations, UK)
Hao, Mo (Tsinghua SEM Advanced ICT Lab, P.R. China)

Abstract:
The exponentially growing demand for mobile data drives the need both to increase spectrum efficiency and to gain access to additional spectrum bands. LTE-U provides a way for LTE service to be delivered, in some regulatory areas, using a combination of licensed and unlicensed spectrum. In this paper, we consider the resource allocation problem in LTE-U networks with the downlink-uplink decoupling (DUDe) technique. Here, the spectrum allocation problem is formulated as a game-theoretic model which incorporates user association, spectrum allocation, and load balancing. We propose a decentralized expected Q-learning algorithm to solve this game. Using the proposed algorithm, the base stations can autonomously choose their optimal spectrum allocation schemes based only on limited information from the network. It is shown that the proposed algorithm converges to a stationary mixed-strategy distribution which constitutes a mixed-strategy Nash equilibrium for the studied game. Simulation results show the proposed Q-learning algorithm yields up to 12.7% and 51.1% improvement in total rate compared to traditional Q-learning and nearest-neighbor algorithms, respectively. Furthermore, it is shown that the proposed Q-learning algorithm always converges to a mixed Nash equilibrium, and needs 19% less time to converge compared to traditional Q-learning.
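The abstract does not give the paper's update rule, but the core idea that distinguishes expected Q-learning from the traditional variant can be sketched generically: instead of bootstrapping on the greedy maximum over next actions, the agent bootstraps on the expectation of the next-state Q-values under its current (mixed) strategy. The following is a minimal illustrative sketch, not the authors' algorithm; all function and variable names here are hypothetical.

```python
import numpy as np

def expected_q_update(Q, state, action, reward, next_state, policy,
                      alpha=0.1, gamma=0.9):
    """One expected Q-learning step (illustrative sketch).

    Q      : (n_states, n_actions) value table
    policy : (n_states, n_actions) row-stochastic mixed strategy
    Bootstraps on the policy-weighted expectation over next actions,
    rather than the max used by traditional Q-learning.
    """
    expected_next = np.dot(policy[next_state], Q[next_state])
    td_error = reward + gamma * expected_next - Q[state, action]
    Q[state, action] += alpha * td_error
    return Q

# Toy usage: 2 states, 2 actions, uniform mixed strategy.
Q = np.zeros((2, 2))
policy = np.full((2, 2), 0.5)
Q = expected_q_update(Q, state=0, action=0, reward=1.0, next_state=1, policy=policy)
```

Averaging over the mixed strategy rather than taking the max reduces the variance of the update target, which is consistent with the abstract's claim of faster convergence to a mixed-strategy equilibrium.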