Implications of decentralized Q-learning resource allocation in wireless networks

Published in IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), 2017

Recommended citation: Wilhelmi Roca, F., Bellalta, B., Cano Bastidas, C., & Jonsson, A. (2017). Implications of decentralized Q-learning resource allocation in wireless networks. In 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, Canada, 8-13 October 2017. Piscataway, NJ: IEEE. 5 p. http://ieeexplore.ieee.org/document/8292321/

Abstract: Reinforcement Learning is gaining attention from the wireless networking community due to its potential to learn well-performing configurations solely from observed results. In this work we propose a stateless variation of Q-learning, which we apply to exploit spatial reuse in a wireless network. In particular, we allow networks to modify both their transmission power and the channel used, based solely on the experienced throughput. We concentrate on a completely decentralized scenario in which no information about neighbouring nodes is available to the learners. Our results show that, although the algorithm is able to find the best-performing actions to enhance aggregate throughput, there is high variability in the throughput experienced by the individual networks. We identify the cause of this variability as the adversarial nature of our setup, in which the most-played actions provide intermittent good/poor performance depending on the neighbouring decisions. We also evaluate the effect of the intrinsic learning parameters of the algorithm on this variability.
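To give a flavour of the approach described in the abstract, the sketch below shows a stateless (single-state) Q-learner that selects a (channel, transmission power) pair with epsilon-greedy exploration and updates its action-value estimate from the observed throughput. This is an illustrative, bandit-style sketch under assumed channel/power sets and parameter values; the exact update rule, action space, and parameters used in the paper may differ.

```python
import random

# Illustrative sketch (not the paper's exact implementation): a decentralized
# node learns which (channel, tx power) pair to use from its own throughput.
CHANNELS = [1, 6, 11]          # assumed channel set
TX_POWERS_DBM = [5, 15, 20]    # assumed transmission power levels

ACTIONS = [(c, p) for c in CHANNELS for p in TX_POWERS_DBM]

class StatelessQLearner:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.alpha = alpha                     # learning rate (assumed value)
        self.epsilon = epsilon                 # exploration probability (assumed value)
        self.q = {a: 0.0 for a in ACTIONS}     # one Q-value per action, no states

    def select_action(self):
        # Epsilon-greedy: explore a random (channel, power) pair with
        # probability epsilon, otherwise exploit the best estimate so far.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Stateless Q-learning update: move the estimate towards the reward,
        # e.g. the (normalized) throughput observed after playing the action.
        self.q[action] += self.alpha * (reward - self.q[action])

# Usage: each network runs its own learner and only observes its own throughput.
learner = StatelessQLearner()
for _ in range(1000):
    action = learner.select_action()
    throughput = random.random()   # placeholder for the measured throughput
    learner.update(action, throughput)
```

Because each network learns independently from its own reward, the reward for a given action depends on what the neighbouring networks happen to play, which is the source of the throughput variability discussed in the abstract.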

Download paper here