Collaborative Spatial Reuse in Wireless Networks via Selfish Multi-Armed Bandits

Published in Ad Hoc Networks (Elsevier), 2019

Recommended citation: Wilhelmi, F., Cano, C., Neu, G., Bellalta, B., Jonsson, A., & Barrachina-Muñoz, S. (2019). Collaborative Spatial Reuse in Wireless Networks via Selfish Multi-Armed Bandits. Ad Hoc Networks, 88, 129-141. https://www.sciencedirect.com/science/article/pii/S1570870518302646

Abstract: Next-generation wireless deployments are characterized by being dense and uncoordinated, which often leads to inefficient use of resources and poor performance. To solve this, we envision the utilization of completely decentralized mechanisms that enhance Spatial Reuse (SR). In particular, we concentrate on Reinforcement Learning (RL), and more specifically on Multi-Armed Bandits (MABs), to allow Wireless Networks (WNs) to modify both their transmission power and channel based on their experienced throughput. In this work, we study the exploration-exploitation trade-off by means of the ε-greedy, EXP3, UCB, and Thompson sampling action-selection strategies. Our results show that optimal proportional fairness can be achieved, even if no information about neighboring networks is available to the learners and WNs operate selfishly. However, there is high temporal variability in the throughput experienced by the individual networks, especially for ε-greedy and EXP3. We trace this variability to the adversarial nature of the setting, in which the most frequently played actions yield intermittently good or poor performance depending on the decisions of neighboring WNs. We also show that this variability is reduced when using UCB and Thompson sampling, which are parameter-free policies that perform exploration according to the reward distribution of each action.
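To make the action-selection idea concrete, below is a minimal Python sketch of the selfish-MAB loop described in the abstract: each WN treats every (channel, transmit power) pair as an arm and learns only from its own normalized throughput, with no information about neighbors. The class names, parameter values, and reward source are illustrative assumptions, not the paper's implementation; UCB follows the same loop with a different selection rule.

```python
# Illustrative sketch of the selfish-MAB setup: each WN is an independent
# learner over joint (channel, transmit power) arms, rewarded with its own
# normalized throughput. All names and parameters here are assumptions.
import random

class EpsilonGreedyWN:
    """One WN running epsilon-greedy over joint (channel, power) arms."""

    def __init__(self, channels, tx_powers, epsilon=0.1):
        self.arms = [(c, p) for c in channels for p in tx_powers]
        self.epsilon = epsilon
        self.counts = [0] * len(self.arms)    # times each arm was played
        self.means = [0.0] * len(self.arms)   # running mean reward per arm

    def select_arm(self):
        # Explore with probability epsilon, otherwise exploit the best mean.
        if random.random() < self.epsilon:
            return random.randrange(len(self.arms))
        return max(range(len(self.arms)), key=lambda i: self.means[i])

    def update(self, arm, reward):
        # Incremental update of the mean observed normalized throughput.
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

class ThompsonWN(EpsilonGreedyWN):
    """Thompson sampling with Beta posteriors, using the Bernoulli trick
    for rewards in [0, 1] (an assumed reward model, not the paper's)."""

    def __init__(self, channels, tx_powers):
        super().__init__(channels, tx_powers)
        self.alpha = [1.0] * len(self.arms)
        self.beta = [1.0] * len(self.arms)

    def select_arm(self):
        # Sample each arm's posterior and play the highest draw.
        draws = [random.betavariate(self.alpha[i], self.beta[i])
                 for i in range(len(self.arms))]
        return max(range(len(self.arms)), key=lambda i: draws[i])

    def update(self, arm, reward):
        # Interpret the normalized reward as a success probability.
        success = 1 if random.random() < reward else 0
        self.alpha[arm] += success
        self.beta[arm] += 1 - success

# Usage: the reward would come from a network simulator or testbed
# measurement; a random placeholder stands in for it here.
wn = ThompsonWN(channels=[1, 6, 11], tx_powers=[5, 10, 15, 20])
for t in range(1000):
    arm = wn.select_arm()
    channel, power = wn.arms[arm]
    reward = random.random()  # placeholder for normalized throughput
    wn.update(arm, reward)
```

Because Thompson sampling adapts its exploration to each arm's observed reward distribution rather than to a fixed exploration rate, the sketch also hints at why the abstract reports lower temporal variability for it than for ε-greedy.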

Download paper here