Title: Convergent learning algorithms for potential games with unknown noisy rewards
Authors: Chapman, Archie C.; Leslie, David S.; Jennings, Nicholas R.
Keywords: learning in games
Abstract: In this paper, we address the problem of convergence to Nash equilibria in games with rewards that are initially unknown and which must be estimated over time from noisy observations. These games arise in many real-world applications, whenever rewards for actions cannot be prespecified and must be learned online. Standard results in game theory, however, do not consider such settings. Specifically, using results from stochastic approximation and differential inclusions, we prove the convergence of variants of fictitious play and adaptive play to Nash equilibria in potential games and weakly acyclic games, respectively. These variants all use a multi-agent version of Q-learning to estimate the reward functions and a novel form of the ε-greedy decision rule to select an action. Furthermore, we derive ε-greedy decision rules that exploit the sparse interaction structure encoded in two compact graphical representations of games, known as graphical and hypergraphical normal form, to improve the convergence rate of the learning algorithms. The structure captured in these representations naturally occurs in many distributed optimisation and control applications. Finally, we demonstrate the efficacy of the algorithms in a simulated ad hoc wireless sensor network management problem.
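The two building blocks named in the abstract, Q-learning reward estimation combined with ε-greedy action selection, can be illustrated with a minimal single-agent sketch. This is not the paper's multi-agent algorithm; the function names (`epsilon_greedy`, `q_learn_rewards`), the 1/n step size, and the stationary-reward assumption are illustrative choices.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a uniform random action (explore);
    otherwise pick the action with the highest current estimate (exploit)."""
    actions = list(q_values)
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=q_values.get)

def q_learn_rewards(sample_reward, actions, steps, epsilon=0.2, seed=0):
    """Estimate unknown expected rewards from noisy samples while acting
    epsilon-greedily. The 1/n step size makes each Q-value the running
    sample mean of the rewards observed for that action."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}   # reward estimates
    n = {a: 0 for a in actions}     # visit counts
    for _ in range(steps):
        a = epsilon_greedy(q, epsilon, rng)
        r = sample_reward(a, rng)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]   # incremental sample-mean update
    return q
```

For example, with two actions whose (here noiseless) rewards are 0 and 1, the estimates converge to the true values and the greedy choice settles on the better action. In the paper's setting each agent runs such an estimator over its joint-action payoffs while the others adapt simultaneously, which is what makes the convergence analysis nontrivial.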
Department/Unit/Centre: Discipline of Business Analytics
Type of Work: Working Paper
Appears in Collections: Working Papers - Business Analytics
Items in Sydney eScholarship Repository are protected by copyright, with all rights reserved, unless otherwise indicated.