Accession Number ADA584441
Title Nonconvergence to Saddle Boundary Points under Perturbed Reinforcement Learning.
Publication Date Dec 2012
Media Count 31p
Personal Author A. Rantzer; G. C. Chasparis; J. S. Shamma
Abstract This paper presents a novel reinforcement learning algorithm and provides conditions for global convergence to Nash equilibria. For several classes of reinforcement learning schemes, including the ones proposed here, excluding convergence to action profiles that are not Nash equilibria may not be trivial unless the step-size sequence is tailored to the specifics of the game. In this paper we sidestep these issues by introducing a perturbed reinforcement learning scheme in which the strategy of each agent is perturbed by a strategy-dependent perturbation (or mutation) function. Contrary to prior work on equilibrium selection in games, where perturbation functions are globally state dependent, the perturbation function here is assumed to be local, i.e., it depends only on the strategy of each agent. We provide conditions under which the strategies of the agents converge almost surely to an arbitrarily small neighborhood of the set of Nash equilibria. This extends prior analysis of reinforcement learning in games, which has primarily focused on urn processes. We finally specialize the results to a class of potential games.
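The abstract describes reinforcement learning in which each agent's mixed strategy is perturbed by a local, strategy-dependent mutation function before actions are drawn. The following is a minimal illustrative sketch of that idea, not the paper's exact scheme: the game (a 2x2 coordination game), the particular mutation function, and the step-size sequence are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric 2x2 coordination game: both agents prefer to match actions;
# the matched profiles are the pure Nash equilibria.
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

def mutation_weight(x, lam_max=0.05):
    """Illustrative *local* perturbation: depends only on this agent's own
    strategy x, and grows as x approaches a pure (boundary) strategy."""
    return lam_max * (1.0 - 4.0 * x[0] * (1.0 - x[0]))

def perturbed(x):
    """Mix the strategy with the uniform distribution by the mutation weight."""
    lam = mutation_weight(x)
    return (1.0 - lam) * x + lam * np.full_like(x, 1.0 / len(x))

# Both agents start from the uniform mixed strategy.
x = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]

for t in range(1, 20001):
    step = 1.0 / (t + 100)  # decreasing step-size sequence (an assumption)
    actions = [rng.choice(2, p=perturbed(xi)) for xi in x]
    for i in range(2):
        u = PAYOFF[actions[i], actions[1 - i]]   # realized payoff
        e = np.eye(2)[actions[i]]                # indicator of played action
        x[i] = x[i] + step * u * (e - x[i])      # reinforcement update
        x[i] = np.clip(x[i], 0.0, 1.0)
        x[i] /= x[i].sum()                       # stay on the simplex

print([xi.round(3) for xi in x])
```

Because the update moves a strategy only when the realized payoff is positive, the two agents reinforce whichever action they happen to coordinate on, while the boundary-sensitive mutation keeps strategies from locking onto a vertex too early; this loosely mirrors the role of the perturbation function discussed in the abstract.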
Keywords Game theory
Multiagent systems
Nash equilibria
Reinforcement learning

Source Agency Non Paid ADAS
NTIS Subject Category 72E - Operations Research
70D - Personnel Management, Labor Relations & Manpower Studies
96 - Business & Economics
Corporate Author Georgia Inst. of Tech., Atlanta. School of Electrical and Computer Engineering.
Document Type Journal article
Title Note Journal article.
NTIS Issue Number 1402
Contract Number FA9550-09-1-0538