Accession Number ADA584441
Title Nonconvergence to Saddle Boundary Points under Perturbed Reinforcement Learning.
Publication Date Dec 2012
Media Count 31p
Personal Author A. Rantzer; G. C. Chasparis; J. S. Shamma
Abstract This paper presents a novel reinforcement learning algorithm and provides conditions for global convergence to Nash equilibria. For several classes of reinforcement learning schemes, including the ones proposed here, excluding convergence to action profiles that are not Nash equilibria may not be trivial unless the step-size sequence is tailored to the specifics of the game. In this paper, we sidestep these issues by introducing a perturbed reinforcement learning scheme in which the strategy of each agent is perturbed by a strategy-dependent perturbation (or mutation) function. Contrary to prior work on equilibrium selection in games, where perturbation functions are globally state dependent, the perturbation function here is assumed to be local, i.e., it depends only on the strategy of each agent. We provide conditions under which the strategies of the agents converge almost surely to an arbitrarily small neighborhood of the set of Nash equilibria. This extends prior analysis of reinforcement learning in games, which has focused primarily on urn processes. Finally, we specialize the results to a class of potential games.
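For orientation, a minimal sketch of the kind of scheme the abstract describes is given below, in Python: each agent reinforces the action it just played in proportion to the payoff it received, then mixes its strategy with the uniform distribution through a perturbation weight that depends only on that agent's own strategy (the "local" perturbation). The step size, the perturbation function lam, and the payoff oracle utility are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def lam(x, eps=0.05, delta=1e-3):
    """Strategy-dependent (local) perturbation weight.

    Illustrative choice: the weight grows as the strategy x nears the
    boundary of the probability simplex, nudging play away from
    boundary points; it depends only on this agent's own strategy.
    """
    return eps * np.exp(-np.min(x) / delta)

def perturbed_rl_step(x, utility, n_actions, step):
    """One update of a single agent's mixed strategy x.

    utility(a) is assumed to return a realized payoff in [0, 1] for
    playing action a against the other agents' current actions.
    """
    a = np.random.choice(n_actions, p=x)       # sample an action from x
    u = utility(a)                             # observe the payoff
    e_a = np.eye(n_actions)[a]                 # one-hot vector for action a
    x = x + step * u * (e_a - x)               # reinforcement update
    l = lam(x)                                 # local perturbation weight
    x = (1 - l) * x + l * np.ones(n_actions) / n_actions  # mutate toward uniform
    return x / x.sum()                         # guard against rounding drift
```

With step * u in [0, 1], both the reinforcement update and the mutation step are convex combinations, so x remains a valid mixed strategy throughout.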
Keywords Game theory
Multiagent systems
Nash equilibria
Reinforcement learning
Strategy
Source Agency Non Paid ADAS
NTIS Subject Category 72E - Operations Research
70D - Personnel Management, Labor Relations & Manpower Studies
96 - Business & Economics
Corporate Author Georgia Inst. of Tech., Atlanta. School of Electrical and Computer Engineering.
Document Type Journal article
Title Note Journal article.
NTIS Issue Number 1402
Contract Number FA9550-09-1-0538