Multi-agent reinforcement learning (MARL) is an important and fundamental topic within agent-based research. After giving successful tutorials on this topic at EASSS 2004 (the European Agent Systems Summer School), ECML 2005, ICML 2006, EWRL 2008 and AAMAS 2009-2012, with different collaborators, we now propose a revised and updated tutorial covering both theoretical and practical aspects of MARL.
This tutorial will be hosted at the Twelfth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013).
Participants will be taught the basics of single-agent reinforcement learning and the associated theoretical convergence guarantees for Markov Decision Processes. We will then outline why these convergence guarantees no longer hold in a setting where multiple agents learn. We will explain practical approaches to scaling single-agent reinforcement learning to situations where multiple agents influence each other, and introduce a framework, based on game theory and evolutionary game theory, that allows a thorough analysis of the dynamics of multi-agent learning. Several research applications of MARL will be outlined in detail. The tutorial will include a practical hands-on session, where participants can experience the viability of reinforcement learning in several key application domains. Finally, a broad view of the challenges and prospects of multi-agent learning will be given.
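To give a flavour of the single-agent basics the tutorial starts from, the following is a minimal sketch of tabular Q-learning on a toy two-state MDP. The environment, hyperparameters, and function names here are purely illustrative assumptions, not part of the tutorial materials; they only demonstrate the kind of update rule whose convergence guarantees break down once several agents learn simultaneously.

```python
import random

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy deterministic 2-state, 2-action MDP.

    This is an illustrative sketch: action 1 always moves to state 1,
    which pays reward 1; action 0 stays in state 0 with reward 0.
    """
    rng = random.Random(seed)
    # Q-table indexed as Q[state][action]
    Q = [[0.0, 0.0] for _ in range(2)]

    def step(state, action):
        # Toy dynamics (an assumption for this example, not from the tutorial)
        if action == 1:
            return 1, 1.0  # move to state 1, receive reward 1
        return 0, 0.0      # stay in state 0, receive reward 0

    for _ in range(episodes):
        s = 0
        for _ in range(10):  # short fixed-length episodes
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max(range(2), key=lambda x: Q[s][x])
            s2, r = step(s, a)
            # standard single-agent Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# After learning, the action leading to reward dominates in state 0.
print(Q[0][1] > Q[0][0])
```

In a stationary MDP like this one, the update provably converges to the optimal values under standard step-size conditions; when a second learning agent makes the environment non-stationary, that guarantee is exactly what is lost, which motivates the game-theoretic analysis introduced later in the tutorial.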