Real World Applications of Reinforcement Learning

A special session of the annual IEEE-INNS International Joint Conference on Neural Networks, IJCNN 2012, part of the 2012 IEEE World Congress on Computational Intelligence, IEEE WCCI 2012, Brisbane, Australia, June 10-15, 2012.

Aim and Scope

Reinforcement Learning (RL) algorithms have long since left the tiny grid worlds of their early years. From robot control to autonomous navigation, research labs have been applying RL to increasingly difficult problems, showing that the paradigm is ready for the real world. In recent years, a number of papers have reported successful practical applications in fields as diverse as production control, finance, scheduling, communications, and autonomous vehicle control. While such examples are relevant, they do not abound, and RL is still far from being applied as routinely as more mature supervised machine learning techniques. Moreover, conferences and journals tend to dismiss “mere application” papers that do not carry relevant contributions at the theoretical level.

With this special session, we intend to gather recent examples of the application of RL to real-world problems, focusing in particular on the practical difficulties of applying existing RL algorithms rather than on theoretical innovations. The aim is to give an updated picture of the state of the art in real-world applications of RL.

We solicit original submissions describing applications of all flavors of reinforcement learning and approximate dynamic programming in a real-world scenario. Topics of interest include, but are not limited to, the application of:

  • Approximate dynamic programming
  • Reinforcement learning
  • Batch RL
  • Policy gradients
  • Options learning
  • Hierarchical RL
  • Multi-objective RL
  • Multi-agent RL
  • Bandit problem solvers
  • Markov Decision Processes

in fields such as

  • Industrial control
  • Production control
  • Automotive control
  • Autonomous vehicle control
  • Logistics
  • Telecommunication networks
  • Sensor networks
  • Ambient intelligence
  • Robotics
  • Finance

In preparing your submission, please motivate the use of RL; point out the difficulties encountered in your implementation; discuss potential bottlenecks and limitations of your approach; and, if possible, compare its performance with that of a more traditional method.

Important Dates

Submission deadline: Jan 18, 2012 (extended from Dec 19, 2011)
Acceptance notification: Feb 20, 2012
Final version submission: April 2, 2012
Early registration: April 2, 2012
Conference: June 10-15, 2012

Please check the main WCCI site for updates.

Submission Instructions

Papers should not exceed 8 pages in the IEEE double-column format, US Letter paper size. Please prepare your submission according to the common instructions for WCCI conferences. LaTeX templates are available on the WCCI website.

Please upload your paper via the IJCNN 2012 submission page, selecting the special session as the main research topic:

  • S35. Real World Applications of Reinforcement Learning

Accepted papers will be published in the WCCI 2012 proceedings.

Organizers

Matteo Gagliolo, Peter Vrancx and Ann Nowé
AI Lab, Computational Modeling group (CoMo)
Department of Computer Science
Vrije Universiteit Brussel (VUB)
Brussels, Belgium

Program Committee

  • Ana Bazzan, Instituto de Informatica, Universidade Federal do Rio Grande do Sul, Brazil
  • Lucian Busoniu, Team SequeL, INRIA Lille - Nord Europe, France
  • Yann-Michaël De Hauwere, CoMo, Department of Informatics, Vrije Universiteit Brussel, Belgium
  • Robain De Keyser, SySTEMS, Universiteit Gent, Belgium
  • Madalina Drugan, CoMo, Department of Informatics, Vrije Universiteit Brussel, Belgium
  • Matteo Gagliolo, CoMo, Department of Informatics, Vrije Universiteit Brussel, Belgium
  • Zhong-Ping Jiang, Electrical & Computer Engineering, Polytechnic Institute of NYU, USA
  • Koichi Moriyama, Institute of Scientific and Industrial Research, Osaka University, Japan
  • Ann Nowé, CoMo, Department of Informatics, Vrije Universiteit Brussel, Belgium
  • Kazuhiro Ohkura, Mechanical Systems Engineering, Hiroshima University, Japan
  • Warren Powell, Dept. of Operations Research and Financial Engineering, Princeton University, USA
  • Radu-Emil Precup, Department of Automation and Applied Informatics, Politehnica University of Timisoara, Romania
  • Martin Riedmiller, Machine Learning Lab, Albert-Ludwigs University Freiburg, Germany
  • Peter Sunehag, Research School of Information Sciences and Engineering, Australian National University, Australia
  • Julian Togelius, Center for Computer Games Research, IT University of Copenhagen, Denmark
  • Peter Vamplew, Graduate School of Information Technology and Mathematical Sciences, University of Ballarat, Australia
  • Peter Vrancx, CoMo, Department of Informatics, Vrije Universiteit Brussel, Belgium
  • Marco Wiering, Department of Artificial Intelligence, University of Groningen, Netherlands