CFP: Workshop at PPSN 2014 "In Search of Synergies between Reinforcement learning and Evolutionary Computation"



Session I “Reinforcement Learning into Evolutionary Computation”

15-minute presentation + 5 minutes for questions

1.         “‘Guided’ Restarts Hill-Climbing”, David Catteeuw, Madalina M. Drugan, and Bernard Manderick (slides)

2.         “A Method for Auxiliary Objectives Selection using Reinforcement Learning: An Overview”, Arina Buzdalova and Maxim Buzdalov (slides)

3.         “Schemata Monte Carlo Network Optimization”, Pedro Isasi, Madalina M. Drugan, and Bernard Manderick (slides)

4.         Discussion panel, chaired by Madalina Drugan

Session II “Evolutionary Computation in Reinforcement Learning”

1.         “Annealing-Pareto Multi-Objective Multi-Armed Bandit Algorithm”, Saba Q. Yahyaa, Madalina M. Drugan, and Bernard Manderick (slides)

2.         “A Q-learning Based Evolutionary Algorithm for Sequential Decision Making Problems”, Haobo Fu, Peter R. Lewis, and Xin Yao (slides)

3.         “Schemata bandits”, Madalina M. Drugan, Pedro Isasi and Bernard Manderick (slides)

4.         Discussion panel, chaired by Bernard Manderick


Motivation and background

A recent trend in machine learning is the transfer of knowledge from one area to another. In this workshop, we focus on potential synergies between reinforcement learning (RL) and evolutionary computation (EC). RL addresses sequential decision problems in an initially unknown stochastic environment, which typically requires substantial computational resources, while the main strengths of EC are its general applicability and computational efficiency. Although at first sight they seem very different, these two learning techniques address essentially the same problem: the maximisation of an agent's reward in a potentially unknown environment that is not always fully observable. These machine learning methods may therefore benefit from an exchange of ideas, resulting in a better theoretical understanding and/or improved empirical efficiency.
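To make the shared problem concrete, here is a small illustrative sketch (not drawn from any workshop paper; the reward function and all names are invented for illustration): a (1+1) evolutionary hill-climber from the EC side and an epsilon-greedy bandit from the RL side, both maximising the same noisy black-box reward.

```python
import random

def reward(x):
    """Toy stochastic reward: noisy negative distance to a hidden optimum at 7."""
    return -abs(x - 7) + random.gauss(0, 0.1)

def one_plus_one_ea(steps=2000, sigma=1.0):
    """EC view: (1+1) evolutionary algorithm -- mutate, keep the better candidate."""
    x = random.uniform(0, 10)
    best = reward(x)
    for _ in range(steps):
        y = x + random.gauss(0, sigma)   # Gaussian mutation
        r = reward(y)
        if r >= best:                    # greedy survivor selection
            x, best = y, r
    return x

def epsilon_greedy(steps=2000, epsilon=0.1):
    """RL view: epsilon-greedy bandit over a discretised set of the same candidates."""
    arms = list(range(11))               # candidate solutions 0..10
    counts = [0] * len(arms)
    values = [0.0] * len(arms)           # running mean reward per arm
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.randrange(len(arms))                      # explore
        else:
            a = max(range(len(arms)), key=lambda i: values[i])   # exploit
        r = reward(arms[a])
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
    return arms[max(range(len(arms)), key=lambda i: values[i])]
```

Both procedures balance exploration (mutation, random arm pulls) against exploitation (survivor selection, greedy arm choice) and converge towards the hidden optimum, which is exactly the kind of structural parallel the workshop aims to explore.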


Topics of interest

Topics of interest include but are not limited to:

  • Reinforcement learning using evolutionary algorithms or techniques,

  • Optimization algorithms including meta-heuristics, evolutionary algorithms, etc. for dynamic and uncertain environments,

  • Theoretical results on the learnability in dynamic and uncertain environments,

  • Novel evolutionary computation frameworks for dynamic environments,

  • Online self-adapting systems,

  • Online automatic configuration systems,

  • Games using optimization techniques,

  • Decision making in dynamic and uncertain environments,

  • Real-world applications in engineering, business, computer science, biological sciences, scientific computation, etc. in dynamic and uncertain environments,

  • Dynamic/reactive scheduling and planning.


Information for authors

We invite submissions as extended abstracts of max. 4 pages in Springer's LNCS style. Abstracts should be submitted in PDF format directly to the organisers of the workshop (see their emails above), mentioning “PPSN2014 workshop submission” in the subject line.

All accepted papers will be presented at the workshop and will be made available online on this website.


Program committee 

El Ghazali Talbi (INRIA, France)

Marco Wiering (RUG, Groningen)

Gregoire Danoy (FSTC-CSC-ILIAS, Luxembourg)

Yann-Michaël De Hauwere (VUB, Belgium)

Camelia Chira (BUU, Romania)



Dr. Ing. Madalina M. Drugan,

Computational Modeling group, Vrije Universiteit Brussel, Belgium



Prof. dr. Bernard Manderick,

Computational Modeling group, Vrije Universiteit Brussel, Belgium



Important dates

Paper submission (extended): 17 June 2014

Decision: 30 June 2014

Final paper submission: 15 July 2014

Workshop date: 13 September 2014