Emerging techniques and applications in Multi-objective Reinforcement Learning (ESANN2015)
**** EXTENDED DEADLINE (abstracts 28 November, full papers 12 December) ****
Multi-objective optimization (MOO) and reinforcement learning (RL) are two well-established research fields in learning, optimization, and control. RL addresses sequential decision-making problems in initially unknown stochastic environments, involving stochastic policies and unknown temporal delays between actions and their observable effects. MOO, a sub-area of multi-criteria decision making (MCDM), considers the simultaneous optimization of more than one objective; a decision maker, i.e. an algorithm or a technique, decides which solutions are relevant to the user and when to present them for further consideration. MOO algorithms are currently seldom applied to stochastic optimization, which makes their combination with RL a largely unexplored but promising research area.
State of the art
Examples of approaches that combine MOO and RL include:
Multi-objective reinforcement learning (MORL) is an extension of RL to multi-criteria stochastic rewards (also called utilities in decision theory). Techniques from multi-objective evolutionary computation have been used in MORL to improve the exploration-exploitation trade-off. The resulting algorithms are hybrids between MCDM and stochastic optimization: RL algorithms enriched with the intuition and efficiency of MOO in handling multi-objective problems.
Preference-based reinforcement learning combines RL with preference learning, extending RL with qualitative reward information, e.g. ranking functions, that the user can specify directly. As in MORL, the algorithms rely on new order relations to compare policies.
Some multi-objective evolutionary algorithms also use methods inspired by reinforcement learning to cope with noisy and uncertain environments.
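As an illustration of the first line of work, the following is a minimal sketch of tabular multi-objective Q-learning with linear scalarization, a common baseline for extending RL to vector-valued rewards. The toy environment sizes, weights, and hyperparameters are purely illustrative, not taken from any specific paper in the session.

```python
import random

# Sketch only: tabular Q-learning with a vector-valued Q-table and
# linear scalarization of two objectives. All constants are illustrative.
N_STATES, N_ACTIONS, N_OBJ = 4, 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
WEIGHTS = [0.5, 0.5]  # user-chosen trade-off between the two objectives

# Q[s][a] is a vector estimate, one entry per objective.
Q = [[[0.0] * N_OBJ for _ in range(N_ACTIONS)] for _ in range(N_STATES)]

def scalarize(vec):
    """Collapse a vector-valued estimate to a scalar for action selection."""
    return sum(w * v for w, v in zip(WEIGHTS, vec))

def choose_action(s):
    """Epsilon-greedy selection on the scalarized Q-values."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: scalarize(Q[s][a]))

def update(s, a, reward_vec, s_next):
    """Per-objective Q-learning update; the greedy next action is
    chosen on the scalarized values, then each objective is updated."""
    a_next = max(range(N_ACTIONS), key=lambda b: scalarize(Q[s_next][b]))
    for i in range(N_OBJ):
        target = reward_vec[i] + GAMMA * Q[s_next][a_next][i]
        Q[s][a][i] += ALPHA * (target - Q[s][a][i])

# Toy interaction: a two-objective reward and a deterministic transition.
update(0, choose_action(0), [1.0, -0.5], 1)
```

Keeping the Q-table vector-valued (rather than scalarizing the reward up front) is what lets MCDM ideas enter: the scalarization, or a more general order relation over policies as in preference-based RL, can be swapped without retraining from scratch.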
Aim and scope
The main goal of this special session is to solicit research on the potential synergies between multi-objective optimization, evolutionary computation, and reinforcement learning. We encourage submissions describing applications of MOO for agents acting in difficult environments that may be dynamic, uncertain, and partially observable, e.g. games, multi-agent applications such as scheduling, and other real-world applications.
Topics of interest
- Novel frameworks combining both MOO and RL
- Multi-objective optimization algorithms such as meta-heuristics and evolutionary algorithms for dynamic and uncertain environments
- Theoretical results on learnability in multi-objective dynamic and uncertain environments
- On-line self-adapting systems or automatic configuration systems
- Solving multi-objective sequential decision making problems with RL
- Real-world multi-objective applications in engineering, business, computer science, biological sciences, scientific computation
Ann Nowe (firstname.lastname@example.org), Artificial Intelligence Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050, Brussels, Belgium
Important dates
Abstract submission: 28 November 2014
Full paper submission: 12 December 2014
Notification of acceptance: 31 January 2015
ESANN conference: 22-24 April 2015
- Papers must not exceed 6 pages, including figures and references.
- More information: https://www.elen.ucl.ac.be/esann/index.php?pg=guidelines