Distributed Architectures for Deep Reinforcement Learning

Reinforcement learning (RL) is one of the key AI paradigms for the development of autonomous systems. RL allows a learning agent to solve a task through trial-and-error interactions with its environment. By observing the results of its actions, the agent can determine the optimal sequence of actions to take in order to reach some goal.
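
To make this trial-and-error loop concrete, the sketch below implements tabular Q-learning on a hypothetical five-state chain environment. All names and parameter values here are illustrative choices for the example, not part of any specific library:

    # Minimal trial-and-error loop: tabular Q-learning on a toy chain.
    import random

    N_STATES, N_ACTIONS = 5, 2   # toy chain: move left (0) or right (1)
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    def step(state, action):
        """Toy dynamics: reaching the last state yields reward 1."""
        nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    for episode in range(500):
        state = 0
        for _ in range(20):
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if random.random() < EPSILON:
                action = random.randrange(N_ACTIONS)
            else:
                action = Q[state].index(max(Q[state]))
            next_state, reward = step(state, action)
            # Update the value estimate from the observed outcome of the action.
            td_target = reward + GAMMA * max(Q[next_state])
            Q[state][action] += ALPHA * (td_target - Q[state][action])
            state = next_state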

Deep learning is a popular research track within the field of machine learning. The main idea behind deep learning is to build architectures consisting of multiple layers of representations in order to learn high-level abstractions. An example is the deep neural networks used in image processing: starting from individual pixels, each successive layer of the network learns progressively more complex features, until the highest layers are able to recognize objects in the image.
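
As a minimal illustration of this layered idea, here is a NumPy sketch of a small fully-connected network whose successive layers transform raw inputs into increasingly abstract features. The layer sizes are arbitrary choices for the example:

    # Stacked layers of representation: a tiny fully-connected network.
    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [784, 128, 64, 10]   # e.g. flattened image -> class scores
    weights = [rng.normal(0, 0.1, (m, n))
               for m, n in zip(layer_sizes, layer_sizes[1:])]

    def forward(x):
        """Each hidden layer builds a higher-level representation of its input."""
        for W in weights[:-1]:
            x = np.maximum(0.0, x @ W)   # ReLU hidden layers
        return x @ weights[-1]           # linear output layer

    scores = forward(rng.normal(size=784))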

Recent research has also shown that deep learning can be used to learn useful representations for reinforcement learning tasks. This has led to a new generation of state-of-the-art algorithms that combine deep learning and reinforcement learning. One example is the Deep Q-Network (DQN) work, in which an agent learned to play Atari games by observing only the screen and the game score.
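
The simplified sketch below shows the core of a DQN-style update, with experience replay and a periodically synchronized target network. For brevity it uses a linear Q-function over state features rather than the convolutional network of the actual DQN work, and it fills the replay buffer with random data in place of real environment interaction:

    # Core DQN-style update: replay buffer + frozen target network.
    import random
    import numpy as np

    N_FEATURES, N_ACTIONS = 8, 4
    GAMMA, LR, SYNC_EVERY = 0.99, 1e-3, 100

    rng = np.random.default_rng(0)
    online_W = np.zeros((N_FEATURES, N_ACTIONS))
    target_W = online_W.copy()

    # Buffer of (state, action, reward, next_state, done) transitions;
    # random placeholders stand in for collected experience.
    replay = [(rng.normal(size=N_FEATURES), int(rng.integers(N_ACTIONS)),
               float(rng.random()), rng.normal(size=N_FEATURES), False)
              for _ in range(1000)]

    for step_i in range(1000):
        for s, a, r, s2, done in random.sample(replay, 32):
            # TD target uses the frozen target network for stability.
            target = r if done else r + GAMMA * (s2 @ target_W).max()
            td_error = target - (s @ online_W)[a]
            online_W[:, a] += LR * td_error * s   # linear Q: dQ/dW[:, a] = s
        if step_i % SYNC_EVERY == 0:
            target_W = online_W.copy()   # periodically sync the target network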

One important limitation of these deep RL algorithms is that they require a very large number of learning interactions before achieving good performance, with experiments sometimes running for days or weeks. A possible solution is to parallelize learning by running multiple agents and combining their results. In this thesis you will extend existing deep RL algorithms to a distributed learning setting and evaluate their performance on an RL benchmark system.
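
One possible parallelization scheme is sketched below with Python's standard multiprocessing module: each worker computes an update from its own experience and a central learner averages the results. This is a synchronous variant; asynchronous schemes instead apply worker updates without waiting for all workers. The worker's gradient computation is a placeholder:

    # Synchronous parallel learning: workers propose updates, learner averages.
    import numpy as np
    from multiprocessing import Pool

    N_PARAMS, N_WORKERS = 16, 4

    def worker_update(args):
        seed, params = args
        rng = np.random.default_rng(seed)
        # Placeholder for "interact with the environment and compute a
        # gradient estimate from the collected transitions".
        return rng.normal(size=params.shape)

    if __name__ == "__main__":
        params = np.zeros(N_PARAMS)
        with Pool(N_WORKERS) as pool:
            for it in range(10):
                tasks = [(it * N_WORKERS + i, params) for i in range(N_WORKERS)]
                grads = pool.map(worker_update, tasks)
                params += 0.01 * np.mean(grads, axis=0)   # averaged update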

Resources:

Contact:

Peter Vrancx