Knowledge Transfer in Deep Reinforcement Learning


Reinforcement learning (RL) is one of the key AI paradigms for the development of autonomous systems. RL allows a learning agent to solve a task through trial-and-error interactions with its environment. By observing the outcomes of its actions, the agent can gradually determine the optimal sequence of actions to take in order to reach some goal.
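The trial-and-error loop above can be sketched with tabular Q-learning on a toy task. Everything here (the corridor environment, rewards, and hyperparameters) is invented purely for illustration:

```python
import random

# Minimal tabular Q-learning sketch on a tiny corridor task
# (states, rewards and hyperparameters are placeholders, not from
# any benchmark used in the thesis).
N_STATES = 5          # states 0..4; reaching state 4 yields reward 1
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition; reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

random.seed(0)
for _ in range(200):                      # trial-and-error episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(greedy)  # the learned policy should move right toward the goal
```

After enough episodes the greedy policy moves right in every state, i.e. the agent has discovered the optimal action sequence from observed rewards alone.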

Deep learning is a popular research track within the field of machine learning. The main idea behind deep learning is to build architectures consisting of multiple layers of representations in order to learn high-level abstractions. A well-known example is the use of deep neural networks in image processing: starting from individual pixels, each successive layer of the network learns progressively more complex features, until the highest layers are able to recognize objects in the image.
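The layered-representation idea amounts to composing simple nonlinear transformations. The sketch below stacks three such layers; the weights are random placeholders, not a trained model, and the layer sizes are arbitrary:

```python
import numpy as np

# Sketch of "multiple layers of representation": each layer applies a
# linear map plus a nonlinearity, so deeper layers compute increasingly
# abstract functions of the raw input. Weights are random placeholders.
rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.maximum(0.0, x @ w + b)   # ReLU layer

x = rng.normal(size=(1, 64))            # e.g. a flattened patch of pixels
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)   # low-level features
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)   # mid-level features
w3, b3 = rng.normal(size=(16, 4)), np.zeros(4)     # high-level outputs

h1 = layer(x, w1, b1)                   # composition of layers:
h2 = layer(h1, w2, b2)                  # out = f3(f2(f1(x)))
out = layer(h2, w3, b3)
print(out.shape)                        # (1, 4)
```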

Recent research has also shown that deep learning can be used to learn useful representations for reinforcement learning tasks. This has led to a new generation of state-of-the-art algorithms that combine deep learning and reinforcement learning. One example is the Deep Q-Network (DQN) research, where an agent learned to play Atari games by observing only the screen and the game score.
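At the core of DQN, the Q-table is replaced by a network trained on the temporal-difference target r + γ·max_a' Q(s', a'; θ). As a rough sketch, a linear "network" over made-up feature vectors stands in here for the deep convolutional net used on Atari screens:

```python
import numpy as np

# Sketch of the DQN-style temporal-difference update with function
# approximation: the network prediction Q(s, a; theta) is regressed
# toward r + gamma * max_a' Q(s', a'; theta). A linear model over
# random features stands in for the convolutional network (assumption
# made purely to keep the example short).
rng = np.random.default_rng(1)
GAMMA, LR, N_ACTIONS, DIM = 0.99, 0.01, 4, 8

theta = np.zeros((DIM, N_ACTIONS))      # weights of the linear Q-"network"

def q_values(features):
    return features @ theta             # one Q-value per action

s = rng.normal(size=DIM)                # feature vector of current state
s2 = rng.normal(size=DIM)               # feature vector of next state
a, r, done = 2, 1.0, False              # an observed transition

target = r + (0.0 if done else GAMMA * q_values(s2).max())
td_error = target - q_values(s)[a]
theta[:, a] += LR * td_error * s        # gradient step on 0.5 * td_error**2

print(round(float(td_error), 3))        # 1.0 (all Q-values start at zero)
```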

A downside of the deep RL approach is that learning always starts from scratch: the deep neural network has to be retrained for every new reinforcement learning task, which makes the learning process very slow. In this thesis you will investigate different transfer learning strategies that try to use knowledge from previous tasks to speed up learning of new tasks. These methods will be compared empirically on a benchmark system to determine if a speedup in learning can be achieved.
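One simple transfer strategy among those that could be compared is warm-starting: initialize the network for a new task with feature layers learned on a previous task, and retrain only from there. The layer names, shapes, and "pretrained" values below are placeholders for illustration:

```python
import numpy as np

# Transfer by weight initialisation (a sketch, not the thesis method):
# copy the shared feature layers from a source-task network and
# re-initialise only the task-specific output head. All weights here
# are random stand-ins for actually trained parameters.
rng = np.random.default_rng(2)

source_weights = {                       # imagine these were trained on task A
    "layer1": rng.normal(size=(64, 32)),
    "layer2": rng.normal(size=(32, 16)),
    "head":   rng.normal(size=(16, 4)),  # task-specific output layer
}

def init_for_new_task(source, n_new_actions):
    """Copy shared feature layers; re-initialise the task-specific head."""
    new = {k: v.copy() for k, v in source.items() if k != "head"}
    new["head"] = rng.normal(size=(source["head"].shape[0], n_new_actions))
    return new

target_weights = init_for_new_task(source_weights, n_new_actions=6)
print(target_weights["head"].shape)      # (16, 6)
```

Fine-tuning then proceeds as normal training, but from this informed starting point rather than from scratch; the empirical question is whether this yields a measurable speedup.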


Resources:

Contact:

Peter Vrancx