Unsupervised Feature Learning for Deep Reinforcement Learning


Reinforcement learning (RL) is one of the key AI paradigms for the development of autonomous systems. RL allows a learning agent to solve a task based on trial-and-error interactions with its environment. By observing the results of its actions, the agent can determine the optimal sequence of actions to take in order to reach some goal.
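
For concreteness, the sketch below shows this trial-and-error loop as tabular Q-learning on a toy chain environment. The environment, state and action counts, and learning parameters are illustrative assumptions for the example, not part of the thesis topic itself.

```python
import random

N_STATES, N_ACTIONS = 5, 2          # chain of 5 states; actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def step(state, action):
    """Toy chain dynamics: reward 1 for reaching the rightmost state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, float(nxt == N_STATES - 1), nxt == N_STATES - 1

def greedy(qs):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(qs)
    return random.choice([a for a, q in enumerate(qs) if q == best])

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for _ in range(500):                # episodes of trial-and-error interaction
    state, done = 0, False
    while not done:
        # epsilon-greedy: usually exploit current estimates, sometimes explore
        action = random.randrange(N_ACTIONS) if random.random() < EPSILON else greedy(Q[state])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge Q(s,a) toward the bootstrapped return estimate
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

print("Greedy policy (1 = move right):", [greedy(Q[s]) for s in range(N_STATES)])
```

Note that the states here are given directly as table indices; the thesis asks what to do when no such ready-made representation is available.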

One issue when applying RL, however, is that human intervention is needed to define an appropriate representation for the learning problem. Usually, the designer selects a number of features to describe the system, based on their knowledge of the system and of reinforcement learning. In this thesis you will investigate methods to automate feature extraction for RL, allowing a reinforcement learning agent to autonomously determine suitable features for the problem to be solved.


The representation learning method will be based on the unsupervised feature learning methods used in deep learning systems. Deep learning systems train and stack multiple layers of feature extractors, leading to deep architectures that represent multiple levels of abstraction of the problem. Stacking the representational layers in this way generally leads to progressively higher-level representations of the data (e.g. moving from pixels, to edges, to objects in vision-based tasks). Recent research has also shown that these multiple layers of representation are useful for learning features in reinforcement learning tasks. The key questions that you will investigate in this thesis are: how useful are deep representations learned with unsupervised methods, and can feature extractors be shared across multiple tasks?
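
As one illustration of such greedy layer-wise unsupervised feature learning, the sketch below stacks two plain autoencoders in NumPy, each trained to reconstruct the codes produced by the layer below. The layer sizes, learning rate, and random stand-in data are assumptions made for the example; the thesis is not tied to this particular architecture.

```python
# A minimal sketch of greedy layer-wise feature learning with tied-weight
# autoencoders. The random input array stands in for raw observations.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200):
    """Train a one-hidden-layer autoencoder to reconstruct X; return the encoder."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights (decoder is W.T)
    b_h, b_o = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_h)               # hidden code = the learned features
        R = sigmoid(H @ W.T + b_o)             # reconstruction of the input
        err = R - X
        # Backprop of the squared reconstruction error through tied weights
        dR = err * R * (1 - R)                 # gradient at the output pre-activation
        dH = (dR @ W) * H * (1 - H)            # gradient at the hidden pre-activation
        W -= lr * (X.T @ dH + (H.T @ dR).T) / len(X)
        b_h -= lr * dH.mean(axis=0)
        b_o -= lr * dR.mean(axis=0)
    return W, b_h

# Greedily stack layers: each layer is trained on the codes of the previous one.
X = rng.random((256, 32))                      # stand-in for raw observations (e.g. pixels)
features = X
encoders = []
for n_hidden in (16, 8):                       # progressively more abstract codes
    W, b = train_autoencoder(features, n_hidden)
    encoders.append((W, b))
    features = sigmoid(features @ W + b)       # feed the codes to the next layer

print("Final feature shape:", features.shape)  # these codes could feed an RL learner
```

The same stack of encoders could, in principle, be reused as a shared feature extractor across several RL tasks, which is exactly the second question raised above.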


Resources:

Contact:

Peter Vrancx