Reinforcement Learning is currently applied to single tasks, for instance parking a car, playing a video game, or navigating to a goal. Hierarchical Reinforcement Learning makes it possible to decompose a complex task into simpler sub-tasks. For instance, an agent may progressively learn to grab objects, turn them, open doors, navigate through hallways (with doors), and then do something interesting in a complete building. This project applies Hierarchical Reinforcement Learning to very complex tasks for which the agent has to learn many skills. Playing a single video game is no longer the goal, but only one small skill that may sometimes be needed by an agent taking care of an elderly person.
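As a minimal sketch of how such a decomposition can be expressed, the snippet below follows an options-style structure: a high-level controller repeatedly picks a skill (sub-policy), which then acts in the environment until its own termination condition fires. All names (Skill, HierarchicalAgent) are illustrative placeholders, and env.step(action) is assumed to return (state, reward, done); this is not a reference implementation of the project.

```python
import random

class Skill:
    """A sub-task policy that runs until its own termination condition fires."""
    def __init__(self, name, policy, terminates):
        self.name = name
        self.policy = policy          # maps state -> low-level action
        self.terminates = terminates  # maps state -> bool (sub-task finished?)

    def run(self, env, state):
        # Execute the sub-policy until the sub-task is done or the episode ends.
        while not self.terminates(state):
            state, reward, done = env.step(self.policy(state))
            if done:
                break
        return state

class HierarchicalAgent:
    """Top-level controller that repeatedly decides which skill to execute.

    The choice is random here, as a stand-in for a learned high-level policy.
    """
    def __init__(self, skills):
        self.skills = skills

    def act(self, env, state):
        skill = random.choice(self.skills)  # a learned high-level policy would go here
        return skill.run(env, state)
```

The key point is that the high-level policy reasons over skills rather than primitive actions, so a building-scale task reduces to sequencing sub-tasks such as "open door" or "navigate hallway".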
Another great advantage of Hierarchical Reinforcement Learning is that some of the sub-tasks may be solved using fixed policies. If a robot is already able to walk, why re-learn that? Walking can be considered “known”, and the robot will learn more interesting behaviors built on top of it. This makes it possible to apply Reinforcement Learning to problems for which partial (and sometimes provably good) solutions already exist. The agent learns to combine these solutions, and can also learn any skill that it needs and that was not provided.
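The sketch below illustrates this mix of fixed and learnable skills: a hand-coded controller is reused as-is, while missing skills are left to be trained. The controller body and the skill names are hypothetical stand-ins, not part of any existing robot software.

```python
class LearnableSkill:
    """Placeholder for a skill trained from scratch with any standard RL algorithm."""
    def __init__(self, name):
        self.name = name
        self.q_table = {}  # e.g. tabular Q-values; a function approximator in practice

    def act(self, state):
        # Greedy action from whatever has been learned so far (stub).
        values = self.q_table.get(state, {"noop": 0.0})
        return max(values.items(), key=lambda kv: kv[1])[0]

def walk_controller(state):
    """Pre-existing, trusted walking policy: treated as 'known' and never re-learned.

    A trivial stand-in here; on a real robot this would call the existing gait controller.
    """
    return "step_forward"

# The skill library mixes the fixed policy with skills still to be learned;
# learning effort is spent only on the missing pieces.
skill_library = {
    "walk": walk_controller,
    "open_door": LearnableSkill("open_door").act,
    "grab_object": LearnableSkill("grab_object").act,
}
```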
This project consists of the following steps:
At the end of this project, the agent should be able to play and learn autonomously, building a large set of skills that it can then combine to quickly solve new tasks whose goals are provided by humans. Learning to deliver mail is easier when you don’t have to first learn what a door knob looks like.
Research topics: