Multi-agent reinforcement learning of coordination and problem structure

Research in machine learning and reinforcement learning is increasingly moving towards multi-agent solutions, where distinct entities called agents jointly solve problems such as routing traffic in a network or balancing load in a power grid. When not all circumstances that may arise after deployment can be anticipated, learning offers an interesting alternative to fixed, pre-programmed solutions. However, learning in the presence of other dynamic, learning agents is challenging, and particularly difficult to scale to large groups of agents. Current scalable techniques assume that agents interact only sparsely, and resolve the few conflicts that do arise locally.

In settings where agents do interact more strongly, knowledge about these problem-specific interactions can be used to structure coordination between the agents. This knowledge may be specified a priori or learnt during operation. However, current algorithms require each agent to have a global view, which limits how far agents can be decoupled; they also cannot estimate the strength of the interactions, and they are only applicable in settings where all agents share the same interests. A sketch of how such structural knowledge can drive coordination is given below.
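As an illustration only, the following sketch shows one way structural knowledge can be exploited, in the spirit of coordination-graph approaches: a joint value is decomposed into local pairwise terms over an interaction graph, so agents only need to coordinate along the graph's edges. The agent names, action sets and payoff tables are hypothetical and not taken from the project.

```python
# Illustrative sketch only: coordination structured by a known interaction graph.
# All agents, actions and payoff values below are hypothetical.
from itertools import product

agents = ["a1", "a2", "a3"]
actions = [0, 1]                      # each agent picks action 0 or 1

# Interaction graph: only these pairs need to coordinate.
edges = [("a1", "a2"), ("a2", "a3")]  # a1 and a3 never interact directly

# Local payoff tables, one per interacting pair (hand-filled here; in
# practice these would be learnt).
q_local = {
    ("a1", "a2"): {(0, 0): 1.0, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.8},
    ("a2", "a3"): {(0, 0): 0.5, (0, 1): 1.2, (1, 0): 0.1, (1, 1): 0.3},
}

def joint_value(joint):
    """The global value decomposes into a sum of local pairwise terms."""
    return sum(q_local[(i, j)][(joint[i], joint[j])] for i, j in edges)

# Exhaustive maximisation is fine for three agents; coordination-graph
# algorithms (e.g. variable elimination) avoid enumerating the full
# joint action space.
best = max(
    (dict(zip(agents, acts)) for acts in product(actions, repeat=len(agents))),
    key=joint_value,
)
print(best, joint_value(best))
```

The decomposition is what makes the approach scalable: each agent only reasons over the actions of its neighbours in the graph rather than over the full joint action space.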

In this project, we will investigate the automatic detection of interactions between agents and its use for local coordination, without requiring agents to have a full view of other agents' states and actions. Furthermore, we will validate the techniques developed from these insights in settings that are fully cooperative, fully competitive, and mixed; a toy illustration of interaction detection follows below.
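Purely as an illustration of what detecting interactions could mean, the sketch below estimates how strongly another agent's action influences an agent's own reward, and only coordinates when that influence is large. The statistic, the threshold and the sample data are hypothetical assumptions, not the project's algorithm.

```python
# Illustrative sketch only: one simple way to detect whether another agent's
# action influences my reward is to compare my average reward conditioned on
# that agent's action. Threshold and data below are hypothetical.
from collections import defaultdict

def interaction_strength(samples):
    """samples: list of (other_agent_action, my_reward) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for other_action, reward in samples:
        totals[other_action] += reward
        counts[other_action] += 1
    means = [totals[a] / counts[a] for a in totals]
    return max(means) - min(means)   # large spread -> strong interaction

# The other agent's action clearly matters here: reward drops when it plays 1.
samples = [(0, 1.0), (0, 0.9), (1, 0.1), (1, 0.2)]
if interaction_strength(samples) > 0.5:     # 0.5 is an arbitrary threshold
    print("coordinate with this agent")      # e.g. add an edge to the graph
else:
    print("act independently")
```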

Research topics:  


Project Info

Start: 01/01/2013

End: 31/12/2016

Funding: FWO

Involved Members: Ann Nowé