Publications: Learning in Multi-Agent Systems
Honesty and deception in populations of selfish, adaptive individuals. The Knowledge Engineering Review, 15. (In Press).
The Emergence of Honest Signaling. In European Conference on Complex Systems. Lucca, Italy. (2014).
The Limits and Robustness of Reinforcement Learning in Lewis Signaling Games. Connection Science, 26(2), 177. doi:10.1080/09540091.2014.885303. (2014).
Distributed Learning and Multi-Objectivity in Traffic Light Control. Connection Science, 26(1), 65-83. (2014).
Continuous Action Reinforcement Learning Automata, an RL technique for controlling production machines. Vrije Universiteit Brussel, Brussels, Belgium. (2013).
The Limits of Reinforcement Learning in Lewis Signaling Games. In Proceedings of the 13th Adaptive and Learning Agents workshop. Saint Paul, MN, USA. (2013).
Context sensitive reward shaping for sparse interaction multi-agent systems. Accepted for publication in the Knowledge Engineering Review. (2013).
Learning Coordinated Traffic Light Control. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-13). (2013).
Game Theory and Multi-agent Reinforcement Learning. In Reinforcement Learning: State of the Art (pp. 441-470). Springer. Retrieved from http://www.springer.com/engineering/computational+intelligence+and+complexity/book/978-3-642-27644-6. (2012).
Honest Signaling: Learning Dynamics versus Evolutionary Stability. In Proceedings of the 21st Belgian-Dutch Conference on Machine Learning. Ghent, Belgium. (2012).
Hierarchical routing and traffic grooming in IP/MPLS-based ASON/GMPLS multi-domain networks. Photonic Network Communications, 23(3), 217-229. (2012).
Network Wide Synchronization in Wireless Sensor Networks. In 19th IEEE Symposium on Communications and Vehicular Technology in the Benelux (SCVT 2012). Eindhoven, Netherlands: IEEE. (2012).
Learning Approach to Coordinate Exploration with Limited Communication in Continuous Action Games. In AAMAS 2012 Workshop on Adaptive and Learning Agents. IFAAMAS. (2012).