The Limits of Reinforcement Learning in Lewis Signaling Games

Title: The Limits of Reinforcement Learning in Lewis Signaling Games
Publication Type: Conference Paper
Year of Publication: 2013
Authors: Catteeuw, D., Manderick, B.
Editors: Devlin, S., Hennes, D., Howley, E.
Conference Name: Proceedings of the 13th Adaptive and Learning Agents Workshop
Conference Location: Saint Paul, MN, USA
Keywords: game theory, Lewis signaling games, reinforcement learning, signaling
Abstract

We study how communication systems can emerge through repeated interaction between two individuals. We apply three reinforcement learning algorithms (Roth-Erev learning, Learning Automata, and Q-Learning) to the repeated Lewis signaling game, a game-theoretic model of a communication problem. Our experiments show that each of these algorithms always reaches a state of optimal communication, even for games with more than two types (N > 2) and for arbitrary type distributions.
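To make the setup concrete, here is a minimal sketch of the repeated game loop with a Roth-Erev learner; the class name, the initial propensities, and the uniform type distribution are illustrative assumptions, not details taken from the paper.

```python
import random

N = 3  # number of types, signals, and responses

class RothErev:
    """Chooses actions proportionally to accumulated propensities."""
    def __init__(self, n_stimuli, n_actions):
        self.prop = [[1.0] * n_actions for _ in range(n_stimuli)]

    def act(self, stimulus):
        weights = self.prop[stimulus]
        return random.choices(range(len(weights)), weights=weights)[0]

    def learn(self, stimulus, action, reward):
        self.prop[stimulus][action] += reward  # reinforce what was played

sender = RothErev(N, N)    # maps types to signals
receiver = RothErev(N, N)  # maps signals to responses

for _ in range(10_000):
    t = random.randrange(N)          # nature draws a type (uniform here)
    s = sender.act(t)                # sender emits a signal
    r = receiver.act(s)              # receiver picks a response
    reward = 1.0 if r == t else 0.0  # both are rewarded on a match
    sender.learn(t, s, reward)
    receiver.learn(s, r, reward)
```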

We show how these algorithms are related and how they can be reduced to a simple random process that only updates behavior on success and never deviates from what was once successful. We call this random process "win-stay/lose-inaction".
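One way to read win-stay/lose-inaction is as follows: each agent keeps a choice forever once it has led to success, and otherwise keeps guessing at random. A minimal sketch under that reading, with the dictionary representation and the uniform type draw as assumptions:

```python
import random

def win_stay_lose_inaction(N, rng=random):
    """Run win-stay/lose-inaction on an N-type Lewis signaling game with a
    uniform type distribution; return the number of interactions until the
    agents' mappings are complete, i.e. communication is optimal."""
    sender = {}    # type -> signal, kept forever once it led to success
    receiver = {}  # signal -> response, kept forever once it led to success
    steps = 0
    while len(sender) < N:
        steps += 1
        t = rng.randrange(N)                   # nature draws a type
        s = sender.get(t, rng.randrange(N))    # stay if fixed, else guess
        r = receiver.get(s, rng.randrange(N))  # stay if fixed, else guess
        if r == t:       # success: both lock in their choices (win-stay)
            sender[t] = s
            receiver[s] = r
        # on failure nothing changes (lose-inaction)
    return steps
```

Because a choice is only locked in on success, and a success locks the sender's and receiver's choices simultaneously, the fixed mappings grow into a bijection and the loop terminates in a state of optimal communication.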

For Lewis signaling games where the distribution over the types is uniform, the expected number of interactions needed by win-stay/lose-inaction to reach a state of optimal communication is a function of N: 1.45 * N^3.02, where N is the size of the game (the number of types, signals, and responses).
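A hypothetical check of this fitted curve, assuming the win_stay_lose_inaction sketch above, simply compares simulated means against the prediction:

```python
import statistics

# Compare simulated convergence times with the fitted curve 1.45 * N^3.02
# (assumes win_stay_lose_inaction from the previous sketch).
for N in (2, 3, 4, 5):
    mean_steps = statistics.mean(win_stay_lose_inaction(N) for _ in range(1000))
    print(f"N={N}: simulated {mean_steps:.0f}, fitted {1.45 * N ** 3.02:.0f}")
```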