Coordinating Human and Agent Behaviour in Collective-Risk Scenarios

Many situations in which humans interact among themselves or through technologies in hybrid socio-technical systems resemble social dilemmas, i.e. situations in which participants have to choose between short-term personal profit and long-term social benefit. The behavioural outcome in those dilemmas depends strongly on how successfully the participants assess the risk associated with the uncertainty of future rewards and anticipate their opponents' choices.

Take, for instance, climate change. Alleviating (or even reverting) this severe phenomenon requires the cooperation of many countries with different ideologies, customs and economic prospects for their industries, which in many cases are still heavily dependent on fossil fuels. The measures that need to be taken will have a high impact on both industrialised countries and the so-called new economies. However, if the transition to renewable energy sources keeps being postponed, the consequences will almost certainly be dire. As long as the risk of a climate or environmental disaster is perceived to be low, individuals, or countries in this case, are likely to act to maximise their own welfare over that of the collective. Only in high-risk situations will they be persuaded to make sufficient investments to ensure that the disaster is avoided. This situation has been operationalised in game theory as the collective-risk game. In this game each participant is given an endowment and must decide, over a fixed number of rounds, how much (up to a predefined amount per round) to contribute to the common good. If the joint contributions of all participants over those rounds exceed a certain threshold, which is reached when everyone gives half or more of the predefined amount, the disaster is averted and each participant keeps the remainder of the endowment as a reward (hence the dilemma). When the target is not reached, however, the disaster can occur, meaning that they lose the remainder of the endowment, with a probability defined by a risk parameter. Experiments show that people only tend to contribute enough to avoid the disaster when they perceive the risk to be high.
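To make the payoff structure concrete, the sketch below simulates a single play of the collective-risk game in Python. The parameter values (six players, an endowment of 40, per-round contributions capped at 4 over ten rounds, a threshold of 120 and a risk of 0.9) are illustrative choices for this example, not a claim about any specific experiment.

```python
import random

def collective_risk_game(contributions, endowment=40,
                         threshold=120, risk=0.9):
    """Simulate one play of the collective-risk game.

    contributions: one list of per-round contributions per player.
    Returns each player's payoff.
    """
    total = sum(sum(player) for player in contributions)
    remainders = [endowment - sum(player) for player in contributions]
    if total >= threshold:
        return remainders              # target reached: disaster averted
    if random.random() < risk:         # target missed: disaster occurs
        return [0] * len(remainders)   # with probability `risk`
    return remainders                  # lucky escape despite failure

# Six players each give half of the per-round cap (2 of 4) for ten
# rounds, exactly meeting the threshold: everyone keeps 40 - 20 = 20.
print(collective_risk_game([[2] * 10 for _ in range(6)]))
```

The dilemma is visible in the payoffs: a player who free-rides keeps more of the endowment whenever the others still reach the threshold, but risks losing everything if too many players reason the same way.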

Peer-to-peer energy markets and cloud-computing architectures are technology-related examples of such collective-risk scenarios, with less disastrous outcomes. In those situations, it would be a tragedy if the energy market failed to achieve the benefits of cooperation, leading to its demise, or if the loss or overconsumption of computing resources led to a period of unavailability for the system's many users. In addition, similar problems can arise in complex societies defined by autonomous multi-agent systems, where non-human agents need to deal with uncertain situations, as was discussed before.

In order to guide both humans and agents in the previous examples towards an outcome that benefits the collective, one needs a clear understanding of the mechanisms that define individual behaviour at the micro-level, and of how these behaviours aggregate into the dynamics one observes at the macroscopic level. To achieve this goal, we can either perform data analysis, when sufficient quantities of information are publicly available, or construct theoretical models that can be verified and improved through behavioural experiments. Data analysis by itself is limited in the causal, dynamic insights it can provide without a model showing that the knowledge extracted from the data can indeed produce the observed macroscopic behaviour. The current proposal therefore focuses on the latter approach, grounding it in the mathematical and computational modelling of strategic behaviour, i.e. linking game theory and artificial intelligence research.

Within that context, as mentioned earlier, the problem of risk anticipation and social welfare has been operationalised as the collective-risk game. To explain the behavioural results observed in this experiment, models have been proposed that show how increasing the risk in the game transforms the dynamics from an outcome dominated by defection to one that generates sufficient cooperation. Some works have also aimed to identify the policies required to induce high levels of cooperation, although these are not directly linked to how humans behave in this game. The policies learned in those works focus mostly on the question of which action to take using information about past actions, behaviour we call backward-looking.
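As a concrete illustration of backward-looking behaviour, the hypothetical rule below chooses a contribution using only the history of what the other players did; the reciprocation rule and its parameters are our own illustrative assumptions, not a policy taken from the literature.

```python
def backward_looking_contribution(others_history, n_others=5,
                                  fair_share=2, defect=0):
    """Hypothetical backward-looking rule: react only to the past.

    others_history: per-round totals contributed by the other players.
    """
    if not others_history:
        return fair_share                   # nothing to react to yet
    if others_history[-1] >= fair_share * n_others:
        return fair_share                   # reciprocate cooperation
    return defect                           # withhold after free-riding
```

Such a rule never reasons about the rounds still to come: it cannot, for example, raise its contribution near the deadline to rescue a group that is about to miss the threshold.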

The problem, which defines the core of this proposal, is that although humans may think about past behaviour, they often use that information to analyse the potential consequences of their actions for the future, which we will call forward-looking or anticipatory behaviour. Sporadic prior work on anticipation in artificial intelligence research has revealed that forward-looking behaviour is potentially better suited to model human behaviour. Notwithstanding the importance of this behaviour for human decision-making and its potential for the development of hybrid socio-technical systems, little attention has been given to the development of its theories, systems and applications, an issue we aim to overcome through this proposal, focusing particularly on social and technological problems related to the collective-risk game.
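To contrast with the backward-looking rule above, the sketch below gives one possible, again purely illustrative, forward-looking agent for the collective-risk game: it evaluates each candidate contribution by forecasting the final group total and choosing the action that maximises its expected remaining endowment. The forecast of the other players' future contributions and all parameter values are assumptions made for the example.

```python
def forward_looking_contribution(my_total, group_total, rounds_done,
                                 rounds=10, endowment=40, cap=4,
                                 threshold=120, risk=0.9,
                                 others_per_round=10):
    """Hypothetical anticipatory rule: one-step lookahead on the
    expected final payoff, assuming the other players jointly give
    `others_per_round` per round and this agent gives half the cap
    in every remaining round after this one."""
    rounds_left = rounds - rounds_done          # includes this round
    future_self = (cap // 2) * (rounds_left - 1)
    best_action, best_value = 0, float("-inf")
    for action in range(cap + 1):
        projected = (group_total + action + future_self
                     + others_per_round * rounds_left)
        kept = endowment - my_total - action - future_self
        p_keep = 1.0 if projected >= threshold else 1.0 - risk
        expected = p_keep * kept                # expected final payoff
        if expected > best_value:
            best_action, best_value = action, expected
    return best_action

# First round of a fresh game: giving 2 is just enough to keep the
# projected total at the threshold, so the agent contributes 2.
print(forward_looking_contribution(my_total=0, group_total=0, rounds_done=0))
```

Unlike the backward-looking rule, this agent can raise its contribution when the projected total falls short of the threshold, the kind of anticipatory adjustment that purely history-based rules cannot express.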

Research topics:

Project Info

Start: 01/01/2017
End: 31/12/2020
Funding: FWO
Involved Members: Elias Fernández, Tom Lenaerts, Ann Nowé