How much does it help to know what she knows you know?

On Friday, October 2, Harmen de Weerd will defend his thesis entitled “If you know what I mean: Agent-based models for understanding the function of higher-order theory of mind”. The public defense will be held at 14:30 in the Aula of the Academy building of the University of Groningen. On the occasion of this defense, the Institute of Artificial Intelligence and Cognitive Engineering will organize a thematic mini-symposium in the morning.

The tentative schedule for the symposium is given below. Abstracts of the talks can be found at the bottom of this page.

How much does it help to know what she knows you know?

Date: October 2, 2015
Location: Oude Zittingszaal, Oude Boteringestraat 38

9:30-9:45 Opening by Rineke Verbrugge
9:45-10:15 Andrés Perea
Forward induction reasoning versus equilibrium reasoning
10:15-10:45 Virginia Dignum
Human-agent interaction
10:45-11:15 Niels Taatgen
Learning theory of mind in negotiation
11:15-11:45 Coffee/tea break
11:45-12:15 Harmen de Weerd
If you know what I mean:
Agent-based models of higher-order theory of mind
12:15-12:45 Daniel van der Post
Evolving models of social cognition


The symposium will be held in the Oude Zittingszaal of the University of Groningen. This room is located at Oude Boteringestraat 38, within short walking distance of the Academy building.


Forward induction reasoning versus equilibrium reasoning
Andrés Perea

In the literature on static and dynamic games, most rationalizability concepts have an equilibrium counterpart. In two-player games, the equilibrium counterpart is obtained by taking the epistemic conditions of the rationalizability concept and adding the following correct beliefs assumption: (a) each player believes that the opponent is correct about his beliefs, and (b) each player believes that the opponent believes that he is correct about the opponent’s beliefs. In this talk, I explain why there is no equilibrium counterpart to the forward induction concept of extensive-form rationalizability (Pearce, 1984; Battigalli, 1997), epistemically characterized by common strong belief in rationality (Battigalli and Siniscalchi, 2002). The reason is that there are games where the epistemic conditions of common strong belief in rationality are logically inconsistent with the correct beliefs assumption. In fact, I show that this inconsistency holds for “most” dynamic games of interest.

Paper: http://epicenter.name/Perea/Papers/FI-Equilibrium.pdf
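
For readers who want conditions (a) and (b) in symbols, one way to write them is in standard type-space notation. The shorthand below (types t_i, belief maps b_i, strategy sets S_i) is generic epistemic-game-theory notation of my own choosing and may differ from the notation used in the talk and the paper.

```latex
% Player i's type t_i induces a belief b_i(t_i) over the opponent's
% strategy-type pairs (s_j, t_j). Generic shorthand, not the paper's notation.
% (a) i believes that j is correct about i's beliefs, i.e., i assigns
%     probability 1 to opponent types that put probability 1 on t_i:
(a)\quad b_i(t_i)\bigl(\{(s_j, t_j) : b_j(t_j)(S_i \times \{t_i\}) = 1\}\bigr) = 1
% (b) i believes that j believes that i is correct about j's beliefs,
%     i.e., i assigns probability 1 to opponent types t_j that satisfy
%     condition (a) with the roles of i and j interchanged:
(b)\quad b_i(t_i)\bigl(\{(s_j, t_j) : t_j \text{ satisfies (a) with } i \leftrightarrow j\}\bigr) = 1
```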


Human-agent interaction
Virginia Dignum

The ability to exhibit social behaviour is paramount for agents to engage in meaningful interaction with people. This requires not only agent models that start from and integrate different socio-cognitive elements, such as emotions, social norms, or personalities, but also organisation models that structure and regulate the interaction between people and agents. Robots, intelligent vehicles, virtual coaches, and serious games are currently being developed that exhibit social behaviour: to facilitate social interactions, to enhance decision making, to improve learning and skill training, to facilitate negotiations, and to generate insights about a domain.

In this talk, I will present our current work on organisational models. In particular, I present a model for reasoning about organisational structures and a model that regulates resource requests and sharing by means of conditional norms and use policies. I conclude with some new directions of work on agent deliberation architectures.
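
As a rough illustration of what a conditional norm for resource sharing might look like operationally, here is a minimal sketch. The names (ConditionalNorm, ResourceRequest) and the reserve rule are my own illustrative assumptions, not part of Dignum’s models.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ResourceRequest:
    agent: str
    resource: str
    amount: int

@dataclass
class ConditionalNorm:
    # A norm permits a request only if its condition holds in the current state.
    description: str
    condition: Callable[[ResourceRequest, dict], bool]

def permitted(request: ResourceRequest, state: dict, norms: list) -> bool:
    # A request is permitted only if every norm's condition is satisfied.
    return all(norm.condition(request, state) for norm in norms)

# Illustrative norm: sharing is allowed only while a reserve of 10 units remains.
norms = [ConditionalNorm(
    description="keep a reserve of 10 units",
    condition=lambda req, st: st["stock"][req.resource] - req.amount >= 10,
)]
state = {"stock": {"water": 25}}
print(permitted(ResourceRequest("alice", "water", 10), state, norms))  # True
print(permitted(ResourceRequest("bob", "water", 20), state, norms))    # False
```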


Learning theory of mind in negotiation
Niels Taatgen

In negotiation, it is useful to have a sense of the other party’s goals, as well as of their negotiation style, and to base your own decisions on that knowledge. Our experiments with human participants show that they are not always sensitive to this. To train people to become better negotiators, we have developed a cognitive model that can act as a training agent for negotiation. The agent uses theory of mind to assess the human participant’s play style, and shows its “thoughts” to the participant, thereby encouraging similar behavior. Preliminary experiments show that this approach is successful: participants’ scores improve when they play against a metacognitive agent as opposed to an agent with a fixed strategy.
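
To make the idea of reading an opponent’s negotiation style concrete, here is a minimal sketch of first-order reasoning over observed bids. It is purely illustrative: the functions estimate_style and my_next_offer are invented for this sketch and are not the cognitive model described in the talk.

```python
def estimate_style(opponent_bids):
    """Crudely classify the opponent from the trend in their demands:
    a conceding opponent lowers their demand over the rounds."""
    if len(opponent_bids) < 2:
        return "unknown"
    avg_step = (opponent_bids[-1] - opponent_bids[0]) / (len(opponent_bids) - 1)
    return "conceding" if avg_step < 0 else "tough"

def my_next_offer(my_last_offer, opponent_bids):
    # First-order reasoning about the opponent: hold firm against a
    # conceding opponent, concede a little against a tough one.
    if estimate_style(opponent_bids) == "conceding":
        return my_last_offer
    return my_last_offer - 1

opponent_bids = [10, 9, 7]              # hypothetical demands over three rounds
print(estimate_style(opponent_bids))    # conceding
print(my_next_offer(8, opponent_bids))  # 8 (hold firm)
```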


If you know what I mean:
Agent-based models of higher-order theory of mind
Harmen de Weerd

When people engage in social interactions, they often rely on their theory of mind: their ability to reason about unobservable mental content of others, such as beliefs, goals, and intentions. This ability allows them both to understand why others behave the way they do and to predict their future behavior. People can also make use of higher-order theory of mind by applying theory of mind recursively and reasoning about the way others use theory of mind, as in the sentence “Alice believes that Bob does not know about the surprise party”. However, the evolutionary origins of this ability are unknown. Using agent-based simulations, we show how individuals can benefit from the use of higher-order theory of mind. We show that higher-order theory of mind reasoning can be helpful both in competitive settings and in cooperative settings. Higher-order theory of mind appears to be especially useful when competitive and cooperative motives are mixed.
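
For intuition about what “applying theory of mind recursively” means computationally, here is a minimal sketch of order-k reasoning in repeated rock-paper-scissors. Only the recursion is the point: the simulations in the thesis are richer (probabilistic beliefs, learning, several game settings), and all names below are my own.

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value
COUNTER = {loser: winner for winner, loser in BEATS.items()}        # move that beats key

def predict(order, opp_history, own_history):
    """Predict the opponent's next move using order-k theory of mind.
    ToM-0 extrapolates the opponent's past move frequencies; ToM-k assumes
    the opponent is a ToM-(k-1) reasoner looking at *my* history."""
    if order == 0:
        if not opp_history:
            return random.choice(MOVES)
        return Counter(opp_history).most_common(1)[0][0]
    # One level down: simulate the opponent predicting me, then best-responding.
    their_prediction_of_me = predict(order - 1, own_history, opp_history)
    return COUNTER[their_prediction_of_me]

def best_response(order, opp_history, own_history):
    return COUNTER[predict(order, opp_history, own_history)]

# Against an opponent seen playing rock twice, while I have played paper twice:
print(best_response(0, ["rock", "rock"], ["paper", "paper"]))  # paper (beats rock)
print(best_response(1, ["rock", "rock"], ["paper", "paper"]))  # rock: a ToM-1 player
# expects the opponent to counter my paper with scissors, and beats scissors.
```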


Evolving models of social cognition
Daniel van der Post

Cognitive and agent-based models can be used to study social cognition. For example, cognitive models can be used to investigate the limits of ‘higher-order theory of mind’ in humans, and agent-based models can be used to test different ‘scenarios’ for the evolution of various forms of social cognition, essentially simulating different hypothesized ‘evolutionary pressures’ and studying the extent to which they select for complex social cognition. However, some cognitive and agent-based models actually deliver an opposite message, one that is often experienced as ‘killjoy’: what looks like complex social cognition might be neither social nor complex at all! In this talk, we discuss why some computational models tend to deliver such ‘killjoy’ explanations, and what this means for studying the evolution of social cognition in animals. We conclude that such models incorporate embodiment and embeddedness through various dynamical feedbacks, and so give rise to counter-intuitive dynamics that are often overlooked in other models. Through these dynamics, seemingly simple behavior rules can generate seemingly complex behavioral patterns. In principle, such ‘killjoy’ results enable us to identify ‘false positives’, i.e. those cases where we have identified (selection for) social cognition where there is none. However, there is a danger that computational models oversimplify matters, leading to false negatives, in which case we erroneously discard cognitive explanations. We propose that future research should address whether oversimplification and false negatives are a problem, because this will affect whether we are identifying false positives appropriately.
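
To see how a ‘killjoy’ explanation can arise, consider a minimal sketch, entirely my own construction and not a model from the talk: agents that each independently drift toward a shared food patch end up tightly clustered, a pattern an observer might misread as social attraction.

```python
import random

def step(positions, patch=0.0, pull=0.3, noise=1.0):
    """Each agent independently drifts toward a shared food patch, with noise.
    There is no social rule here: agents completely ignore one another."""
    return [p + pull * (patch - p) + random.gauss(0, noise) for p in positions]

random.seed(1)
agents = [random.uniform(-50.0, 50.0) for _ in range(20)]
initial_spread = max(agents) - min(agents)
for _ in range(40):
    agents = step(agents)
final_spread = max(agents) - min(agents)
# The group 'forms' without any agent attending to any other agent.
print(f"spread: {initial_spread:.1f} -> {final_spread:.1f}")
```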
