Virtual Museum Guide

It is difficult to search a large data collection, such as the collection of a museum: partly because of the sheer size of the collection, but also because the information need is often unknown. In a museum it is hard to make explicit what interests you most, for instance because you cannot survey the complete collection or because you lack the necessary vocabulary and knowledge. This project aims to assist a visitor of a virtual museum - that is, an online, digitized collection of works of art - in viewing the art works that best align with his or her interests.
The focus will be on semantic network representations of the art collection, and on the interaction between a model of the museum visitor (user model) and the representation of the art collection.

Optimal Personalized Interface by Man-Imitating Agents (Optima)

Adapting Virtual Guides: Using cognitive user profiling to enhance data-driven information exchange.
The project addresses the problem of selecting which information from an extremely rich database to relay to an interested but relatively passive user. In many situations, the amount of information that can be given to a receiving party exceeds practical constraints, such as the time available for the information exchange, motivational limitations of the receiving party, or the maximum complexity that the receiving party is willing to process. A typical example of this situation is a professional, educated museum guide touring an interested party of art novices through a museum. The guide has access to much more knowledge than the party can handle, so she has to limit the information given during the tour. An experienced guide probably has a default tour through the museum, discussing the highlights of the museum and some anecdotes to keep the party interested. However, unlike in a classroom setting, there is no formal curriculum that she has to follow. Based on the perceived interests of the party, she can adapt her tour, selecting from her extensive knowledge of the works exhibited in the museum, to better align the relayed information with the interests of the party.
In this project we will apply principles from the ACT-R architecture of cognition to implement a dynamic, adaptive virtual guide for the collection of the Rijksmuseum. In related I2RP projects, databases containing semantic knowledge about the collection, and methods to interface with this knowledge, have been developed. The aim of this project is to design and implement a virtual guide that is able to adapt itself to the user. To realize this goal, both guide and user models have to be developed. One of the challenges is how to update the guide model, as the incoming information for shaping the model will be relatively sparse. An important aspect is how to use the learning capabilities available in ACT-R to have the virtual guide adapt and develop itself over time, such that a returning user can take the tour again without being presented with exactly the same information.
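ACT-R's base-level learning equation offers one concrete way to keep track of what the guide has already told a visitor. The sketch below is an illustration only (the function and artwork names are ours, not part of the project): each item's activation B_i = ln(sum_j t_j^-d) grows with recent presentations and decays over time, so the guide can prefer the item whose facts have decayed the most, which is how a returning visitor could hear something new.

```python
import math

DECAY = 0.5  # ACT-R's conventional base-level decay parameter d

def base_level_activation(presentation_times, now):
    """ACT-R base-level learning: B_i = ln(sum_j t_j^-d),
    where t_j is the time elapsed since the j-th presentation."""
    if not presentation_times:
        return float("-inf")  # never presented: minimal activation
    return math.log(sum((now - t) ** -DECAY for t in presentation_times))

def select_next_item(history, now):
    """Prefer the item whose presentations have decayed the most,
    so a returning visitor is not shown the same information again."""
    return min(history, key=lambda item: base_level_activation(history[item], now))

# Hypothetical presentation history: artwork -> times it was discussed
history = {
    "Night Watch": [10.0, 200.0],   # discussed twice on an earlier tour
    "Milkmaid": [150.0],
    "Jewish Bride": [],             # never discussed
}
```

With this history, "Jewish Bride" has the lowest activation and would be selected next; an item discussed twice recently retains higher activation than one discussed once long ago, mirroring ACT-R's recency and frequency effects.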
Another issue that this project might focus on is the form in which the information is presented to the user. For example, when describing visual art, presenting large amounts of on-screen text goes against the parallel listen-and-look advantage of a real-world guided tour. At the same time, having to wait until a spoken text has finished decreases the self-directed advantages of virtual guiding.

Eye-gaze based interest awareness

As in the Optima project described above, the amount of information that can be given to a receiving party often exceeds practical constraints, such as the time available for the information exchange, motivational limitations of the receiving party, or the maximum complexity that the receiving party is willing to process. A museum guide has access to much more knowledge than her party of art novices can handle, so she has to limit the information given during the tour; based on the perceived interests of the party, she can adapt her tour, selecting from her extensive knowledge of the works exhibited in the museum, to better align the relayed information with the interests of the party.
This project will address the question of how to use eye-tracking data in extracting the interests of a user from her eye movements. In a previous project (described by Janssen, 2006; Van Maanen, Janssen, & Van Rijn, 2006), we built a Virtual Museum Guide (VMG) that commented on the presented art works based on the eye fixations of the visitor. The VMG only gave background information on the depicted objects or persons if users fixated on them.
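The fixation-triggering idea behind the VMG can be sketched as follows. This is a minimal illustration under our own assumptions (the threshold, tuple format, and region names are hypothetical): fixations arrive as (x, y, duration) samples, regions of interest are hand-annotated bounding boxes on an art work, and a comment is warranted once accumulated dwell time on a region crosses a threshold.

```python
from collections import defaultdict

DWELL_THRESHOLD = 1.5  # seconds; illustrative trigger value

def interest_regions(fixations, rois, threshold=DWELL_THRESHOLD):
    """Accumulate fixation durations per region of interest (ROI) and
    return the regions the visitor dwelt on long enough to warrant a
    comment. A fixation is an (x, y, duration) tuple; an ROI is a
    named bounding box (x0, y0, x1, y1)."""
    dwell = defaultdict(float)
    for x, y, duration in fixations:
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += duration
    return [name for name, t in dwell.items() if t >= threshold]
```

A guide built on this sketch would then retrieve and present background information only for the returned regions, rather than for everything depicted.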
However, it is known from the eye-movement literature that people do not have full voluntary control over their eye movements (Theeuwes, 1992). Control of eye movements is usually divided into a voluntary component and a stimulus-driven component (Henderson, 2003); that is, fixations can also be caused by salient features of the art works rather than by the visitor's interests. In this project, we will adapt the existing Virtual Museum Guide to discount these involuntary eye movements.
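One way the discounting could work, sketched under our own assumptions: given a bottom-up saliency score per region (for instance from a saliency model in the spirit of Itti, Koch, & Niebur, 1998), subtract from the observed dwell time the dwell that saliency alone would predict. The residual is a rough estimate of voluntary, interest-driven looking; the function and data below are illustrative, not the project's actual method.

```python
def discount_saliency(dwell, saliency):
    """Estimate voluntary interest per region by subtracting the dwell
    time that bottom-up saliency alone would predict. `dwell` maps
    region names to observed dwell times; `saliency` maps the same
    names to saliency scores. Expected dwell is the total dwell
    redistributed in proportion to saliency; a positive residual
    suggests genuine (voluntary) interest in that region."""
    total_dwell = sum(dwell.values())
    total_saliency = sum(saliency.values())
    return {
        name: dwell.get(name, 0.0) - total_dwell * saliency[name] / total_saliency
        for name in saliency
    }
```

For example, if two equally salient regions receive 3.0 s and 1.0 s of dwell, the model attributes the surplus on the first region to voluntary interest and would have the guide comment on it.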

References:

  • Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498-504.
  • Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254-1259.
  • Janssen, C. (2006). The virtual museum tour guide: An eye-fixation based recommender system. Unpublished BSc thesis, University of Groningen, Groningen.
  • Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51(6), 599-606.
  • Van Maanen, L., Janssen, C., & Van Rijn, H. (2006). Personalization of a virtual museum guide using eye-gaze. In Proceedings of CogSci'06 (p. 2620). Vancouver, BC.

People involved in this project:

  • Chris Janssen
  • Leendert van Maanen
  • Hedderik van Rijn