Niels Taatgen


Cognitive Modeling

In cognitive modeling we construct computer simulations of human thought. The goal of these simulations is to better understand human thinking and to help build more rigorously testable theories of human cognition. Although there are various systems for cognitive modeling, I use the ACT-R architecture. ACT-R is a theory developed by John Anderson; it is supported by many research groups around the world and has proven useful for theory and model construction in many different domains.

[Figure: the ACT-R architecture]

My main research interests are inspired by a sense of wonder that humans are able to do almost any new task they are confronted with. To capture this wonder in scientific research, I have identified topics that I think are critical in understanding this uniquely human ability.

A first topic is learning. How can people learn knowledge representations specific to a task from a combination of instruction, exploration, examples and imitation? How do they not only retain facts and become faster, but also continue to discover new knowledge under the influence of what they already know?

A second topic is cognitive control. How do we discover the right control structure in a new task, and how do we balance control derived from the environment with control exerted mentally? Striking the right balance between these two sources is crucial for behavior that is functionally appropriate on the one hand and robust on the other.

As I said in my introduction, I use cognitive modeling to study these topics. Cognitive modeling in general has been criticized on the grounds that the modeler has too much freedom in selecting knowledge representations and parameter values, so that models can be made to fit any phenomenon. Part of my research is therefore focused on making the method of cognitive modeling more sound. I aim to make cognitive architectures more constrained by letting models learn their own task knowledge and control structure, minimizing the number of arbitrary decisions on the part of the modeler. Another excellent counter to the parameter-fitting criticism is to make real model predictions, i.e., to predict the outcome of an experiment prior to data collection. I have two successful examples of predicting experimental outcomes (Taatgen, Huss, Dickison & Anderson, under revision; Taatgen, van Rijn & Anderson, 2007), which I hope will inspire others.

My own research has focused on the question of how people are able to learn completely new skills without having a "pre-programmed" system that can already do half of what they have to learn. In that context I have developed a mechanism within ACT-R that can learn new rules. This mechanism has been applied successfully in a variety of studies, including learning language, learning to do multiple things at the same time, learning complex dynamic tasks, and offering explanations of how children learn. Although my research is explained in more detail on separate pages (click on the menu to the left to get there), my main theoretical contributions can be summarized as follows:

Production compilation A mechanism to learn new rules in ACT-R. The mechanism is very simple: it combines two rules, and possibly a memory retrieval, into a new rule. (with John Anderson)
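
As a toy illustration of the idea (a Python sketch under my own assumptions, not ACT-R's actual machinery), two rules that fire in sequence, with a declarative retrieval in between, can be collapsed into a single new rule:

```python
# Toy sketch of production compilation (not ACT-R's real implementation).
# A "production" here is just a function from state to state; compiling
# folds the intermediate retrieval into one specialized rule.

def compile_productions(rule1, rule2, retrieval=None):
    """Combine two sequentially fired rules (and an optional memory
    retrieval between them) into a single new rule."""
    def compiled(state):
        state = rule1(state)
        if retrieval is not None:
            state = retrieval(state)   # fold the retrieved fact in
        return rule2(state)
    return compiled

# Hypothetical example: answering 3 + 4 via a retrieved addition fact.
def read_problem(state):
    state["goal"] = (state["a"], state["b"])
    return state

def retrieve_fact(state):
    facts = {(3, 4): 7}                # toy declarative memory
    state["retrieved"] = facts[state["goal"]]
    return state

def report_answer(state):
    state["answer"] = state["retrieved"]
    return state

add_3_4 = compile_productions(read_problem, report_answer, retrieve_fact)
print(add_3_4({"a": 3, "b": 4})["answer"])  # -> 7
```

After compilation the retrieval step no longer happens at run time, which is one way such a mechanism produces speed-up with practice.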

The minimal control principle A principle that states that people construct their representation of a task so that the amount of internal control they have to exert over it is minimal.

A time perception module A cognitive module that can be used to estimate time intervals. Although the module is designed for ACT-R, it could easily be adapted to any other cognitive architecture. (with Hedderik van Rijn)
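
The flavor of such a module can be conveyed with a pacemaker-accumulator sketch (a toy in Python; the parameter values below are illustrative assumptions, not the published ones): each pulse is slightly longer and noisier than the last, so subjective time grows nonlinearly with real time.

```python
import random

# Illustrative pacemaker-accumulator sketch of interval timing.
# t0, a and b are assumed values for illustration only.
def count_pulses(duration_ms, t0=100.0, a=1.1, b=0.015, rng=None):
    """Count noisy pulses in an interval; each pulse interval is a bit
    longer than the last, compressing subjective time."""
    rng = rng or random.Random(0)      # fixed seed for reproducibility
    t, elapsed, pulses = t0, 0.0, 0
    while elapsed + t <= duration_ms:
        elapsed += t
        pulses += 1
        t = a * t + rng.gauss(0, b * a * t)  # next, slightly longer pulse
    return pulses

short = count_pulses(1000)   # pulses accumulated in 1 second
long = count_pulses(2000)    # pulses accumulated in 2 seconds
# Doubling the objective interval less than doubles the pulse count.
```

Estimating a duration then amounts to comparing a current pulse count against counts stored from earlier experiences.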

Threaded cognition Developed with Dario Salvucci, a mechanism that allows a cognitive architecture to do multiple things at the same time.
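
A hypothetical toy version of the idea (my own simplification, not Salvucci's and my actual model): several task threads share cognitive resources, and in each cycle a thread can only fire a step if the resource that step needs is still free.

```python
from collections import deque

# Toy sketch of threaded cognition: threads take turns, and a step only
# fires if its resource (visual, aural, motor, ...) is free this cycle.
def run_threads(threads):
    """threads: dict name -> deque of (resource, step_label) pairs."""
    queue = deque(threads)             # least recently served first
    trace = []
    while any(threads.values()):
        busy = set()                   # resources claimed this cycle
        for _ in range(len(queue)):
            name = queue.popleft()
            steps = threads[name]
            if steps and steps[0][0] not in busy:
                resource, label = steps.popleft()
                busy.add(resource)     # occupy the resource
                trace.append((name, label))
            queue.append(name)         # back of the line either way
    return trace

# Hypothetical driving-while-dialing example:
trace = run_threads({
    "drive": deque([("visual", "check-road"), ("motor", "steer")]),
    "dial":  deque([("aural", "hear-tone"), ("motor", "press-key")]),
})
```

In this run the two threads interleave freely until both need the motor resource, at which point one of them has to wait a cycle, which is the kind of bottleneck threaded cognition is meant to capture.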

Operator representation for instructions When we have to do something new, the instructions for what we have to do must somehow be represented in memory. The operator representation allows for flexibility and robustness, can be transformed into procedural skills by production compilation, and helps satisfy the minimal control principle.
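
To sketch the idea (a hypothetical toy interpreter with made-up task content): operators are declarative chunks pairing a state with an action, and a generic interpreter retrieves and applies whichever operator matches the current state, rather than relying on task-specific procedural knowledge.

```python
# Toy operator representation: instructions live in declarative memory
# as (state, action, next-state) chunks, interpreted by generic code.
operators = [
    {"state": "start",    "action": "read-item", "next": "has-item"},
    {"state": "has-item", "action": "type-item", "next": "start"},
]

def interpret(state, steps, log):
    """Run the generic interpret-retrieve-apply cycle for some steps."""
    for _ in range(steps):
        # "retrieve" the operator whose condition matches the state
        op = next(o for o in operators if o["state"] == state)
        log.append(op["action"])       # carry out the action
        state = op["next"]             # move to the operator's next state
    return state

log = []
final = interpret("start", 4, log)
# log alternates read-item and type-item; final state is "start" again
```

Because the interpreter is generic, production compilation can later specialize these retrieve-and-apply cycles into direct procedural rules, which is the link to the mechanisms above.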