TITLE: Probabilistic approaches to belief revision

ABSTRACT: Probability theory was the first formal setting in which belief change was studied and made operational. Given a probability distribution P, Bayesian conditioning P(B|A) yields probabilistic statements about B once new information establishing the truth of A has arrived. Jeffrey's rule later generalised this to take uncertain evidence into account, and even more sophisticated and powerful approaches to probabilistic belief revision have since been put forward. This lecture surveys the major achievements of probabilistic belief change, from its beginnings to the advanced application of information-theoretical principles that can handle very general revision problems. I will also explain how this relates to more classical and iterated belief revision theories.

PREREQUISITES: Some familiarity with probabilistic reasoning may be useful, but the talk will be self-contained, starting from the definition of probability.

REFERENCES:
- J.E. Shore, Relative entropy, probabilistic inference and AI, in L.N. Kanal and J.F. Lemmer (eds.), Uncertainty in Artificial Intelligence, pp. 211-215, North-Holland, Amsterdam, 1986.
- H. Chan and A. Darwiche, On the revision of probabilistic beliefs using uncertain evidence, in Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), 2003.
- G. Kern-Isberner, Revising and updating probabilistic beliefs, in M.-A. Williams and H. Rott (eds.), Frontiers in Belief Revision, pp. 329-344, Kluwer Academic Publishers, Dordrecht, 2001.
- G. Kern-Isberner, Conditionals in Nonmonotonic Reasoning and Belief Revision, Lecture Notes in Artificial Intelligence LNAI 2087, Springer, 2001.
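The two revision mechanisms named in the abstract, Bayesian conditioning and Jeffrey's rule, can be sketched on a toy discrete distribution. This is an illustrative example only (not material from the lecture); the worlds, events, and probabilities are made up for demonstration.

```python
def condition(p, event):
    """Bayesian conditioning: P(. | A) = P(. and A) / P(A)."""
    pa = sum(pr for w, pr in p.items() if event(w))
    return {w: (pr / pa if event(w) else 0.0) for w, pr in p.items()}

def jeffrey(p, partition, new_probs):
    """Jeffrey's rule for uncertain evidence on a partition {A_i}:
    P*(w) = sum_i q_i * P(w | A_i), where q_i is the new probability of A_i."""
    p_star = {w: 0.0 for w in p}
    for event, q in zip(partition, new_probs):
        cond = condition(p, event)
        for w in p_star:
            p_star[w] += q * cond[w]
    return p_star

# Toy example: worlds are (rain, wet) pairs with an illustrative prior.
prior = {
    (True, True): 0.28, (True, False): 0.02,
    (False, True): 0.07, (False, False): 0.63,
}

# Certain evidence: we learn that it rains.
post = condition(prior, lambda w: w[0])

# Uncertain evidence: we come to believe "rain" only to degree 0.8.
post_j = jeffrey(prior, [lambda w: w[0], lambda w: not w[0]], [0.8, 0.2])
```

With certain evidence (q = 1 on A), Jeffrey's rule reduces to ordinary conditioning; in the uncertain case above, the revised marginal probability of rain is exactly the prescribed 0.8 while the conditional beliefs given rain and given no rain are preserved.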