References

1
J. H. Abbs and G. L. Eilenberg. Peripheral mechanisms of speech motor control. In N. J. Lass, editor, Contemporary Issues in Experimental Phonetics, pages 139--166. 1976.

2
C. Abry, L. J. Boe, and J. L. Schwartz. Plateaus, Catastrophes and the Structuring of Vowel Systems. Journal of Phonetics, 17:47--54, 1989.

3
C. Abry and M. T. Lallouache. Audibility and stability of articulatory movements: Deciphering two experiments on anticipatory rounding in French. In Proceedings of the XIIth International Congress of Phonetic Sciences, volume 1, pages 220--225, Aix-en-Provence, France, 1991.

4
M. L. Agronin. The Design and Software Formulation of a 9-String 6-Degree-of-Freedom Joystick for Telemanipulation. Master's thesis, University of Texas at Austin, 1986.

5
K. Aizawa, H. Harashima, and T. Saito. Model-based analysis-synthesis image coding (MBASIC) system for a person's face. Image Communication, 1:139--152, 1989.

6
M. Akamatsu and S. Sato. A multi-modal mouse with tactile and force feedback. Int. Journ. of Human-Computer Studies, 40:443--453, 1994.

7
P. Astheimer et al. Die Virtuelle Umgebung -- Eine neue Epoche in der Mensch-Maschine-Kommunikation. Informatik-Spektrum, 17(6), December 1994.

8
D. Baggi, editor. Readings in Computer-Generated Music. IEEE CS Press, 1992.

9
J. Baily. Music structure and human movement. In P. Howell, I. Cross, and R. West, editors, Musical Structure and Cognition, pages 237--258. Academic Press, 1985.

10
R. Balakrishnan, C. Ware, and T. Smith. Virtual hand tool with force feedback. In C. Plaisant, editor, Proc. of the Conf. on Human Factors in Computing Systems, CHI'94, Boston, 1994. ACM/SIGCHI.

11
J. Bates. The role of emotions in believable agents. Comm. of the ACM, 37(7):122--125, 1994.

12
R. J. Beaton et al. An Evaluation of Input Devices for 3-D Computer Display Workstations. In Proc. of the SPIE Vol. 761, pages 94--101, 1987.

13
R. J. Beaton and N. Weiman. User Evaluation of Cursor-Positioning Devices for 3-D Display Workstations. In Proc. of the SPIE Vol. 902, pages 53--58, 1988.

14
A. P. Benguerel and H. A. Cowan. Coarticulation of upper lip protrusion in French. Phonetica, 30:41--55, 1974.

15
A. P. Benguerel and M. K. Pichora-Fuller. Coarticulation effects in lipreading. Journal of Speech and Hearing Research, 25:600--607, 1982.

16
C. Benoît, M. T. Lallouache, T. Mohamadi, and C. Abry. A set of French visemes for visual speech synthesis. In G. Bailly and C. Benoît, editors, Talking Machines: Theories, Models and Designs, pages 485--504. Elsevier Science Publishers B. V., North-Holland, Amsterdam, 1992.

17
C. Benoît, T. Mohamadi, and S. Kandel. Effects of phonetic context on audio-visual intelligibility of French speech in noise. Journal of Speech & Hearing Research, (in press), 1994.

18
P. Bergeron and P. Lachapelle. Controlling facial expressions and body movements in the computer generated animated short 'Tony de Peltrie'. In SigGraph '85 Tutorial Notes, Advanced Computer Animation Course. 1985.

19
N. Bernstein. The Co-ordination and Regulation of Movements. Pergamon Press, London, 1967.

20
P. Bertelson and M. Radeau. Cross-modal bias and perceptual fusion with auditory visual spatial discordance. Perception and Psychophysics, 29:578--584, 1981.

21
C. A. Binnie, A. A. Montgomery, and P. L. Jackson. Auditory and visual contributions to the perception of consonants. Journal of Speech & Hearing Research, 17:619--630, 1974.

22
J.-L. Binot et al. Architecture of a Multimodal Dialogue Interface for Knowledge-Based Systems. ESPRIT II, Project No. 2474.

23
E. Bizzi. Central and peripheral mechanisms in motor control. In G. Stelmach and J. Requin, editors, Advances in psychology 1: Tutorials in motor behavior, pages 131--143. Amsterdam: North Holland, 1980.

24
E. Bizzi, A. Polit, and P. Morasso. Mechanisms underlying achievement of final head position. Journal of Neurophysiology, 39:435--444, 1976.

25
M. M. Blattner and R. Dannenberg, editors. Multimedia Interface Design (Readings). ACM Press/Addison Wesley, 1992.

26
M. M. Blattner et al. Sonic enhancement of two-dimensional graphic display. In G. Kramer, editor, Auditory Display, pages 447--470, Reading, Massachusetts, 1994. Santa Fe Institute, Addison Wesley.

27
J. Blauert. Hearing - Psychological Bases and Psychophysics, chapter Psychoacoustic binaural phenomena. Springer, Berlin New York, 1983.

28
J. Blauert, M. Bodden, and H. Lehnert. Binaural Signal Processing and Room Acoustics. IEICE Transact. Fundamentals (Japan), E75:1454--1458, 1993.

29
J. Blauert and J.-P. Col. Auditory Psychology and Perception, chapter A study of temporal effects in spatial hearing, pages 531--538. Pergamon Press, Oxford, 1992.

30
J. Blauert, H. Els, and J. Schröter. A Review of the Progress in External Ear Physics Regarding the Objective Performance Evaluation of Personal Ear Protectors. In Proc. Inter-Noise '80, pages 643--658, New York, USA, 1980. Noise-Control Foundation.

31
J. Blauert and K. Genuit. Sound-Environment Evaluation by Binaural Technology: Some Basic Considerations. Journ. Acoust. Soc. Japan, 14:139--145, 1993.

32
J. Blauert, H. Hudde, and U. Letens. Eardrum-Impedance and Middle-Ear Modeling. In Proc. Symp. Fed. Europ. Acoust. Soc., FASE, pages 125--128, Lisboa, 1987.

33
M. Bodden. Binaurale Signalverarbeitung: Modellierung der Richtungserkennung und des Cocktail-Party-Effektes (Binaural signal processing: modeling of direction finding and of the cocktail-party effect). PhD thesis, Ruhr-Universität Bochum, 1992.

34
M. Bodden. Modeling Human Sound Source Localization and the Cocktail-Party-Effect. Acta Acustica, 1:43--55, 1993.

35
M. Bodden and J. Blauert. Separation of Concurrent Speech Signals: A Cocktail-Party Processor for Speech Enhancement. In Proc. ESCA Workshop on Speech Processing in Adverse Conditions, pages 147--150, Cannes-Mandelieu, France, 1992. ESCA.

36
J. Bortz. Lehrbuch der Statistik. 1977.

37
D. W. Boston. Synthetic facial animation. British Journal of Audiology, 7:373--378, 1973.

38
H. H. Bothe, G. Lindner, and F. Rieger. The development of a computer animation program for the teaching of lipreading. In Proc. of the 1st TIDE Conference, Bruxelles, pages 45--49. 1993.

39
R. J. Brachman and J. G. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9:171--216, 1985.

40
L. D. Braida. Crossmodal integration in the identification of consonant segments. Quarterly Journal of Experimental Psychology, 43:647--678, 1991.

41
C. Bregler, H. Hild, S. Manke, and A. Waibel. Improving connected letter recognition by lipreading. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 557--560, Minneapolis, MN, 1993.

42
S. A. Brewster, P. C. Wright, and A. D. N. Edwards. A detailed investigation into the effectiveness of earcons. In G. Kramer, editor, Auditory Display, pages 471--498, Reading, Massachusetts, 1994. Santa Fe Institute, Addison Wesley.

43
D. E. Broadbent. The magic number seven after fifteen years. In A. Kennedy and A. Wilkes, editors, Studies in long term memory, pages 3--18. London: Wiley, 1975.

44
E. R. Brocklehurst. The NPL Electronic Paper Project. Technical Report DITC 133/88, National Physical Laboratory (UK), 1994.

45
N. M. Brooke. Development of a video speech synthesizer. In Proceedings of the British Institute of Acoustics, Autumn Conference, pages 41--44, 1979.

46
N. M. Brooke. Computer graphics synthesis of talking faces. In G. Bailly and C. Benoît, editors, Talking Machines: Theories, Models and Designs, pages 505--522. Elsevier Science Publishers B. V., North-Holland, Amsterdam, 1992.

47
N. M. Brooke and E. D. Petajan. Seeing speech: Investigation into the synthesis and recognition of visible speech movement using automatic image processing and computer graphics. In Proceedings of the International Conference on Speech Input and Output, pages 104--109, 1986.

48
F. P. Brooks, Jr. et al. Project GROPE - Haptic Displays for Scientific Visualization. ACM Computer Graphics, 24(4):177--185, Aug. 1990.

49
H. W. Buckingham and H. Hollien. A neural model for language and speech. Journal of Phonetics, 6:283, 1993.

50
J. Burgstaller, J. Grollmann, and F. Kapsner. On the Software Structure of User Interface Management Systems. In W. Hansmann et al., editors, EUROGRAPHICS '89, pages 75--86. Elsevier Science Publishers B. V., 1989.

51
T. W. Calvert et al. The evolution of an interface for choreographers. In Proc. INTERCHI, 1993.

52
H. W. Campbell. Phoneme recognition by ear and by eye: a distinctive feature analysis. PhD thesis, Katholieke Universiteit te Nijmegen, 1974.

53
R. Campbell. Tracing lip movements: making speech visible. Visible Language, 8(1):33--57, 1988.

54
R. Campbell and B. Dodd. Hearing by eye. Quarterly Journal of Experimental Psychology, 32:509--515, 1980.

55
A. Camurri et al. Dance and Movement Notation. In P. Morasso and V. Tagliasco, editors, Human Movement Understanding. North-Holland, 1986.

56
A. Camurri et al. Music and Multimedia Knowledge Representation and Reasoning: The HARP System. Computer Music Journal (to appear), 1995.

57
A. Camurri and M. Leman. AI-based Music Signal Applications. Techn. rep., IPEM - Univ. of Gent and DIST - Univ. of Genova, 1994.

58
S. Card, W. K. English, and B. J. Burr. Evaluations of Mouse, Rate-controlled Isometric Joystick, Step Keys, and Text Keys for Text Selection on a CRT. Ergonomics, 21(8):601--613, Aug. 1978.

59
S. K. Card, T. P. Moran, and A. Newell. The Keystroke-Level Model for User Performance Time with Interactive Systems. Communications of the ACM, 23(7):396--410, 1980.

60
S. K. Card, T. P. Moran, and A. Newell. The Psychology of Human-Computer Interaction. Lawrence Erlbaum Ass., Publishers, 1983.

61
E. Casella, F. Lavagetto, and R. Miani. A time-delay neural network for speech to lips movements conversion. In Proc. Int.Conf. on Artificial Neural Networks, Sorrento, Italy, pages 26--27. 1994.

62
M. A. Cathiard. Identification visuelle des voyelles et des consonnes dans le jeu de la protrusion-rétraction des lèvres en français. Technical report, Mémoire de Maîtrise, Département de Psychologie, Grenoble, France, 1988.

63
M. A. Cathiard. La perception visuelle de la parole : aperçu des connaissances. Bulletin de l'Institut de Phonétique de Grenoble, 17/18:109--193, 1988/1989.

64
M. A. Cathiard, G. Tiberghien, A. Tseva, M. T. Lallouache, and P. Escudier. Visual perception of anticipatory rounding during pauses: A cross-language study. In Proceedings of the XIIth International Congress of Phonetic Sciences, volume 4, pages 50--53, Aix-en-Provence, France, 1991.

65
M. Chafcouloff and A. Di Cristo. Les indices acoustiques et perceptuels des consonnes constrictives du français, application à la synthèse. In Actes des 9èmes Journées d'Étude sur la Parole, Groupe Communication Parlée du GALF, pages 69--81, Lannion, France, 1978.

66
A. Chapanis and R. M. Halsey. Absolute Judgements of Spectrum Colors. Journ. of Psychology, pages 99--103, 1956.

67
H. J. Charwat. Lexikon der Mensch-Maschine-Kommunikation. Oldenbourg, 1992.

68
M. Chen, S. J. Mountford, and A. Sellen. A Study in Interactive 3-D Rotation Using 2-D Control Devices. ACM Computer Graphics, 22(4):121--129, Aug. 1988.

69
N. Chomsky and M. Halle. The Sound Pattern of English. Harper and Row, New York, 1968.

70
M. M. Cohen and D. W. Massaro. Synthesis of visible speech. Behaviour Research Methods, Instruments & Computers, 22(2):260--263, 1990.

71
M. M. Cohen and D. W. Massaro. Modelling coarticulation in synthetic visual speech. In N. Magnenat-Thalmann and D. Thalmann, editors, Proceedings of Computer Animation '93, Geneva, Switzerland, 1993.

72
J.-P. Col. Localisation auditive d'un signal et aspects temporels de l'audition spatiale (Auditory localization of a signal and temporal aspects of spatial hearing). PhD thesis, Marseille, 1990.

73
J. Cotton. Normal `visual-hearing'. Science, 82:582--593, 1935.

74
H. D. Crane and D. Rtischev. Pen and voice unite: adding pen and voice input to today's user interfaces opens the door for more natural communication with your computer. Byte, 18:98--102, Oct. 1993.

75
J. L. Crowley and Y. Demazeau. Principles and techniques for sensor data fusion. Signal Processing, 32:5--27, 1993.

76
S. D. Visual-neural correlate of speechreading ability in normal-hearing adults: reliability. Journal of Speech and Hearing Research, 25:521--527, 1982.

77
R. B. Dannenberg and A. Camurri. Computer-Generated Music and Multimedia Computing. In IEEE ICMCS Intl. Conf. on Multimedia Computing and Systems, Proc. ICMCS 94, pages 86--88. IEEE Computer Society Press, 1994.

78
R. Davis, H. Shrobe, and P. Szolovits. What is a Knowledge Representation? AI Magazine, 14(1), 1993.

79
B. de Graf. Performance facial animation notes. In Course Notes on State of the Art in Facial Animation, volume 26, pages 10--20, Dallas, 1990. SigGraph '90.

80
G. De Poli, A. Piccialli, and C. Roads, editors. Representations of Musical Signals. MIT Press, 1991.

81
W. N. Dember and J. S. Warm. Psychology of Perception - 2nd Edition. Holt, Rinehart & Winston, New York, 1979.

82
L. Dikmans. Future intelligent telephone terminals: A method for user interface evaluation early in the design process. Technical report, IPO/Philips rapport '94, Eindhoven: Institute for Perception Research, 1994.

83
N. F. Dixon and L. Spitz. The detection of audiovisual desynchrony. Perception, (9):719--721, 1980.

84
B. Dodd and R. Campbell, editors. Hearing by Eye: The Psychology of Lip-reading, Hillsdale, New Jersey, 1987. Lawrence Erlbaum Associates.

85
E. H. Dooijes. Analysis of Handwriting Movements. PhD thesis, University of Amsterdam, 1983.

86
R. A. Earnshaw, M. A. Gigante, and H. Jones, editors. Virtual Reality Systems. Academic Press, 1993.

87
B. Eberman and B. An. EXOS Research on Force Reflecting Controllers. SPIE Telemanipulator Technology, 1833:9--19, 1992.

88
P. Ekman and W. V. Friesen. Facial Action Coding System. Consulting Psychologists Press, Stanford University, Palo Alto, 1977.

89
S. R. Ellis. Nature and Origins of Virtual Environments: A Bibliographical Essay. Computing Systems in Engineering, 2(4):321--347, 1991.

90
H. Els. Ein Meßsystem für die akustische Modelltechnik (A measuring system for the technique of acoustic modeling). PhD thesis, Ruhr-Universität Bochum, 1986.

91
H. Els and J. Blauert. Measuring Techniques for Acoustic Models - Upgraded. In Proc. Internoise'85, Schriftenr. Bundesanst. Arbeitsschutz, Vol. Ib 39/II, pages 1359--1362. Bundesanst. Arbeitsschutz, 1985.

92
H. Els and J. Blauert. A Measuring System for Acoustic Scale Models. In 12th Int. Congr. Acoust., Proc. of the Vancouver Symp. Acoustics & Theatre Planning for the Performing Arts, pages 65--70, 1986.

93
N. P. Erber. Interaction of audition and vision in the recognition of oral speech stimuli. Journal of Speech & Hearing Research, 12:423--425, 1969.

94
N. P. Erber. Auditory, visual and auditory-visual recognition of consonants by children with normal and impaired hearing. Journal of Speech and Hearing Research, 15:413--422, 1972.

95
N. P. Erber. Auditory-visual perception of speech. Journal of Speech & Hearing Disorders, 40:481--492, 1975.

96
N. P. Erber and C. L. de Filippo. Voice/mouth synthesis and tactual/visual perception of /pa, ba, ma/. Journal of the Acoustical Society of America, 64:1015--1019, 1978.

97
C. W. Eriksen and H. W. Hake. Multidimensional Stimulus Differences and Accuracy of Discrimination. Psychological Review, 67:279--300, 1955.

98
P. Escudier, C. Benoît, and M. T. Lallouache. Identification visuelle de stimuli associés à l'opposition /i/ - /y/ : étude statique. In Proceedings of the First French Conference on Acoustics, pages 541--544, Lyon, France, 1990.

99
C. Faure. Pen and voice interface for incremental design of graphics documents. In Proceedings of the IEE Colloquium on Handwriting and Pen-based input, Digest Number 1994/065, pages 9/1--9/3. London: The Institution of Electrical Engineers, March 1994.

100
W. Felger. How interactive visualization can benefit from multidimensional input devices. SPIE, 1668:15--24, 1992.

101
I. A. Ferguson. TouringMachines: An Architecture for Dynamic, Rational, Mobile Agents. PhD thesis, University of Cambridge, 1992.

102
P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6):381--391, June 1954.

103
J. D. Foley. Neuartige Schnittstellen zwischen Mensch und Computer. In Spektrum der Wissenschaft, number 12, pages 98--106. Dec. 1987.

104
J. W. Folkins and J. H. Abbs. Lip and jaw motor control during speech: motor reorganization response to external interference. Journal of Speech and Hearing Research, 18:207--220, 1975.

105
C. Fowler, P. Rubin, R. Remez, and M. T. Turvey. Implications for speech production of a general theory of action. In Language Production, Speech and Talk, volume 1, pages 373--420. Academic Press, London, 1980.

106
C. A. Fowler. Coarticulation and theories of extrinsic timing. Journal of Phonetics, 1980.

107
C. A. Fowler. Current perspective on language and speech production: A critical overview. In Speech Science, pages 193--278. Taylor and Francis, London, 1985.

108
C. A. Fowler. An event approach to the study of speech perception from a direct-realist perspective. Journal of Phonetics, 14:3--28, 1986.

109
D. M. Frohlich. The Design Space of Interfaces. Technical report, Hewlett-Packard Company, 1991.

110
O. Fujimura. Elementary gestures and temporal organization: what does an articulatory constraint mean? In The Cognitive Representation of Speech, pages 101--110. North Holland, Amsterdam, 1981.

111
Y. Fukui and M. Shimojo. Edge Tracing of Virtual Shape Using Input Device with Force Feedback. Systems and Computers in Japan, 23(5):94--104, 1992.

112
W. Gaik. Untersuchungen zur binauralen Verarbeitung kopfbezogener Signale (Investigations into binaural signal processing of head-related signals). PhD thesis, Ruhr-Universität Bochum, 1990.

113
W. Gaik. Combined Evaluation of Interaural Time and Intensity Differences: Psychoacoustic Results and Computer Modeling. Journ. Acoust. Soc. Am., 94:98--110, 1993.

114
W. Gaik and S. Wolf. Multiple Images: Psychological Data and Model Predictions. In J. Duifhuis, J. W. Horst, and H. P. Wit, editors, Basic Issues of Hearing, pages 386--393, London, 1988. Academic Press.

115
P. Gårdenfors. Semantics, Conceptual Spaces and the Dimensions of Music. In Rantala, Rowell, and Tarasti, editors, Essays on the Philosophy of Music, volume 43 of Acta Philosophica Fennica, pages 9--27, Helsinki, 1988.

116
W. R. Garner. An Informational Analysis of Absolute Judgments of Loudness. Journ. of Experimental Psychology, 46:373--380, 1953.

117
F. Garnier. Don Quichotte. Computer-generated movie, 2:40, 1991.

118
W. W. Gaver. Using and creating auditory icons. In G. Kramer, editor, Auditory Display, pages 417--446, Reading, Massachusetts, 1994. Santa Fe Institute, Addison Wesley.

119
T. H. Gay. Temporal and spatial properties of articulatory movements: evidence for minimum spreading across and maximum effects within syllable boundaries. In The cognitive representation of speech, pages 133--138. North Holland Amsterdam, 1981.

120
G. Geiser. Mensch-Maschine Kommunikation. Oldenbourg, 1990.

121
J. J. Gibson. The senses considered as perceptual systems. Houghton Mifflin, Boston, 1966.

122
J. J. Gibson. The ecological approach to visual perception. Houghton Mifflin, Boston, 1979.

123
M. A. Gigante. Virtual Reality: Definitions, History and Applications. In R. A. Earnshaw, M. A. Gigante, and H. Jones, editors, Virtual Reality Systems, chapter 1. Academic Press, 1993.

124
J. Glasgow and D. Papadias. Computational Imagery. Cognitive Science, 16:355--394, 1992.

125
D. Goldberg and C. Richardson. Touch-Typing with a Stylus. In InterCHI '93 Conference Proceedings, pages 80--87. Amsterdam, 1993.

126
A. J. Goldschen. Continuous Automatic Speech Recognition by Lip Reading. PhD thesis, School of Engineering and Applied Science of the George Washington University, 1993.

127
M. Good. Participatory Design of A Portable Torque-Feedback Device. In P. Bauersfeld, J. Bennett, and G. Lynch, editors, Proc. of the Conf. on Human Factors in Computing Systems, CHI'92, pages 439--446. ACM/SIGCHI, 1992.

128
K. W. Grant and L. D. Braida. Evaluating the articulation index for auditory-visual input. Journal of the Acoustical Society of America, 89:2952--2960, 1991.

129
K. P. Green, E. B. Stevens, P. K. Kuhl, and A. M. Meltzoff. Exploring the basis of the McGurk effect: Can perceivers combine information from a female face and a male voice? Journal of the Acoustical Society of America, 87:125, 1990.

130
L. Grimby, J. Hannerz, and B. Hedman. Contraction time and voluntary discharge properties of individual short toe extensors in man. Journal of Physiology, 289:191--201, 1979.

131
R. Gruber. Handsteuersystem für die Bewegungsführung (Hand-operated control system for motion guidance). PhD thesis, Universität Karlsruhe, 1992.

132
T. Guiard-Marigny. Animation en temps réel d'un modèle paramétrisé de lèvres. PhD thesis, Institut National Polytechnique de Grenoble, 1992.

133
T. Guiard-Marigny, A. Adjoudani, and C. Benoît. A 3-D model of the lips for visual speech synthesis. In Proceedings of the 2nd ESCA-IEEE Workshop on Speech Synthesis, pages 49--52, New Paltz, NY, 1994.

134
R. Hammarberg. The metaphysics of coarticulation. Journal of Phonetics, 4:353--363, 1976.

135
P. H. Hartline. Multisensory convergence. In G. Adelman, editor, Encyclopedia of Neuroscience, volume 2, pages 706--709. Birkhauser, 1987.

136
Y. Hatwell. Toucher l'espace. La main et la perception tactile de l'espace. Technical report, Universitaires de Lille, 1986.

137
Y. Hatwell. Transferts intermodaux et integration intermodale. 1993.

138
C. Henton and P. Litwinowicz. Saying and seeing it with feeling: techniques for synthesizing visible, emotional speech. In Proceedings of the 2nd ESCA-IEEE Workshop on Speech Synthesis, pages 73--76, New Paltz, NY, 1994.

139
D. R. Hill, A. Pearce, and B. Wyvill. Animating speech: an automated approach using speech synthesised by rules. The Visual Computer, 3:176--186, 1988.

140
D. R. Hill, A. Pearce, and B. Wyvill. Animating speech: an automated approach using speech synthesised by rules. The Visual Computer, 3:277--289, 1989.

141
W. Hill et al. Architectural Qualities and Principles for Multimodal and Multimedia Interfaces, chapter 17, pages 311--318. ACM Press, 1992.

142
G. Hirzinger. Multisensory Shared Autonomy and Tele-Sensor-Programming -- Key Issues in Space Robotics. Journ. on Robotics and Autonomous Systems, (11):141--162, 1993.

143
X. D. Huang, Y. Ariki, and M. A. Jack. Hidden Markov Models for Speech Recognition. Edinburgh University Press, 1990.

144
H. Hudde. Messung der Trommelfellimpedanz des menschlichen Ohres bis 19 kHz (Measurement of eardrum impedance up to 19 kHz). PhD thesis, Ruhr-Universität Bochum, 1980.

145
H. Hudde. Estimation of the Area Function of Human Ear Canals by Sound-Pressure Measurements. Journ. Acoust. Soc. Am., 73:24--31, 1983.

146
H. Hudde. Measurement of Eardrum Impedance of Human Ears. Journ. Acoust. Soc. Am., 73:242--247, 1983.

147
H. Hudde. Measurement-Related Modeling of Normal and Reconstructed Middle Ears. Acta Acustica, submitted, 1994.

148
H. Iwata. Artificial Reality with Force-feedback: Development of Desktop Virtual Space with Compact Master Manipulator. Computer Graphics, 24(4):165--170, 1990.

149
R. Jakobson, G. Fant, and M. Halle. Preliminaries to Speech Analysis. The distinctive features and their correlates. MIT Press, Cambridge MA, 1951.

150
B. M. Jau. Anthropomorphic Exoskeleton dual arm/hand telerobot controller. pages 715--718, 1988.

151
P. N. Johnson-Laird. Mental models. Cambridge University Press, Cambridge, 1983.

152
P. Kabbash, W. Buxton, and A. Sellen. Two-Handed Input in a Compound Task. Human Factors in Computing Systems, pages 417--423, 1994.

153
E. R. Kandel and J. H. Schwartz. Principles of Neural Science. Elsevier North Holland, 1993. (The first edition appeared in 1981; a later edition with updates on neurotransmitters and molecular biology is probably dated 1993.)

154
J. A. S. Kelso, D. Southard, and D. Goodman. On the nature of human interlimb coordination. Science, 203:1029--1031, 1979.

155
R. D. Kent and F. D. Minifie. Coarticulation in recent speech production models. Journal of Phonetics, 5:115--133, 1977.

156
D. Kieras and P. G. Polson. An Approach to the Formal Analysis of User Complexity. Int. Journ. of Man-Machine Studies, 22:365--394, 1985.

157
D. H. Klatt. Speech perception: A model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7:279--312, 1979.

158
D. H. Klatt. Software for a cascade/parallel formant synthesizer. Journal of the Acoustical Society of America, 67:971--995, 1980.

159
J. Kleiser. Sextone for president. Computer-generated movie, 0:28.

160
J. Kleiser. A fast, efficient, accurate way to represent the human face. Course Notes on State of the Art in Facial Animation, SigGraph '89, 22:35--40, 1989.

161
D. B. Koons, C. J. Sparrell, and K. R. Thorisson. Integrating Simultaneous Input from Speech, Gaze, and Hand Gestures. In M. Maybury, editor, Intelligent Multimedia Interfaces, pages 243--261. Menlo Park: AAAI/MIT Press, 1993.

162
G. Kramer. An introduction to auditory display. In G. Kramer, editor, Auditory Display, pages 1--77, Reading, Massachusetts, 1994. Santa Fe Institute, Addison Wesley.

163
P. N. Kugler, J. A. S. Kelso, and M. T. Turvey. On the concept of coordinative structures as dissipative structures: I. Theoretical lines of convergence. In Tutorials in Motor Behavior, pages 32--47. North Holland, Amsterdam, 1980.

164
P. N. Kugler, J. A. S. Kelso, and M. T. Turvey. On control and coordination of naturally developing systems. In J. A. S. Kelso and J. E. Clark, editors, The development of movement control and coordination, pages 5--78. New York: Wiley, 1982.

165
P. K. Kuhl and A. N. Meltzoff. The bimodal perception of speech in infancy. Science, 218:1138--1141, 1982.

166
T. Kurihara and K. Arai. A transformation method for modeling and animation of the human face from photographs. In N. Magnenat-Thalmann and D. Thalmann, editors, Computer Animation '91, pages 45--58. Springer-Verlag, 1991.

167
P. Ladefoged. Phonetic prerequisites for a distinctive feature theory. In Papers in Linguistics and Phonetics to the Memory of Pierre Delattre, pages 273--285. Mouton, The Hague, 1972.

168
P. Ladefoged. A course in phonetics. Harcourt Brace Jovanovich Inc., NY, 1975.

169
P. Ladefoged. What are linguistic sounds made of? Language, 56:485--502, 1980.

170
J. Laird, A. Newell, and P. Rosenbloom. Soar: an architecture for general intelligence. Artificial Intelligence, 33:1--64, 1987.

171
M. T. Lallouache. Un poste "visage-parole" couleur. Acquisition et traitement automatique des contours des levres. PhD thesis, Institut National Polytechnique de Grenoble, 1991.

172
D. R. J. Laming. Information theory of choice-reaction times. London: Academic Press, 1968.

173
K. S. Lashley. The problem of serial order in behaviour. In L. A. Jeffress, editor, Cerebral Mechanisms in Behaviour, pages 112--136. 1951.

174
H. G. Lauffs. Bediengeräte zur 3-D-Bewegungsführung (Control devices for 3-D motion guidance). PhD thesis, RWTH Aachen, 1991.

175
B. Laurel, R. Strickland, and T. Tow. Placeholder: Landscape and Narrative in Virtual Environments. Computer Graphics, 28(2):118--126, 1994.

176
D. Lavagetto, M. Arzarello, and M. Caranzano. Lipreadable frame animation driven by speech parameters. In IEEE Int. Symposium on Speech, Image Processing and Neural Networks, pages 14--16, Hong Kong, April 1994.

177
B. Le Goff, T. Guiard-Marigny, M. Cohen, and C. Benoît. Real-time analysis-synthesis and intelligibility of talking faces. In Proceedings of the 2nd ESCA-IEEE Workshop on Speech Synthesis, pages 53--56, New Paltz, NY, 1994.

178
M. Lee, A. Freed, and D. Wessel. Real time neural network processing of gestural and acoustic signals. In Proc. Intl. Computer Music Conference, Montreal, Canada, 1991.

179
H. Lehnert. Binaurale Raumsimulation: Ein Computermodell zur Erzeugung virtueller auditiver Umgebungen (Binaural room simulation: A computer model for generation of virtual auditory environments). PhD thesis, Ruhr-Universität Bochum, 1992.

180
H. Lehnert and J. Blauert. A Concept for Binaural Room Simulation. In Proc. IEEE-ASSP Workshop on Application of Signal Processing to Audio & Acoustics, USA-New Paltz NY, 1989.

181
H. Lehnert and J. Blauert. Principles of Binaural Room Simulation. Journ. Appl. Acoust., 36:259--291, 1992.

182
M. Leman. Introduction to auditory models in music research. Journal of New Music Research, 23(1), 1994.

183
M. Leman. Schema-Based Tone Center Recognition of Musical Signals. Journ. of New Music Research, 23(2):169--203, 1994.

184
U. Letens. Über die Interpretation von Impedanzmessungen im Gehörgang anhand von Mittelohr-Modellen (Interpretation of impedance measurements in the ear canal in terms of middle-ear models). PhD thesis, Ruhr-Universität Bochum, 1988.

185
J. S. Lew. Optimal Accelerometer Layouts for Data Recovery in Signature Verification. IBM Journal of Research & Development, 24(4):496--511, 1980.

186
J. P. Lewis and F. I. Parke. Automated lip-synch and speech synthesis for character animation. In Proceedings of CHI '87 and Graphics Interface '87, pages 143--147, Toronto, Canada, 1987.

187
A. Liberman and I. Mattingly. The motor theory of speech perception revisited. Cognition, 21:1--36, 1985.

188
I.-S. Lin, F. Wallner, and R. Dillmann. An Advanced Telerobotic Control System for a Mobile Robot with Multisensor Feedback. In Proc. of the 4th Intl. Conf. on Intelligent Autonomous Systems (to appear), 1995.

189
W. Lindemann. Die Erweiterung eines Kreuzkorrelationsmodells der binauralen Signalverarbeitung durch kontralaterale Inhibitionsmechanismen (Extension of a cross-correlation model of binaural signal processing by means of contralateral inhibition mechanisms). PhD thesis, Ruhr-Universität Bochum, 1985.

190
W. Lindemann. Extension of a Binaural Cross-Correlation Model by Means of Contralateral Inhibition. I. Simulation of Lateralization of Stationary Signals. Journ. Acoust. Soc. Am., 80:1608--1622, 1986.

191
W. Lindemann. Extension of a Binaural Cross-Correlation Model by Means of Contralateral Inhibition. II. The Law of the First Wave Front. Journ. Acoust. Soc. Am., 80:1623--1630, 1986.

192
P. H. Lindsay and D. A. Norman. Human Information Processing. Academic Press, New York, 1977.

193
J. F. Lubker. Representation and context sensitivity. In The Cognitive Representation of Speech, pages 127--131. North Holland Amsterdam, 1981.

194
F. J. Maarse, H. J. J. Janssen, and F. Dexel. A Special Pen for an XY Tablet. In F. J. Maarse, L. J. M. Mulder, W. Sjouw, and A. Akkerman, editors, Computers in Psychology: Methods, Instrumentation, and Psychodiagnostics, pages 133--139. Amsterdam: Swets and Zeitlinger, 1988.

195
L. MacDonald and J. Vince, editors. Interacting with Virtual Environments. Wiley Professional Computing, 1994.

196
T. Machover and J. Chung. Hyperinstruments: Musically intelligent and interactive performance and creativity systems. In Proc. Intl. Computer Music Conference, Columbus, Ohio, USA, 1989.

197
I. S. MacKenzie and W. Buxton. Extending Fitts' Law to Two-Dimensional Tasks. In P. Bauersfeld, J. Bennett, and G. Lynch, editors, Human Factors in Computing Systems, CHI'92 Conf. Proc., pages 219--226. ACM/SIGCHI, ACM Press, May 1992.

198
I. S. MacKenzie, A. Sellen, and W. Buxton. A comparison of input devices in elemental pointing and dragging tasks. In S. P. Robertson, G. M. Olson, and J. S. Olson, editors, Proc. of the ACM CHI'91 Conf. on Human Factors, pages 161--166. ACM-Press, 1991.

199
A. MacLeod and Q. Summerfield. Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology, 21:131--141, 1987.

200
P. F. MacNeilage. Motor control of serial ordering of speech. Psychol. Review, 77:182--196, 1970.

201
P. Maes, editor. Designing autonomous agents: Theory and practice from biology to engineering and back, Cambridge, MA, 1990. The MIT Press/Bradford Books.

202
N. Magnenat-Thalmann, E. Primeau, and D. Thalmann. Abstract muscle action procedures for human face animation. Visual Computer, 3:290--297, 1988.

203
N. Magnenat-Thalmann and D. Thalmann. The direction of synthetic actors in the film Rendez-vous à Montréal. IEEE Computer Graphics & Applications, 7(12):9--19, 1987.

204
E. Magno-Caldognetto et al. Automatic analysis of lips and jaw kinematics in VCV sequences. In Proc. Eurospeech '92, pages 453--456. 1992.

205
E. Magno-Caldognetto et al. Liprounding coarticulation in Italian. In Proc. Eurospeech '92, pages 61--64. 1992.

206
E. Magno-Caldognetto et al. Articulatory dynamics of lips in Italian /'vpv/ and /'vbv/ sequences. In Proc. Eurospeech '93. 1993.

207
C. Marsden, P. Merton, and H. Morton. Latency measurements compatible with a cortical pathway for the stretch reflex in man. Journal of Physiology, 230:58--59, 1973.

208
D. W. Massaro. Categorical partition: A fuzzy-logical model of categorization behaviour. 1987.

209
D. W. Massaro. Multiple book review of `speech perception by ear and eye'. Behavioral and Brain Sciences, 12:741--794, 1989.

210
D. W. Massaro. Connexionist models of speech perception. In Proceedings of the XIIth International Congress of Phonetic Sciences, volume 2, pages 94--97, Aix-en-Provence, France, 1991.

211
D. W. Massaro and M. M. Cohen. Evaluation and integration of visual and auditory information in speech perception. Journal of Experimental Psychology: Human Perception & Performance, 9:753--771, 1983.

212
D. W. Massaro and M. M. Cohen. Perception of synthesized audible and visible speech. Psychological Science, 1:55--63, 1990.

213
D. W. Massaro and D. Friedman. Models of integration given multiple sources of information. Psychological Review, 97:225--252, 1990.

214
T. H. Massie and J. K. Salisbury. The PHANToM Haptic Interface: a Device for Probing Virtual Objects. In Proc. of the ASME Winter Annual Meeting, Symp. on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Chicago, 1994.

215
K. Matsuoka, K. Masuda, and K. Kurosu. Speechreading trainer for hearing-impaired children. In J. Patrick and K. Duncan, editors, Training, Human Decision Making and Control. Elsevier Science, 1988.

216
M. Maybury, editor. Intelligent Multimedia Interfaces. Menlo Park: AAAI/MIT Press, 1993.

217
N. Mayer. XWebster: Webster's 7th Collegiate Dictionary, Copyright © 1963 by Merriam-Webster, Inc. On-line access via Internet, 1963.

218
S. McAdams and E. Bigand, editors. Thinking in Sound - The Cognitive Psychology of Human Audition. Clarendon Press, Oxford, 1993.

219
N. P. McAngus Todd. The Auditory ``Primal Sketch'': A Multiscale Model of Rhythmic Grouping. Journ. of New Music Research, 23(1), 1994.

220
H. McGurk and J. MacDonald. Hearing lips and seeing voices. Nature, 264:746--748, 1976.

221
M. L. Meeks and T. T. Kuklinski. Measurement of Dynamic Digitizer Performance. In R. Plamondon and G. G. Leedham, editors, Computer Processing of Handwriting, pages 89--110. Singapore: World Scientific, 1990.

222
M. A. Meredith and B. E. Stein. Interactions among converging sensory inputs in the superior colliculus. Science, 221:389--391, 1983.

223
G. A. Miller. The magical number seven, plus or minus two: Some limits on our capacity to process information. Psychological Review, 63:81--97, 1956.

224
G. S. P. Miller. The audition. Computer-generated movie, 3:10, 1993.

225
A. G. Mlcoch and D. J. Noll. Speech production models as related to the concept of apraxia of speech. In Speech and Language. Advances in Basic Research and Practice, volume 4, pages 201--238. Academic Press NY, 1980.

226
T. Mohamadi. Synthèse à partir du texte de visages parlants : réalisation d'un prototype et mesures d'intelligibilité bimodale. PhD thesis, Institut National Polytechnique, Grenoble, France, 1993.

227
A. A. Montgomery and G. Soo Hoo. ANIMAT: A set of programs to generate, edit and display sequences of vector-based images. Behavioral Research Methods and Instrumentation, 14:39--40, 1982.

228
P. Morasso and V. Sanguineti. Self-Organizing Topographic Maps and Motor Learning. In Cliff et al., editors, From Animals to Animats 3, pages 214--220. MIT Press, 1994.

229
S. Morishima, K. Aizawa, and H. Harashima. Model-based facial image coding controlled by the speech parameter. In Proc. PCS-88, Turin, number 4. 1988.

230
S. Morishima, K. Aizawa, and H. Harashima. A real-time facial action image synthesis driven by speech and text. In Visual Communication and Image Processing '90, the Society of Photo-optical Instrumentation Engineers, volume 1360, pages 1151--1158, 1990.

231
S. Morishima and H. Harashima. A media conversion from speech to facial image for intelligent man-machine interface. IEEE Journal on Sel. Areas in Comm., 9(4):594--600, 1991.

232
H. Morita, S. Hashimoto, and S. Ohteru. A computer music system that follows a human conductor. IEEE Computer, 24(7):44--53, 1991.

233
P. Morrel-Samuels. Clarifying the distinction between lexical and gestural commands. Intl. Journ. of Man-Machine Studies, 32:581--590, 1990.

234
A. Mulder. Virtual Musical Instruments: Accessing the Sound Synthesis Universe as a Performer. In Proc. First Brazilian Symposium on Computer Music, 14th Annual Congress of the Brazilian Computer Society, Caxambu, Minas Gerais, Brazil, 1994.

235
A. Murata. An Experimental Evaluation of Mouse, Joystick, Joycard, Lightpen, Trackball and Touchscreen for Pointing - Basic Study on Human Interface Design. In H.-J. Bullinger, editor, Human Aspects in Computing: Design and Use of Interactive Systems and Work with Terminals, 1991.

236
L. E. Murphy. Absolute Judgements of Duration. Journ. of Experimental Psychology, 71:260--263, 1966.

237
E. D. Mynatt. Auditory representations of graphical user interfaces. In G. Kramer, editor, Auditory Display, pages 533--553, Reading, Massachusetts, 1994. Santa Fe Institute, Addison Wesley.

238
M. Nahas, H. Huitric, and M. Saintourens. Animation of a B-Spline figure. The Visual Computer, (3):272--276, 1988.

239
N. H. Narayanan, editor. Special issue on Computational Imagery, volume 9 of Computational Intelligence. Blackwell Publ., 1993.

240
N. P. Nataraja and K. C. Ravishankar. Visual recognition of sounds in Kannada. Hearing Aid Journal, pages 13--16, 1983.

241
K. K. Neely. Effect of visual factors on the intelligibility of speech. Journal of the Acoustical Society of America, 28:1275--1277, 1956.

242
N. Negroponte. From Bezel to Proscenium. In Proceedings of SigGraph '89, 1989.

243
L. Nigay and J. Coutaz. A design space for multimodal systems - concurrent processing and data fusion. In INTERCHI '93 - Conference on Human Factors in Computing Systems, Amsterdam, pages 172--178. Addison Wesley, 1993.

244
Nishida. Speech recognition enhancement by lip information. ACM SIGCHI bulletin, 17:198--204, 1986.

245
S. G. Nooteboom. The target theory of speech production. In IPO Annual Progress Report, volume 5, pages 51--55. 1970.

246
D. A. Norman. Cognitive Engineering. In D. A. Norman and S. W. Draper, editors, User Centered System Design, pages 31--61. Lawrence Erlbaum Association, 1986.

247
C. H. Null and J. P. Jenkins, editors. NASA Virtual Environment Research, Applications, and Technology. A White Paper, 1993.

248
S. Oehman. Numerical models of coarticulation. J.A.S.A., 41:310--320, 1967.

249
A. O'Leary and G. Rhodes. Cross Modal Effects on Visual and Auditory Object Perception. Perception and Psychophysics, 35:565--569, 1984.

250
J. R. Olson and G. Olson. The growth of cognitive modeling in human-computer interaction since GOMS. Human-Computer Interaction, 5:221--265, 1990.

251
P. L. Olson and M. Sivak. Perception-response time to unexpected roadway hazard. Human Factors, 26:91--96, 1986.

252
P. O'Rorke and A. Ortony. Explaining Emotions. Cognitive Science, 18(2):283--323, 1994.

253
O. Ostberg, B. Lindstrom, and P. O. Renhall. Contribution to speech intelligibility by different sizes of videophone displays. In Proc. of the Workshop on Videophone Terminal Design, Torino, Italy, 1988. CSELT.

254
E. Owens and B. Blazek. Visemes observed by hearing-impaired and normal-hearing adult viewers. Journal of Speech and Hearing Research, 28:381--393, 1985.

255
A. Paouri, N. Magnenat-Thalmann, and D. Thalmann. Creating realistic three-dimensional human shape characters for computer-generated films. In N. Magnenat-Thalmann and D. Thalmann, editors, Computer Animation'91, pages 89--99. Springer-Verlag, 1991.

256
F. I. Parke. Computer-generated animation of faces. In Proceedings of ACM National Conference, volume 1, pages 451--457, 1972.

257
F. I. Parke. Parameterized models for facial animation. IEEE Computer Graphics and Applications, 2:61--68, 1981.

258
F. I. Parke. Facial animation by spatial mapping. PhD thesis, University of Utah, Department of Computer Sciences, 1991.

259
E. C. Patterson, P. Litwinowicz, and N. Greene. Facial animation by spatial mapping. In N. Magnenat-Thalmann and D. Thalmann, editors, Computer Animation'91, pages 31--44. Springer-Verlag, 1991.

260
S. J. Payne. Task action grammar. In B. Shackel, editor, Proc. Interact '84, pages 139--144. Amsterdam: North-Holland, 1984.

261
A. Pearce, B. Wyvill, G. Wyvill, and D. Hill. Speech and expression: A computer solution to face animation. In Graphics Interface '86, pages 136--140, 1986.

262
C. Pelachaud. Communication and coarticulation in facial animation. PhD thesis, University of Pennsylvania, USA, 1991.

263
C. Pelachaud, N. Badler, and M. Steedman. Linguistics issues in facial animation. In N. Magnenat-Thalmann and D. Thalmann, editors, Computer Animation'91, pages 15--30. Springer-Verlag, 1991.

264
A. Pentland and K. Mase. Lip reading: Automatic visual recognition of spoken words. Technical Report 117, MIT Media Lab Vision Science Technical Report, 1989.

265
J. S. Perkell. Phonetic features and the physiology of speech production. In Language Production, pages 337--372. Academic Press NY, 1980.

266
J. S. Perkell. On the use of feedback in speech production. In The Cognitive Representation of Speech, pages 45--52. North Holland Amsterdam, 1981.

267
E. Petajan. Automatic Lipreading to Enhance Speech Recognition. PhD thesis, University of Illinois at Urbana-Champaign, 1984.

268
J. Piaget. The Origins of Intelligence in Children. International University Press, New York, 1952.

269
K. Pimentel and K. Teixeira. Virtual Reality: through the new looking glass. Windcrest Books, 1993.

270
B. Pinkowski. LPC Spectral Moments for Clustering Acoustic Transients. IEEE Trans. on Speech and Audio Processing, 1(3):362--368, 1993.

271
R. Plamondon and F. J. Maarse. An evaluation of motor models of handwriting. IEEE Transactions on Systems, Man and Cybernetics, 19:1060--1072, 1989.

272
S. M. Platt. A structural model of the human face. PhD thesis, University of Pennsylvania, USA, 1985.

273
S. M. Platt and N. I. Badler. Animating facial expressions. Computer Graphics, 15(3):245--252, 1981.

274
I. Pollack. The Information of Elementary Auditory Displays. Journ. of the Acoustical Society of America, 25:765--769, 1953.

275
W. Pompetzki. Psychoakustische Verifikation von Computermodellen zur binauralen Raumsimulation (Psychoacoustical verification of computer-models for binaural room simulation). PhD thesis, Ruhr-Universität Bochum, 1993.

276
C. Pösselt. Einfluss von Knochenschall auf die Schalldämmung von Gehörschützern (Influence of bone conduction on the attenuation of personal hearing protectors). PhD thesis, Ruhr-Universität Bochum, 1986.

277
C. Pösselt et al. Generation of Binaural Signals for Research and Home Entertainment. In Proc. 12th Int. Congr. Acoust. Vol. I, B1-6, CND-Toronto, 1986.

278
W. K. Pratt. Digital Image Processing. Wiley, New York, 1991.

279
J. Psotka, S. A. Davison, and S. A. Lewis. Exploring immersion in virtual space. Virtual Reality Systems, 1(2):70--92, 1993.

280
M. Radeau. Cognitive Impenetrability in Audio-Visual Interaction. In J. Alegria et al., editors, Analytical Approaches to Human Cognition, pages 183--198. North-Holland, Amsterdam, 1992.

281
M. Radeau. Auditory-visual spatial interaction and modularity. Cahiers de Psychologie Cognitive, 13(1):3--51, 1994.

282
M. Radeau and P. Bertelson. Auditory-Visual Interaction and the Timing of Inputs. Psychological Research, 49:17--22, 1987.

283
J. Rasmussen. Information Processing and Human-Machine Interaction. An Approach to Cognitive Engineering. North-Holland, 1986.

284
C. M. Reed, W. M. Rabinowitz, N. I. Durlach, and L. D. Braida. Research on the Tadoma method of speech communication. Journal of the Acoustical Society of America, 77(1):247--257, 1985.

285
W. T. Reeves. Simple and complex facial animation: Case studies. In Course Notes on State of the Art in Facial Animation, volume 26. SigGraph '90, 1990.

286
D. Reisberg, J. McLean, and A. Goldfield. Easy to hear but hard to understand: A lip-reading advantage with intact auditory stimuli. In B. Dodd and R. Campbell, editors, Hearing by eye: The psychology of lip-reading, pages 97--114. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1987.

287
J. Rhyne. Dialogue Management for Gestural Interfaces. Computer Graphics, 21(2):137--142, 1987.

288
D. Riecken, editor. Special Issue on Intelligent Agents, volume 37 of Communications of the ACM, 1994.

289
B. Rimé and L. Schiaratura. Gesture and speech. In R. S. Feldman and B. Rimé, editors, Fundamentals of Nonverbal Behaviour, pages 239--281. New York: Press Syndicate of the University of Cambridge, 1991.

290
A. Risberg and J. L. Lubker. Prosody and speechreading. Quarterly Progress & Status Report 4, Speech Transmission Laboratory, KTH, Stockholm, Sweden, 1978.

291
J. Robert. Intégration audition-vision par réseaux de neurones : une étude comparative des modèles d'intégration appliqués à la perception des voyelles. Technical report, Rapport de DEA Signal-Image-Parole, ENSER, Grenoble, France, 1991.

292
J. Robert-Ribes, P. Escudier, and J. L. Schwartz. Modèles d'intégration audition-vision : une étude neuromimétique. Technical report, ICP, 1991. Rapport Interne.

293
G. G. Robertson, S. K. Card, and J. D. Mackinlay. The Cognitive Coprocessor Architecture for Interactive User Interfaces. ACM, pages 10--18, 1989.

294
D. Salber and J. Coutaz. Applying the Wizard of Oz Technique to the Study of Multimodal Systems, 1993.

295
V. J. Samar and D. C. Sims. Visual evoked responses components related to speechreading and spatial skills in hearing and hearing-impaired adults. Journal of Speech & Hearing Research, 27:162--172, 1984.

296
B. Scharf. Loudness, chapter 6, pages 187--242. Academic Press, New York, 1978.

297
T. Schiphorst et al. Tools for Interaction with the Creative Process of Composition. In Proc. of CHI '90, pages 167--174, 1990.

298
D. Schlichthärle. Modelle des Hörens - mit Anwendungen auf die Hörbarkeit von Laufzeitverzerrungen (Models of hearing - applied to the audibility of arrival-time distortions). PhD thesis, Ruhr-Universität Bochum, 1980.

299
E. M. Schmidt and J. S. McIntosh. Excitation and inhibition of forearm muscles explored with microstimulation of primate motor cortex during a trained task. In Abstracts of the 9th Annual Meeting of the Society for Neuroscience, volume 5, page 386, 1979.

300
L. Schomaker et al. MIAMI --- Multimodal Integration for Advanced Multimedia Interfaces. Annex I: Technical annex, Commission of the European Communities, December 1993.

301
L. R. B. Schomaker. Using Stroke- or Character-based Self-organizing Maps in the Recognition of On-line, Connected Cursive Script. Pattern Recognition, 26(3):443--450, 1993.

302
L. R. B. Schomaker and R. Plamondon. The Relation between Pen Force and Pen-Point Kinematics in Handwriting. Biological Cybernetics, 63:277--289, 1990.

303
L. R. B. Schomaker, A. J. W. M. Thomassen, and H.-L. Teulings. A computational model of cursive handwriting. In R. Plamondon, C. Y. Suen, and M. L. Simner, editors, Computer Recognition and Human Production of Handwriting, pages 153--177. Singapore: World Scientific, 1989.

304
J. Schröter. Messung der Schalldämmung von Gehörschützern mit einem physikalischen Verfahren- Kunstkopfmethode (Measurement of the attenuation of personal hearing protectors by means of a physical technique - dummy-head method). PhD thesis, Ruhr-Universität Bochum, 1983.

305
J. Schröter. The Use of Acoustical Test Fixtures for the Measurement of Hearing-Protector Attenuation, Part I: Review of Previous Work and the Design of an Improved Test Fixture. Journ. Acoust. Soc. Am., 79:1065--1081, 1986.

306
J. Schröter and C. Pösselt. The Use of Acoustical Test Fixtures for the Measurement of Hearing-Protector Attenuation, Part II: Modeling the External Ear, Simulating Bone Conduction, and Comparing Test Fixture and Real-Ear Data. Journ. Acoust. Soc. Am., 80:505--527, 1986.

307
J. A. Scott Kelso. The process approach to understanding human motor behaviour: an introduction. In J. A. Scott Kelso, editor, Human motor behaviour: an introduction, pages 3--19. Lawrence Erlbaum Ass. Pub., Hillsdale NJ, 1982.

308
G. M. Shepherd. Neurobiology. Oxford Univ. Press, 2nd edition, 1988.

309
K. B. Shimoga. A Survey of Perceptual Feedback Issues in Dexterous Telemanipulation: Part II. Finger Touch Feedback. In Proc. of the IEEE Virtual Reality Annual International Symposium. Piscataway, NJ : IEEE Service Center, 1993.

310
B. Shneiderman. Designing the User Interface: Strategies for Effective Human-Computer Interaction. New York: Addison-Wesley, 1992.

311
D. Silbernagel. Taschenatlas der Physiologie. Thieme, 1979.

312
R. Simon. Pen Computing Futures: A Crystal Ball Gazing Exercise. In Proc. of the IEE Colloquium on Handwriting and Pen-based Input, Digest Number 1994/065, page 4. London: The Institution of Electrical Engineers, March 1994.

313
A. D. Simons and S. J. Cox. Generation of mouthshapes for a synthetic talking head. In Proceedings of the Institute of Acoustics, volume 12, pages 475--482, Great Britain, 1990.

314
H. Slatky. Algorithmen zur richtungsselektiven Verarbeitung von Schallsignalen eines binauralen Cocktail-Party-Prozessors (Algorithms for direction-selective processing of sound signals by means of a binaural cocktail-party processor). PhD thesis, Ruhr-Universität Bochum, 1993.

315
P. M. T. Smeele and A. C. Sittig. The contribution of vision to speech perception. In Proceedings of 13th International Symposium on Human Factors in Telecommunications, page 525, Torino, 1990.

316
S. Smith. Computer lip reading to augment automatic speech recognition. Speech Tech, pages 175--181, 1989.

317
P. Smolensky. On the proper treatment of connectionism. Behavioral and Brain Sciences, 11:1--74, 1988.

318
H. E. Staal and D. C. Donderi. The Effect of Sound on Visual Apparent Movement. American Journal of Psychology, 96:95--105, 1983.

319
L. Steels. Emergent frame recognition and its use in artificial creatures. In Proc. of the Intl. Joint Conf. on Artificial Intelligence IJCAI-91, pages 1219--1224, 1991.

320
L. Steels. The artificial life roots of artificial intelligence. Artificial Life, 1(1-2):75--110, 1994.

321
B. E. Stein and M. A. Meredith. Merging of the Senses. MIT Press, Cambridge, London, 1993.

322
R. Steinmetz. Multimedia-Technologie. Springer-Verlag, 1993.

323
K. Stevens. The quantal nature of speech: Evidence from articulatory-acoustic data. In E. E. David, Jr. and P. B. Denes, editors, Human Communication: A Unified View, pages 51--66. McGraw-Hill, New York, 1972.

324
K. N. Stevens and J. S. Perkell. Speech physiology and phonetic features. In Dynamic aspects of speech production, pages 323--341. University of Tokyo Press, Tokyo, 1977.

325
S. S. Stevens. On the Psychophysical Law. Psychological Review, 64:153--181, 1957.

326
R. J. Stone. Virtual Reality & Telepresence -- A UK Initiative. In Virtual Reality 91 -- Impacts and Applications. Proc. of the 1st Annual Conf. on Virtual Reality, pages 40--45, London, 1991. Meckler Ltd.

327
D. Stork, G. Wolff, and E. Levine. Neural network lipreading system for improved speech recognition. In International Joint Conference on Neural Networks, Baltimore, 1992.

328
N. Suga. Auditory neuroethology and speech processing: complex-sound processing by combination-sensitive neurons. In G. M. Edelmann, W. Gall, and W. Cowan, editors, Auditory Function: Neurobiological Bases of Hearing. John Wiley and Sons, New York, 1988.

329
J. W. Sullivan and S. W. Tyler, editors. Intelligent User Interfaces. ACM Press, Addison-Wesley, 1991.

330
W. H. Sumby and I. Pollack. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26:212--215, 1954.

331
A. Q. Summerfield. Use of visual information for phonetic perception. Phonetica, 36:314--331, 1979.

332
Q. Summerfield. Comprehensive account of audio-visual speech perception. In B. Dodd and R. Campbell, editors, Hearing by eye: The psychology of lip-reading. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1987.

333
Q. Summerfield. Visual perception of phonetic gestures. In G. Mattingly and M. Studdert-Kennedy, editors, Modularity and the Motor Theory of Speech Perception. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1991.

334
I. E. Sutherland. The ultimate display. In Information Processing 1965, Proc. IFIP Congress, pages 506--508, 1965.

335
C. C. Tappert, C. Y. Suen, and T. Wakahara. The State of the Art in On-line Handwriting Recognition. IEEE Trans. on Pattern Analysis & Machine Intelligence, 12:787--808, 1990.

336
L. Tarabella. Special issue on Man-Machine Interaction in Live Performance. In Interface, volume 22. Swets & Zeitlinger, Lisse, The Netherlands, 1993.

337
D. Terzopoulos and K. Waters. Techniques for realistic facial modeling and animation. In N. Magnenat-Thalmann and D. Thalmann, editors, Computer Animation'91, pages 59--74. Springer-Verlag, 1991.

338
H. L. Teulings and F. J. Maarse. Digital Recording and Processing of Handwriting Movements. Human Movement Science, 3:193--217, 1984.

339
M. T. Turvey. Preliminaries to a theory of action with reference to vision. In R. Shaw and J. Bransford, editors, Perceiving, Acting and Knowing: Toward an ecological psychology, pages 211--265. Hillsdale, NJ: Erlbaum, 1977.

340
T. Ungvary, S. Waters, and P. Rajka. NUNTIUS: A computer system for the interactive composition and analysis of music and dance. Leonardo, 25(1):55--68, 1992.

341
K. Väänänen and K. Böhm. Gesture Driven Interaction as a Human Factor in Virtual Environments -- An Approach with Neural Networks, chapter 7, pages 93--106. Academic Press Ltd., 1993.

342
T. van Gelderen, A. Jameson, and A. L. Duwaer. Text recognition in pen-based computers: An empirical comparison of methods. In InterCHI '93 Conference Proceedings, pages 87--88, Amsterdam, 1993.

343
D. Varner. Olfaction and VR. In Proceedings of the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology, Houston, TX, 1993.

344
B. Verplank. Tutorial notes. In Human Factors in Computing Systems, CHI'89. New York: ACM Press, 1989.

345
M.-L. Viaud. Animation faciale avec rides d'expression, vieillissement et parole. PhD thesis, Paris XI-Orsay University, 1993.

346
K. J. Vicente and J. Rasmussen. Ecological Interface Design: Theoretical Foundations. IEEE Transactions on Systems, Man, and Cybernetics, 22(4):589--606, Aug. 1992.

347
P. Viviani and N. Stucchi. Motor-perceptual interactions. In J. Requin and G. Stelmach, editors, Tutorials in Motor Behavior, volume 2. Elsevier Science Publishers B. V., North-Holland, Amsterdam, 1991.

348
J. H. M. Vroomen. Hearing voices and seeing lips: Investigations in the psychology of lipreading. PhD thesis, Katholieke Univ. Brabant, Sep. 1992.

349
W. J. Wadman. Control mechanisms of fast goal-directed arm movements. PhD thesis, Utrecht University, The Netherlands, 1979.

350
W. J. Wadman, W. Boerhout, and J. J. Denier van der Gon. Responses of the arm movement control system to force impulses. Journal of Human Movement Studies, 6:280--302, 1980.

351
E. A. Wan. Temporal Back-propagation for FIR Neural Networks. In Proc. Int. Joint Conf. on Neural Networks, volume 1, pages 575--580, San Diego CA, 1990.

352
J. R. Ward and M. J. Phillips. Digitizer Technology: Performance and the Effects on the User Interface. IEEE Computer Graphics and Applications, pages 31--44, April 1987.

353
D. H. Warren. Spatial Localization Under Conflicting Conditions: Is There a Single Explanation? Perception and Psychophysics, (8):323--337, 1979.

354
D. H. Warren, R. B. Welch, and T. J. McCarthy. The role of visual-auditory compellingness in the ventriloquism effect: implications for transitivity among the spatial senses. Perception and Psychophysics, 30:557--564, 1981.

355
K. Waters. A muscle model for animating three-dimensional facial expression. In Proceedings of Computer Graphics, volume 21, pages 17--24, 1987.

356
K. Waters. Bureaucrat. Computer-generated movie, 1990.

357
P. Weckesser and F. Wallner. Calibrating the Active Vision System KASTOR for Real-Time Robot Navigation. In J. F. Fryer, editor, Close Range Techniques and Machine Vision, pages 430--436. ISPRS Commission V, 1994.

358
R. B. Welch and D. H. Warren. Handbook of Perception and Human Performance, chapter 25: Intersensory interactions. 1986.

359
E. M. Wenzel. Spatial sound and sonification. In G. Kramer, editor, Auditory Display, pages 127--150, Reading, Massachusetts, 1994. Santa Fe Institute, Addison Wesley.

360
D. Wessel. Improvisation with highly interactive real-time performance systems. In Proc. Intl. Computer Music Conference, Montreal, Canada, 1991.

361
W. A. Wickelgren. Context-sensitive coding, associative memory and serial order in speech behaviour. Psycho. Rev., 76:1--15, 1969.

362
N. Wiener. Cybernetics: or control and communication in the animal and the machine. New York: Wiley, 1948.

363
L. Williams. Performance driven facial animation. Computer Graphics, 24(3):235--242, 1990.

364
C. G. Wolf and P. Morrel-Samuels. The use of hand-drawn gestures for text editing. Intl. Journ. on Man-Machine Studies, 27:91--102, 1987.

365
S. Wolf. Lokalisation von Schallquellen in geschlossenen Räumen (Localisation of sound sources in enclosed spaces). PhD thesis, Ruhr-Universität Bochum, 1991.

366
P. Woodward. Le speaker de synthèse. PhD thesis, ENSERG, Institut National Polytechnique de Grenoble, France, 1991.

367
P. Woodward, T. Mohamadi, C. Benoît, and G. Bailly. Synthèse à partir du texte d'un visage parlant français. In Actes des 19èmes Journées d'Étude sur la Parole, Bruxelles, 1992. Groupe Communication Parlée de la SFA.

368
M. Wooldridge and N. R. Jennings. Intelligent Agents: Theory and Practice. (submitted to:) Knowledge Engineering Review, 1995.

369
R. H. Wurtz and C. W. Mohler. Organization of monkey superior colliculus: enhanced visual response of superficial layer cells. Journal of Neurophysiology, 39:745--765, 1976.

370
B. L. M. Wyvill and D. R. Hill. Expression control using synthetic speech. In SigGraph '90 Tutorial Notes, volume 26, pages 186--212, 1990.

371
N. Xiang and J. Blauert. A Miniature Dummy Head for Binaural Evaluation of Tenth-Scale Acoustic Models. Journ. Appl. Acoust., 33:123--140, 1991.

372
N. Xiang. Mobile Universal Measuring System for the Binaural Room-Acoustic-Model Technique. PhD thesis, Ruhr-Universität Bochum, 1991.

373
N. Xiang and J. Blauert. Binaural Scale Modelling for Auralization and Prediction of Acoustics in Auditoria. Journ. Appl. Acoust., 38:267--290, 1993.

374
B. P. Yuhas, M. H. Goldstein Jr., and T. J. Sejnowski. Integration of acoustic and visual speech signals using neural networks. IEEE Communications Magazine, pages 65--71, 1989.


