To sum up these various
psycholinguistic findings: for speaker localization, vision dominates
audition; for speech comprehension, vision greatly improves
intelligibility, especially when the acoustics are degraded and/or the message
is complex; this speech-reading benefit generally holds even when the two
channels are slightly desynchronized; owing to articulatory anticipation, the
eye often receives information before the ear, and seems to take advantage
of it; and finally, as with localization, vision can bias auditory
comprehension, as in the McGurk effect. The Motor Theory of speech
perception posits that we have an innate knowledge of how to produce
speech [187]. Recently, in a chapter of a book devoted to a
reexamination of this theory, Summerfield suggested that the human ability
to lipread could also be innate [333]. This assumption partially
explains the large variability observed in human speech-reading
performance, since this ability seems to be related to a subject's visual
performance capacities [76,295].
Summerfield also hypothesized that evolutionary pressure could have refined
our auditory abilities for biologically significant sounds, but not our
lipreading abilities. Therefore, whatever the innate encoding of speech,
whether auditory or visual in form, an intermediate stage of motor-command
coding underlying speech perception would provide us not only with a common
metric in which the acoustic and visual signals cohere, but also with an
improvement in the processing of the speech percept (whose final storage
pattern is still an open question). This is my
interpretation of the famous formula ``Perceiving is acting'', recently
revised by Viviani and Stucchi [347] into ``Perceiving is knowing
how to act''.
Esprit Project 8579/MIAMI (Schomaker et al., '95)