
Binaural simulation and displays

There are many current applications of binaural simulation and displays, with the potential for an ever-increasing number. The following list provides examples: binaural mixing [277], binaural room simulation [180,181], advanced sound effects (for example, for computer games), provision of auditory spatial-orientation cues (e.g., in the cockpit or for the blind), auditory display of complex data, and auditory representation in teleconference, telepresence, and teleoperator systems.

  
Figure A.1: Binaural-Technology equipment of different complexity: (a) probe-microphone system on a real head, (b) artificial-head system, (c) artificial-head system with signal-processing and signal-analysis capabilities, (d) binaural room-simulation system with head-position tracker for virtual-reality applications.

Figure A.1, by showing Binaural-Technology equipment in order of increasing complexity, is meant to illustrate some of the ideas discussed above. The most basic equipment is obviously the one shown in panel (a). The signals at the two ears of a subject are picked up by (probe) microphones in the subject's ear canals, then recorded, and later played back to the same subject after appropriate equalization. Equalization is necessary to correct the linear distortions introduced by the microphones, the recorder, and the headphones, so that the signals in the subject's ear canals during playback correspond exactly to those in the pick-up situation. Equipment of this kind is adequate for personalized binaural recordings. Since the subject's own ears are used for the recording, maximum authenticity can be achieved.
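The equalization step amounts to inverse filtering of the recording chain. The following sketch (in Python; the function names, FFT size, and regularization constant are illustrative assumptions, not taken from the text) derives a correction filter from a measured impulse response of the combined microphone/recorder/headphone chain and applies it to a recorded ear-canal signal:

    import numpy as np

    def design_equalizer(chain_ir, n_fft=4096, eps=1e-3):
        # Regularized frequency-domain inversion of the measured chain
        # response; eps limits the gain where the response is nearly zero.
        H = np.fft.rfft(chain_ir, n_fft)
        H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
        return np.fft.irfft(H_inv, n_fft)

    def equalize(ear_signal, eq_filter):
        # Convolve a recorded ear-canal signal with the correction filter.
        return np.convolve(ear_signal, eq_filter)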

Artificial heads (panel (b)) have practical advantages over real heads for most applications; for one thing, they allow for auditory real-time monitoring of a different location. One has to realize, however, that artificial heads are usually cast or designed from a typical or representative subject. Their directional characteristics will thus, in general, deviate from those of an individual listener. This fact can lead to a significant decrease in perceptual authenticity. For example, errors such as sound coloration or front-back confusion may appear. Individual adjustment is only partly possible, namely, by equalizing the headphones specifically for each subject. To this end, the equalizer may be split into two components, a head equalizer (1) and a headphone equalizer (2). The interface between the two allows some freedom of choice. Typically, it is defined in such a way that the artificial head features a flat frequency response either for frontal sound incidence (free-field correction) or in a diffuse sound field (diffuse-field correction). The headphones must be equalized accordingly. It is clear that individual adjustment of the complete system, beyond a specific direction of sound incidence, is impossible in principle, unless the directional characteristics of the artificial head and the listener's head happen to be identical.
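The split into a head equalizer and a headphone equalizer can be sketched as a cascade of two correction filters, reusing the design_equalizer helper from the sketch above (again, the names and the measured responses are illustrative assumptions):

    import numpy as np

    def split_equalizer(head_frontal_ir, headphone_ir):
        # (1) Head equalizer: flattens the artificial head's response for
        #     frontal sound incidence (free-field correction); fixed per head.
        head_eq = design_equalizer(head_frontal_ir)
        # (2) Headphone equalizer: measured for each individual listener.
        headphone_eq = design_equalizer(headphone_ir)
        # The complete correction is the cascade (convolution) of both stages.
        return np.convolve(head_eq, headphone_eq)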

Panel (c) depicts the set-up for applications where the signals at the two ears of the listener are to be measured, evaluated, and/or manipulated. Signal-processing devices are provided to operate on the recorded signals. Although real-time processing is not necessary for many applications, real-time playback is mandatory. The modified and/or unmodified signals can be monitored either with a signal analyzer or by binaural listening.
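A minimal sketch of such a processing-and-monitoring stage follows; the per-ear gain stands in for an arbitrary manipulation, the RMS read-out stands in for the signal analyzer, and the sounddevice package is merely one possible playback layer (none of these choices are prescribed by the text):

    import numpy as np
    import sounddevice as sd   # one possible real-time playback layer

    def process_and_monitor(binaural, fs, left_gain_db=0.0, right_gain_db=0.0):
        # 'binaural' is an (n_samples, 2) array of left/right ear signals.
        gains = 10.0 ** (np.array([left_gain_db, right_gain_db]) / 20.0)
        processed = binaural * gains          # a minimal manipulation
        print("per-ear RMS:", np.sqrt(np.mean(processed ** 2, axis=0)))
        sd.play(processed, fs)                # monitoring by binaural listening
        sd.wait()
        return processed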

The most complex equipment in this context is represented by panel (d). Here the input signals no longer stem from a listener's ears or from an artificial head, but have been recorded or even generated without the participation of ears or ear replicas. For instance, anechoic recordings made with conventional studio microphones may be used. The linear distortions which human ears superimpose on the impinging sound waves, depending on their direction of incidence and wave-front curvature, are generated electronically by a so-called ear-filter bank (electronic head). To be able to assign the adequate head-transfer function to each incoming signal component, the system needs data on the geometry of the sound field. In a typical application, e.g., architectural-acoustics planning, the system contains a sound-field simulation based on the room geometry, the absorption characteristics of the materials involved, and the positions and directional characteristics of the sound sources. The output of the sound-field modeling is fed into the electronic head, thus producing so-called binaural impulse responses. Subsequent convolution of these impulse responses with anechoic signals generates binaural signals as a subject would observe in a corresponding real room. The complete method is often referred to as binaural room simulation.
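A minimal sketch of the last two steps (building binaural impulse responses from the output of the sound-field model via the ear-filter bank, and convolving them with anechoic material) is given below; the reflection list and the layout of the HRTF bank are simplifying assumptions for illustration only:

    import numpy as np
    from scipy.signal import fftconvolve

    def binaural_impulse_response(reflections, hrtf_bank, fs, length):
        # Each reflection from the sound-field model is a (delay_s, gain,
        # direction) triple; hrtf_bank maps a direction to a pair of
        # left/right head-transfer-function FIRs (the "electronic head").
        brir = np.zeros((length, 2))
        for delay_s, gain, direction in reflections:
            start = int(round(delay_s * fs))
            for ch, h in enumerate(hrtf_bank[direction]):
                stop = min(start + len(h), length)
                brir[start:stop, ch] += gain * h[: stop - start]
        return brir

    def binaural_room_simulation(anechoic, brir):
        # Convolve the dry (anechoic) signal with both ear impulse responses.
        return np.stack([fftconvolve(anechoic, brir[:, ch]) for ch in (0, 1)],
                        axis=-1)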

To give subjects the impression of being immersed in a sound field, it is important that perceptual room constancy be provided. In other words, when the subjects move their heads, the perceived auditory world should nevertheless maintain its spatial position. To this end, the simulation system needs to know the head position in order to control the binaural impulse responses adequately. Head-position sensors therefore have to be provided. The impression of being immersed is of particular relevance for applications in the context of virtual reality.
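In the simplest (yaw-only) case, room constancy means that the direction used to select the binaural impulse response is the source direction relative to the current head orientation. The update loop below is a hypothetical illustration of how a tracker reading could drive that selection; the tracker interface and the nearest_direction helper are assumptions, not part of the text:

    def world_to_head_azimuth(source_azimuth_deg, head_yaw_deg):
        # The source keeps its position in the room, so its azimuth relative
        # to the turned head is what the ear-filter bank must reproduce.
        return (source_azimuth_deg - head_yaw_deg) % 360.0

    # Hypothetical update loop driven by the head-position tracker:
    # while rendering:
    #     relative_az = world_to_head_azimuth(source_az, tracker.read_yaw())
    #     brir = hrtf_bank[nearest_direction(relative_az)]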

All of the applications discussed in this section are based on the provision of two sound-pressure signals to the eardrums of human beings, or on the use of such signals for measurement and application. They are built on our knowledge of what the ears-and-head array does, i.e., on our understanding of the physics of the binaural transmission chain in front of the eardrum. We shall now proceed to the next section, which deals with the signal processing behind the eardrum and its possible technical applications.





