
Computer Input Modalities

The human output channels are capable of producing physical energy patterns varying over time: potential, kinetic and electrophysiological energy. Of these, the resulting force, movement and air-pressure signals are the most suitable candidates for the development of transducer devices, as has already been done in practice: keys, switches, the mouse, the XY-digitizer, the microphone, or the teleoperation manipulandum.

As regards the electrophysiological signals, the following can be said. From the development of prosthetic devices for disabled persons, it is known that peripheral nervous signals are very difficult to analyse and to interpret in a meaningful way. This is because the basic biomechanical transfer-function characteristic has not yet been applied to the measured neuro-electric (ENG) or myoelectric (EMG) signals. Some recent advances have been made, incorporating artificial neural network models to transform the electrophysiological signals, but a number of electrodes must be placed on the skin, and often needle or axial electrodes are needed to tap the proprioceptive activity that is usually required to solve the transfer-function equation. As regards electrophysiological transducers measuring central nervous system signals, it can safely be assumed that the actual use of these signals in CIM is still fiction (as is functional neuromuscular stimulation in the case of COM, section 2.2.4). Surface EEG electrodes only measure the lumped activity of large brain areas, and the measured patterns appear to be unrelated to the implementation details of willful and conscious activity. As an example, the Bereitschaftspotential is related to willful activity, in the sense that ``something'' is going to happen, but it cannot be inferred from the signal what that motor action will be.
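To illustrate the kind of transformation such artificial neural network models perform, the following Python sketch maps one window of multi-channel EMG activity to a coarse movement class. It is a minimal illustration only: the channel count, window length, feature choice and (untrained, random) network weights are all hypothetical, and a real system would train the weights on labelled recordings.

    import numpy as np

    # Hypothetical set-up: 4 surface-EMG channels, 200 samples per analysis window.
    rng = np.random.default_rng(0)
    emg_window = rng.normal(size=(4, 200))        # stands in for measured EMG
    features = np.abs(emg_window).mean(axis=1)    # mean rectified amplitude per channel

    # A tiny feedforward network; in practice the weights would be trained on
    # labelled recordings, here they are random placeholders.
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

    hidden = np.tanh(W1 @ features + b1)
    scores = W2 @ hidden + b2
    movement_class = int(np.argmax(scores))       # e.g. 0 = flex, 1 = extend, 2 = rest
    print("predicted movement class:", movement_class)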

Disregarding their impracticality for the sake of argument, electrode arrays implanted into the central nervous system would allow for a more subtle measurement of neural activity, but here the decoding problem is even more difficult than for the electrophysiological signals derived from the peripheral nervous system. Therefore, we will concentrate on those types of computer input modalities in which input devices measure the force, movement, and sound patterns that are willfully generated by the human user, under conscious control, by means of the skeleto-muscular system.

However, before looking in detail at all kinds of input and output interfaces that are possible on a computer, it makes sense to first consider what has already been achieved in modern user interfaces. The first interactive computers only had a keyboard and a typewriter in the form of a Teletype (Telex) device. Later, a CRT screen became the standard output device. However, the keyboard is not the most reasonable input method for all applications. Many systems nowadays have a 'mouse', more or less as a standard feature. In the near future, more input and output devices will be connected to computers. Already, the microphone for audio input is emerging as a standard facility on workstations and on the more expensive personal computers. Video cameras will probably also become a standard feature in the near future. It is therefore useful to consider what kinds of operations are needed across many different applications, and what devices can handle these operations effectively.

Input devices can be divided into a number of groups according to their functionality:

In fact, devices other than pointing devices can also be used to input coordinates. For instance, many systems allow the keyboard TAB key to select input fields in a form, which has the same effect as pointing to the appropriate field.
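As a minimal sketch of this equivalence (the form layout, field names and event handlers below are purely illustrative), both a TAB key press and a pointer click can end up setting the same "focused field" state:

    # Hypothetical form with three input fields, each occupying a screen
    # rectangle (x, y, width, height); both TAB and a pointer click set the
    # same "focused field" index.
    fields = [("name",    (10, 10, 200, 20)),
              ("address", (10, 40, 200, 20)),
              ("phone",   (10, 70, 200, 20))]
    focus = 0

    def on_tab():
        """Keyboard: advance focus to the next field, wrapping around."""
        global focus
        focus = (focus + 1) % len(fields)

    def on_click(px, py):
        """Pointing device: focus the field whose rectangle contains the pointer."""
        global focus
        for i, (_, (x, y, w, h)) in enumerate(fields):
            if x <= px <= x + w and y <= py <= y + h:
                focus = i
                break

    on_tab()                  # focus moves from "name" to "address"
    on_click(15, 75)          # focus jumps directly to "phone"
    print(fields[focus][0])   # -> phone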

Image input devices are currently not part of the user interface; they are used to input images for further processing. Video cameras, however, are gradually starting to be used interactively, in videoconferencing applications.

Some input devices supply coordinates indirectly: the position of a joystick indicates movement in some direction, which can be translated into coordinate changes. Quick jumps from one place to another (as are possible with a pen) are difficult with a joystick. The arrow or cursor keys can also be used to change the position of the cursor (in the GEM window manager, the ALT key in combination with the cursor keys is used as an alternative to moving the mouse).
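The difference between such relative, rate-controlled devices and an absolute device like a pen can be sketched as follows; the gain and update interval are assumed values, not taken from any particular system:

    # Relative (rate-control) mapping for a joystick versus the absolute mapping
    # of a pen on a tablet; gain and update interval are assumed values.
    cursor_x, cursor_y = 400.0, 300.0   # current cursor position (pixels)
    GAIN = 250.0                        # pixels per second at full deflection
    DT = 0.02                           # update interval: 50 Hz (assumed)

    def joystick_update(deflection_x, deflection_y):
        """Relative device: deflection in [-1, 1] is integrated into small steps."""
        global cursor_x, cursor_y
        cursor_x += GAIN * deflection_x * DT
        cursor_y += GAIN * deflection_y * DT

    def pen_update(tablet_x, tablet_y):
        """Absolute device: the pen position maps directly onto the screen."""
        global cursor_x, cursor_y
        cursor_x, cursor_y = tablet_x, tablet_y

    joystick_update(1.0, 0.0)   # full right deflection for one tick: a 5-pixel step
    pen_update(50.0, 60.0)      # putting the pen down elsewhere: an instant jump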

In principle, it is also possible to translate two speech parameters (e.g. volume and pitch) into screen coordinates. This might be useful for some applications for disabled people, allowing them to control the position of a screen object by voice alone.
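A minimal sketch of such a mapping is given below; it assumes that volume and pitch estimates are already available from the audio front end, and the parameter ranges and the volume-to-x / pitch-to-y assignment are arbitrary choices for illustration:

    # Hypothetical mapping of two speech parameters onto screen coordinates:
    # volume controls the horizontal position, pitch the vertical position.
    SCREEN_W, SCREEN_H = 1024, 768

    def speech_to_screen(volume_db, pitch_hz,
                         vol_range=(40.0, 80.0),       # assumed usable volume range (dB)
                         pitch_range=(80.0, 400.0)):   # assumed usable pitch range (Hz)
        """Map volume -> x and pitch -> y, clamped to the screen edges."""
        def normalise(value, lo, hi):
            return min(max((value - lo) / (hi - lo), 0.0), 1.0)
        x = normalise(volume_db, *vol_range) * (SCREEN_W - 1)
        y = normalise(pitch_hz, *pitch_range) * (SCREEN_H - 1)
        return int(x), int(y)

    print(speech_to_screen(60.0, 240.0))   # mid-range voice -> near the screen centre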

The simplest pointing device supplies two coordinates (x and y) and a button status. Mice may have more buttons, and a pen may have a continuous pressure/height parameter. Pens may also report the angles they make with the tablet (in the x and y directions) and the rotation of the pen around its axis, but these parameters are only likely to be used in special applications.
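One way such device reports could be represented is sketched below; the record layout and field names are illustrative only and do not correspond to any particular driver or windowing system:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PointerEvent:
        """One report from a pointing device; the optional fields are only
        filled in by devices that can actually measure them."""
        x: float                           # horizontal coordinate
        y: float                           # vertical coordinate
        buttons: int                       # bit mask of pressed buttons
        pressure: Optional[float] = None   # pen: 0.0 (hovering) .. 1.0 (full press)
        tilt_x: Optional[float] = None     # pen angle with the tablet, x direction (deg)
        tilt_y: Optional[float] = None     # pen angle with the tablet, y direction (deg)
        rotation: Optional[float] = None   # rotation of the pen around its axis (deg)

    mouse_report = PointerEvent(x=120.0, y=85.0, buttons=0b001)
    pen_report = PointerEvent(x=120.0, y=85.0, buttons=0, pressure=0.6,
                              tilt_x=12.0, tilt_y=-3.0, rotation=45.0)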







