For humans, the most important sensory system is vision. Nearly all actions
performed are fed back and supervised by observation. Therefore, the
combination of the two modalities, vision and motoric control, is a very
natural and intuitive one, leading to bimodal visual-motoric control. In
this section, we do not deal with low-level visual-motoric coupling, such
as the muscular control of the eye which is necessary to fixate an object
over time (see section 1.2.1), but with the interaction of visual and
tactile feedback in motoric control tasks.
With today's standard computer equipment, every human-computer interaction
includes some kind of visual-motoric coupling, no matter whether the user
types in some text with a keyboard, performs click or drag-and-drop actions
with a mouse or trackball, draws a model in a CAD system with a graphic
tablet or a 6D input device, or controls a manipulator or mobile robot with
a master-slave manipulator or a 6D input device.
In any case, the effects of the actions are --- at least --- observed on a
monitor. However, as far as we know, the influence of visual and tactile
feedback on these standard control tasks has not been sufficiently
investigated yet. Although several researchers have performed experiments,
usually only small numbers of subjects have been used, and only a few
aspects of device/task combinations have been analyzed and evaluated. Even
worse, most researchers did not take any tactile feedback into account,
e.g. [102,58,12,13,68,235,198,197,100,152],
with the exception of, e.g., [48,111,6].
Therefore, we designed several experiments which will be directed towards
the
- Analysis and evaluation of the effect of different input devices
for several interaction/manipulation tasks and the
- Analysis and evaluation of input devices with tactile feedback.
In order to get sufficient sample data, comprehensive tests with a large
number of subjects have to be carried out. Otherwise, statistical errors
will be introduced and the results obtained might not be
transferable. Unfortunately, the number of experiments grows with each
additional free variable because of combinatorial explosion. A simple
example might illustrate the situation:
If we limit one of our experiments (a positioning task, see below) to 2D
space (#Dimensions Dim = 1), take three different angles (#Angles = 3),
three distances (#Distances D = 3), and use objects with five different
sizes (#Sizes S = 5), we will need

   #Dimensions × #Angles × #Distances × #Sizes = 1 × 3 × 3 × 5 = 45,

i.e. 45 different scenes (graphical setups). When each of our five devices
(see below) is tested with all combinations of feedback modes, we obtain
18 different device/feedback mode combinations (#Modes = 18). Because every
test has to be carried out by at least 15 subjects (#Subjects = 15, see
above), the total number of tests will be

   #Scenes × #Modes × #Subjects = 45 × 18 × 15 = 12150,

which is a rather large number, even under the limitations given
above. And it is the number of tests for only one experiment!
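To make this growth explicit, the following small C++ sketch simply
enumerates all combinations of the free variables given above and counts
the resulting trials; the program and its variable names are only an
illustration of the arithmetic, not part of the actual test software.

  #include <cstdio>

  // Illustration only: enumerate all trial combinations for the 2D
  // positioning experiment described above and count them.
  int main() {
      const int nAngles    = 3;   // #Angles
      const int nDistances = 3;   // #Distances D
      const int nSizes     = 5;   // #Sizes S
      const int nModes     = 18;  // device/feedback mode combinations
      const int nSubjects  = 15;  // subjects per combination

      long trials = 0;
      for (int a = 0; a < nAngles; ++a)
          for (int d = 0; d < nDistances; ++d)
              for (int s = 0; s < nSizes; ++s)          // 45 scenes
                  for (int m = 0; m < nModes; ++m)
                      for (int subj = 0; subj < nSubjects; ++subj)
                          ++trials;                     // one trial to schedule

      std::printf("total number of trials: %ld\n", trials);  // prints 12150
      return 0;
  }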
The hypotheses which we want to test are, among others, that
- tactile feedback will reduce the execution time and the accuracy in
simple 2D positioning tasks, if the same device is used with and without
tactile feedback;
- tactile feedback will significantly reduce the execution time and
increase the accuracy if the target region is very small;
- the changes in execution time and accuracy will be independent of the
angle and distance to the target region;
- the changes described above are more significant if the objects are
not highlighted when they are reached by the cursor, i.e. without any
visual feedback;
- Fitts' law (see [102]; its standard formulation is recalled below) will
hold for input devices with tactile feedback as well.
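For reference, Fitts' law predicts the movement time MT needed to reach a
target of width W over a distance D as

   MT = a + b · log2(2D / W),

where a and b are constants which are determined empirically for a given
device and task (a frequently used variant, the Shannon formulation, uses
log2(D/W + 1) instead). Testing whether the law ``holds'' for devices with
tactile feedback thus means checking whether this linear relation between
movement time and index of difficulty is preserved, possibly with different
constants a and b.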
Various experiments have been designed and implemented to cover basic
operations --- in which the variable test parameters can be measured
exactly --- as well as interaction and manipulation tasks which are more
oriented towards common applications, like selection from a menu or
dragging an icon. They will be carried out in both 2D and 3D space. Some
experiments are described below:
- Positioning:
- A pointer shall be placed as fast and as accurately as
possible on a rectangular target region (with width W and height H) located
at a given angle and distance D. This will be investigated in 2D
as well as in 3D space. The influence of visual and tactile feedback will
be determined. The applicability of the well-known Fitts'
law [102] will be analyzed under the conditions described above.
The results will be relevant for all kinds of graphical interaction
tasks.
- Positioning and selection:
- Positioning of an object at a specified
target region in a fixed plane which is presented in 2D or 3D space. This
is a basic experiment for any drag-and-drop operation.
- Selection, positioning, grasping, and displacement:
- One of several
objects shall be grasped, retracted, and put at another position. This
includes the features of the experiments described above and extends them
with respect to robotic applications like assembly and disassembly.
- Positioning and rotation with two-handed input:
- The first
(dominant) hand controls the movement of a mobile robot, whereas the
second hand controls the direction of view of a stereo camera system
which is mounted on the mobile robot. The task is to find specific
objects in the robot's environment. This is a very specialized experiment
in which repelling forces of obstacles can be used and in which the
possibility to move the camera might be directly related to the velocity
of the robot and the potential danger of the situation.
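To give an impression of the data which have to be recorded in these
experiments, the following C++ sketch shows one possible record for a
single positioning trial; the structure and all field names are our own
illustration, not the actual data format of the test software.

  #include <string>

  // Hypothetical record of a single positioning trial (illustration only).
  struct Trial {
      // independent variables, fixed by the experimental design
      std::string device;        // e.g. "tactile mouse", "force joystick"
      std::string feedbackMode;  // e.g. "none", "visual", "tactile", "both"
      double targetWidth;        // W
      double targetHeight;       // H
      double targetAngle;        // angle to the target
      double targetDistance;     // D
      // dependent variables, measured during the trial
      double executionTime;      // time from start signal to selection
      double finalError;         // distance of the final cursor position
                                 // to the target centre
      bool   targetHit;          // whether the cursor ended inside the target
      int    subjectId;          // anonymized subject number
  };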
Because input devices with tactile feedback are either very simple, really
expensive, or not available on the market at all
(see section 2.2.4), two input devices with tactile and
force feedback, respectively, have been designed and built:
- Mouse with tactile feedback:
- Following the idea of Akamatsu and
Sato [6], a standard 2-button mouse for an IBM PS/2 personal
computer has been equipped with two electromagnets in its base and a pin
in the left button. For input, the standard mouse driver is used; for
output, the magnets and the pin are controlled by a bit combination written
to the parallel printer port by our own software (a sketch is given after
this list), so that the magnets attract the iron mouse pad and the pin
moves up and down. Both the magnets and the pin can be controlled
independently. In order to make the mouse usable with our SGI workstation,
communication between the PC and the workstation is established over a
serial line. In principle, any standard mouse can easily be equipped with
this kind of tactile feedback.
- Joystick with force feedback:
- A standard analog joystick has been
equipped with two servo motors and a microcontroller board.
Communication between the joystick controller and a computer is realized
over a serial communication line. The joystick's motors can be controlled
in order to impose a force on the stick itself, thus making force
reflection possible.
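To illustrate how the mouse's actuators can be addressed, the following C++
sketch composes the bit combination for the two magnets and the pin. The
bit assignment, the stub writeParallelPort(), and the LPT1 base address
mentioned in the comments are assumptions made for this sketch; the actual
driver software may differ.

  #include <cstdint>
  #include <cstdio>

  // Assumed bit layout on the parallel port data register (illustration only).
  const std::uint8_t MAGNET_LEFT  = 0x01;  // left electromagnet in the base
  const std::uint8_t MAGNET_RIGHT = 0x02;  // right electromagnet in the base
  const std::uint8_t PIN_UP       = 0x04;  // raise the pin in the left button

  // Stub: on the PC this byte would be written to the data register of the
  // parallel printer port (traditionally at I/O address 0x378 for LPT1);
  // here it is only printed.
  void writeParallelPort(std::uint8_t bits) {
      std::printf("parallel port <- 0x%02x\n", static_cast<unsigned>(bits));
  }

  // Switch both ``brake'' magnets on and raise the pin, e.g. when the
  // cursor enters a target region, and switch everything off again.
  void tactileFeedbackOn()  { writeParallelPort(MAGNET_LEFT | MAGNET_RIGHT | PIN_UP); }
  void tactileFeedbackOff() { writeParallelPort(0x00); }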
Another device, Logitech's CYBERMAN, has been purchased. It is the
cheapest device with tactile feedback on the market (< 200,- DM),
although in this case the feedback consists only of a vibration of the
device itself. For
the experiments, five devices are available at the moment: the mouse with
tactile feedback, the joystick with force feedback, the CYBERMAN, and
two 6D input devices, the SPACEMASTER and the SPACEMOUSE. An
interesting question is how the tactile feedback will be used ---
considering the hardware as well as the software --- in different
applications. Some suggestions and comments will be given in the following
paragraphs.
Obviously, the devices which are equipped with tactile feedback
capabilities realize this feedback in completely different ways. The
mouse with tactile feedback uses two electromagnets as a kind of
``brake'', i.e. if a current is applied to them, the movement of the
mouse will be more difficult for the user, depending on the current. In
addition, the pin in the left mouse button can be raised and lowered
repeatedly, causing a kind of vibration which prompts the
user to press the button. Although in principle the current of the magnets
and the frequency of the pin vibration can be controlled continuously, this
capability will usually not be used; we therefore call this kind of
feedback binary. Logitech's CYBERMAN can also only generate binary
feedback: If a special command is sent to the device, it starts to vibrate.
Again, the frequency and duration of the vibration can be controlled with
parameters, but continuous feedback is not possible.
The situation changes completely when the joystick with force
feedback is considered. Here, two servo motors control the position of the
joystick, thus allowing continuous control in the x/y-plane. When the
user pushes the stick while the servo motors drive it in the opposite
direction, the user gets the impression of force feedback, because the
movement becomes more difficult or even impossible.
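A minimal sketch of how such continuous force reflection might be commanded
is given below; the function names and the idea of sending per-axis torque
values to the microcontroller are assumptions made for illustration, not
the actual serial protocol of the joystick controller.

  #include <cstdio>

  // Stub: in the real system this command would be transmitted to the
  // joystick's microcontroller board over the serial line.
  void sendServoCommand(int axis, int torque /* assumed range -100..100 */) {
      std::printf("axis %d <- torque %d\n", axis, torque);
  }

  // Map a continuous force (fx, fy) in [-1, 1] per axis onto servo commands.
  void reflectForce(double fx, double fy) {
      auto clamp = [](double f) {
          int t = static_cast<int>(f * 100.0);
          if (t >  100) t =  100;
          if (t < -100) t = -100;
          return t;
      };
      sendServoCommand(0, clamp(fx));  // x axis
      sendServoCommand(1, clamp(fy));  // y axis
  }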
Figure 3.3: Schematic description of the Meta Device Driver (MDD)
In order to make the usage of the different devices as easy as possible, a
common ``meta device driver'' (MDD) has been developed for all tools (see
figure 3.3). The parameters sent to the devices follow a common structure,
as do the values received from them. This concept has been developed in
order to hide the specific characteristics of a device behind a common
interface (cf. figures 3.1 and 3.2). It has been realized as a C++ library
and can be linked to any application. If more devices become available, the
MDD can easily be extended.
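The interface of such a meta device driver could look roughly like the
following C++ sketch; the class and method names are our own illustration
of the concept, not the actual MDD interface.

  // Illustrative sketch of a common device interface in the spirit of the
  // MDD; all names and signatures are hypothetical.
  struct DeviceState {
      float    position[6];  // up to 6 DOF; unused axes remain zero for 2D devices
      unsigned buttons;      // button bit mask
  };

  class Device {
  public:
      virtual ~Device() {}
      virtual bool        open() = 0;   // connect to the physical device
      virtual DeviceState poll() = 0;   // read the current input values
      // Uniform feedback call: devices with only binary feedback (tactile
      // mouse, CYBERMAN) interpret any non-zero intensity as ``on'', while
      // the force joystick can use the full continuous range [0, 1].
      virtual void setFeedback(float intensity) = 0;
  };

  class TactileMouse  : public Device { /* ... */ };
  class ForceJoystick : public Device { /* ... */ };
  class Cyberman      : public Device { /* ... */ };

An application is then linked against the library and only deals with the
abstract Device interface; it does not need to know which concrete device
is attached.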
With respect to the software, several different possibilities exist to
give the user visual and/or tactile feedback. Visual feedback is used by
every window manager, e.g. the border of a window is highlighted when it
is entered by the mouse cursor. In order to study the effect of tactile
feedback, various feedback schemes have been developed. Two of them
will be described in more detail below:
Figure 3.4: A typical scene which is used for simple 2D positioning
tasks with visual and tactile feedback. The circle marks the
start position, the black object is the target, and all
grey objects are used as obstacles.
- The first scheme is used for simple objects in 2D which are divided into
targets and obstacles for the
experiments. Figure 3.4 shows a typical scene with five
obstacles and one target. Whenever the cursor enters an obstacle or the
target region, the tactile feedback is triggered.
For the mouse, the magnets and the pin (or a combination of both) may be
used. For the CYBERMAN, the vibration is switched on. For the
joystick, things get more complicated: a force function like the one
shown in figure 3.5 needs to be implemented (a sketch of such functions is
given below). In this case, the user ``feels'' some resistance when
entering the object, but once the center is approached, the cursor is
dragged towards it.
Figure 3.5: A force function which may be applied to objects in 2D space
in order to control the joystick
- The second scheme is applied to objects in 3D space which are treated
as obstacles, e.g. walls in a mobile robot collision avoidance task.
The magnets of the mouse can be used to stop further movement against an
obstacle, and the CYBERMAN's vibration can be switched on for the
same purpose. Again, the joystick has to be explicitly programmed with a
predefined, parametrized function in order to prevent the ``mobile
robot'' from being damaged. Figure 3.6 shows the principle of this
function.
Figure 3.6: A force function which may be applied to objects in 3D space
in order to control the joystick. The x-axis denotes the distance
between the cursor and the object, and the y-axis the applied force.
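To illustrate how such force functions for the joystick might be realized
in software, the following C++ sketch gives two simple, piecewise-linear
candidates; their exact shapes and all parameters are our own assumptions
and may well differ from the functions shown in figures 3.5 and 3.6.

  // Illustrative force functions for the force-feedback joystick
  // (assumed shapes, not the functions actually used in the experiments).

  // 2D target scheme: resistance in a rim zone near the object boundary,
  // then a spring-like pull towards the center.  Positive values push the
  // cursor outwards, negative values drag it towards the object center.
  double targetForce2D(double distToCenter, double objectRadius) {
      if (distToCenter >= objectRadius) return 0.0;            // outside: no force
      double rim = 0.8 * objectRadius;                         // assumed rim zone
      if (distToCenter > rim)                                  // entering: resistance
          return (distToCenter - rim) / (objectRadius - rim);  // 0..1 outwards
      return -distToCenter / rim;                              // inside: pull inwards
  }

  // 3D obstacle scheme: repelling force that is zero beyond a safety
  // distance and grows linearly to its maximum when the ``mobile robot''
  // touches the obstacle (cf. figure 3.6).
  double obstacleForce3D(double distToObstacle, double safetyDistance) {
      if (distToObstacle >= safetyDistance) return 0.0;
      return 1.0 - distToObstacle / safetyDistance;            // 0..1 repulsion
  }

Which shapes and parameters are actually most effective is precisely one of
the questions the experiments described above are intended to answer.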