
Multimodal Mobile Robot Control

 

One of the most interesting research areas in robotics is the development of fully autonomous systems. In order to achieve autonomy, mobile systems have to be equipped with sophisticated sensory systems, massive computing power, and what might be called an ``intelligent controller''. Because this is not possible with today's technology, many researchers try to incorporate the human operator into the sensing and control loop, thus replacing parts of the system where appropriate. For instance, the recognition and decision capabilities of humans are at present much better than those of a mobile robot. Although a mobile system might be equipped with a vision system, image processing and object recognition still take too much time for the robot to drive autonomously at a reasonable speed. On the other hand, the robot's local sensory systems can perceive and process data which is usually not detectable by humans, such as the reflection of ultrasound or gas molecules which might be ``smelt'' by a special detector. Thus, a teleoperation system in which parts of the controller run autonomously while others require the operator's interaction seems to be a very promising approach.

  
Figure 6.1 : A sketch and a real picture of the mobile robot system PRIAMOS

The control of a mobile vehicle (or robot) with advanced multimodal methods and sophisticated I/O devices is both very attractive and useful. Take, for example, the mobile platform PRIAMOS (shown in figure 6.1), which is currently under development at the University of Karlsruhe [188]. It is equipped with a multisensor system which includes the following:

Ultrasonic sensors
Twenty-four sensors are arranged in a ring around the robot. The system offers several operating modes, and each sensor can be addressed independently (a sketch of how a single scan might be interpreted follows this list).
Active vision system
The active stereo vision system KASTOR is mounted on top of the platform, allowing the operator to obtain a panoramic view of the remote world. Its 18 degrees of freedom can be controlled independently [357].
Structured light
Two laser diodes emit structured light which is especially easy for a third camera to ``see'' due to the filters applied. The intersection of the two laser lines allows easy detection of obstacles on the floor.
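To illustrate how such a ring of range sensors might be used, the following minimal sketch interprets a single (simulated) scan of 24 readings and reports the nearest echo together with its bearing. The SonarScan structure, the function nearest_obstacle and the sample values are invented for illustration; they are not part of the actual PRIAMOS interface.

/* Sketch: interpreting one scan of the 24-sensor ultrasonic ring
 * (hypothetical data structure, not the PRIAMOS interface).      */
#include <stdio.h>

#define NUM_SONARS 24            /* one sensor every 15 degrees */

typedef struct {
    double range_m[NUM_SONARS];  /* measured distance per sensor, metres */
} SonarScan;

/* Return the index of the closest echo and store its bearing
 * (degrees, 0 = straight ahead, counter-clockwise positive). */
static int nearest_obstacle(const SonarScan *scan, double *bearing_deg)
{
    int best = 0;
    for (int i = 1; i < NUM_SONARS; i++)
        if (scan->range_m[i] < scan->range_m[best])
            best = i;
    *bearing_deg = best * (360.0 / NUM_SONARS);
    return best;
}

int main(void)
{
    SonarScan scan;
    for (int i = 0; i < NUM_SONARS; i++)
        scan.range_m[i] = 4.0;   /* assume free space ...            */
    scan.range_m[6] = 0.35;      /* ... except an obstacle at 90 deg */

    double bearing;
    int idx = nearest_obstacle(&scan, &bearing);
    printf("closest echo: sensor %d, %.2f m at %.0f deg\n",
           idx, scan.range_m[idx], bearing);
    return 0;
}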

  
Figure 6.2 : The controller configuration of PRIAMOS (taken from [188])

The control concept of PRIAMOS, which is shown in figure 6.2, follows the ``shared autonomy'' approach (see [142]), i.e. the robot receives its programs and data from a supervisory station and carries out its tasks autonomously. If an unforeseen event occurs, the robot asks the human operator for assistance. In addition, the operator can take over control at any time. Supervision of the robot's tasks is performed with the sensory data sent to the operator and a simulation model which runs in parallel to the real execution.
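The shared-autonomy cycle described above can be sketched as a small state machine: the robot executes its task autonomously, switches to a waiting state when an unforeseen event occurs, and yields whenever the operator takes over. The following sketch is purely illustrative; the modes, stub functions and timing are invented and do not represent the PRIAMOS controller API.

/* Sketch of the shared-autonomy control cycle (illustrative stubs). */
#include <stdbool.h>
#include <stdio.h>

typedef enum { AUTONOMOUS, AWAIT_OPERATOR, TELEOPERATED } Mode;

/* Stand-ins for the real sensor/operator interfaces. */
static bool unforeseen_event(int step)    { return step == 2; }
static bool operator_takes_over(int step) { return step == 3; }
static bool operator_released(int step)   { return step == 5; }

int main(void)
{
    Mode mode = AUTONOMOUS;

    for (int step = 0; step < 7; step++) {
        /* Supervision: sensor data and the parallel simulation model
         * would be updated here (not shown).                         */

        if (operator_takes_over(step))   /* operator may intervene */
            mode = TELEOPERATED;         /* at any time            */

        switch (mode) {
        case AUTONOMOUS:
            if (unforeseen_event(step)) {
                mode = AWAIT_OPERATOR;   /* ask the human for help */
                printf("step %d: asking operator for assistance\n", step);
            } else {
                printf("step %d: executing task autonomously\n", step);
            }
            break;
        case AWAIT_OPERATOR:
            printf("step %d: waiting for operator decision\n", step);
            break;
        case TELEOPERATED:
            printf("step %d: applying operator command\n", step);
            if (operator_released(step))
                mode = AUTONOMOUS;
            break;
        }
    }
    return 0;
}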

The software and hardware architecture of PRIAMOS is well suited to being extended with multimodal functionality. One of several possible system configurations is described in the following scenario (which is only fictitious at the moment):

``The mobile robot platform operates in one of two different modes. In Autonomous Mode, the robot uses an internal map and its sensory system to navigate without any help from the operator, who only supervises the movements. In Supervisory Mode, the operator remotely controls the robot. For this task he uses a joystick which not only controls all degrees of freedom (x- and y-translation, z-rotation) but also reflects several characteristics of the robot's environment.

Therefore, the joystick has been equipped with force feedback capabilities. Whenever the robot approaches an obstacle which is sensed by the ultrasonic sensors, the operator is able to feel it once the distance falls below a certain threshold. Thus, collisions can be avoided without loading the operator's ``vision system'' with additional information.

This is especially necessary because the operator is already burdened with the images of the active stereo vision system KASTOR. Each camera sends its images independently to one of the two displays in the Head-Mounted Display (HMD) that the operator is wearing; thus, he gets a full 3D impression of the remote world. The camera head can also operate in autonomous or supervisory mode. In the latter, the operator uses his second hand to move the cameras in the desired direction. By fully controlling all 18 degrees of freedom with a kind of master-slave manipulator or a 6D mouse, the operator is able to focus on any object in the cameras' field of view.

When using the helmet, the operator is no longer able to use a keyboard for additional command input. This is no drawback, because all commands can simply be spoken, as the control system is equipped with a natural language processing (NLP) unit. Although it can recognize only keywords and values, it is comfortable and easy to use because it is a speaker-independent system which does not have to be trained. Therefore, simple commands like ``stop'' or ``faster'' can be executed in a very intuitive way.

Instead of the force feedback described above, another way to inform the operator about nearby obstacles is to convert the signals of the ultrasonic sensors into acoustic information. For instance, the reflection of a wall detected by the sensors can be recreated artificially by a sound generator, giving the operator the impression of ``walking'' through the remote environment. Of course, alarm signals are transmitted to him as acoustic stimuli, too.''
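Both the force feedback and the acoustic display mentioned in the scenario amount to a mapping from the nearest ultrasonic distance to a feedback intensity. The following sketch shows one such mapping, assuming a simple linear ramp below a threshold; the threshold and scaling values are invented for illustration and do not refer to an existing implementation.

/* Sketch: mapping an ultrasonic distance to a feedback level
 * (force on the joystick or loudness of a synthetic sound).  */
#include <stdio.h>

#define FEEDBACK_THRESHOLD_M 1.0  /* start feedback below this distance */
#define MAX_FEEDBACK         1.0  /* normalised maximum force/loudness  */

/* Map a measured distance (metres) to a feedback level in [0, 1]:
 * 0 beyond the threshold, rising linearly to 1 at contact.        */
static double feedback_level(double distance_m)
{
    if (distance_m >= FEEDBACK_THRESHOLD_M)
        return 0.0;
    if (distance_m <= 0.0)
        return MAX_FEEDBACK;
    return MAX_FEEDBACK * (1.0 - distance_m / FEEDBACK_THRESHOLD_M);
}

int main(void)
{
    const double samples[] = { 2.0, 1.0, 0.6, 0.25, 0.0 };
    for (int i = 0; i < 5; i++)
        printf("distance %.2f m -> feedback %.2f\n",
               samples[i], feedback_level(samples[i]));
    return 0;
}

A nonlinear (e.g. quadratic) ramp would make very close obstacles feel more urgent; the linear version is only the simplest possible choice.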

The scenario described above has not been realized yet, but it is based exclusively on techniques which either already exist or are under development. The most difficult problem will be the complexity of the system, which also complicates the integration of all the separate system components. We are convinced that the techniques and methods which are going to be developed within this project will contribute significantly to this integration process.
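As a small example of the kind of integration involved, the following sketch maps spoken keyword commands such as ``stop'' or ``faster'' onto the commanded speed of the platform, assuming the recognizer simply delivers each keyword as a string. The keyword set, the initial speed and the scaling factors are purely illustrative.

/* Sketch: dispatching recognised spoken keywords to the motion
 * controller (keyword set and factors invented for illustration). */
#include <stdio.h>
#include <string.h>

static double speed = 0.5;   /* commanded speed, m/s */

static void dispatch_keyword(const char *word)
{
    if (strcmp(word, "stop") == 0)
        speed = 0.0;
    else if (strcmp(word, "faster") == 0)
        speed *= 1.25;
    else if (strcmp(word, "slower") == 0)
        speed *= 0.8;
    else
        printf("unrecognised keyword: %s\n", word);
}

int main(void)
{
    const char *utterances[] = { "faster", "faster", "slower", "stop" };
    for (int i = 0; i < 4; i++) {
        dispatch_keyword(utterances[i]);
        printf("after \"%s\": speed = %.2f m/s\n", utterances[i], speed);
    }
    return 0;
}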







Esprit Project 8579/MIAMI (Schomaker et al., '95)