In the context of theorizing about multimodal man-machine interaction, it must be pointed out that phenomena of motor control in the human output channels (HOC) at the physical/physiological level cannot be disconnected from the higher, cognitive and intentional levels. Control problems that at first appear intractable, such as the degrees-of-freedom problem (inverse kinematics and kinetics), may become quite tractable once task-level constraints are taken into consideration. A simple example is a planar, linearly linked arm with 3 df, for which the mapping from an end-effector position (x, y) to joint angles is an ill-posed problem, unless the approach angle of the end effector (the last link) toward the object is also known for a specific grasp task.
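The effect of such a task-level constraint can be made concrete in a minimal sketch (link lengths and the target pose below are illustrative assumptions, not taken from the text): once the approach angle of the last link is fixed, the wrist point is determined and the remaining two-link problem has a closed-form solution.

```python
import math

# Hypothetical link lengths for a planar 3-df arm (assumptions for illustration).
L1, L2, L3 = 0.30, 0.25, 0.10  # metres

def ik_3dof_planar(x, y, phi):
    """Inverse kinematics for a planar 3-df arm.

    With only the target (x, y) given, the problem is ill-posed (one redundant
    degree of freedom). Fixing the approach angle phi of the last link makes it
    well-posed: the wrist point is then determined, and the remaining two-link
    problem has a closed-form solution (elbow-down branch chosen here).
    """
    # Task constraint: the approach angle fixes the wrist point.
    wx = x - L3 * math.cos(phi)
    wy = y - L3 * math.sin(phi)
    # Standard two-link inverse kinematics to the wrist point.
    c2 = (wx * wx + wy * wy - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("wrist point out of reach")
    t2 = math.acos(c2)  # elbow-down solution
    t1 = math.atan2(wy, wx) - math.atan2(L2 * math.sin(t2),
                                         L1 + L2 * math.cos(t2))
    t3 = phi - t1 - t2  # last joint realizes the required approach angle
    return t1, t2, t3

def fk(t1, t2, t3):
    """Forward kinematics, to verify a solution."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2) + L3 * math.cos(t1 + t2 + t3)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2) + L3 * math.sin(t1 + t2 + t3)
    return x, y
```

Running the forward kinematics on the returned joint angles reproduces the target position, confirming that the task constraint has turned the redundant mapping into a solvable one.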
The second observation concerns the consequences of multi-channel HOC use. What happens when more than one human output channel is employed in human-computer interaction? Two hypotheses can be compared:
Hypothesis (1): The combination of different human output channels is functionally attractive because it effectively increases the bandwidth of the human-machine channel. An example is bimanual teleoperation combined with speech control. A well-known case is typing on a keyboard, where key sequences that alternate between the left and right hand are produced about 25% faster (roughly 50 ms per keystroke) than within-hand sequences. The advantage is thought to arise from the independent, parallel control of the left and right hand by the right and left hemispheres, respectively. It may be hypothesized that other forms of multimodality profit in a similar way, provided that the neural control of the modalities involved is sufficiently independent.
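The alternating-hand advantage can be sketched with a simple two-processor timing model. This is a schematic illustration, not a model from the text; the 50 ms and 150 ms stage durations are assumptions chosen so that the within-hand interval is 200 ms and the between-hand saving matches the cited ~25% (50 ms).

```python
# Illustrative two-processor timing model of keystroke production.
# Assumed stage durations (not from the text): central preparation
# and peripheral execution of one keystroke.
PREP, EXEC = 50, 150  # ms

def total_time(sequence):
    """Total time to produce a sequence of hand assignments, e.g. "LRLR".

    Each keystroke needs PREP followed by EXEC. Preparation of the next
    keystroke can overlap with the current stroke's execution only if it
    uses the other hand (the other hemisphere); a within-hand successor
    must wait until the current execution has finished.
    """
    t = 0
    prev = None
    for hand in sequence:
        if prev is None:
            t = PREP + EXEC                 # first keystroke: full cost
        elif hand != prev:
            t += EXEC                       # prep ran in parallel: save PREP
        else:
            t += PREP + EXEC                # same hand: strictly serial
        prev = hand
    return t
```

Under these assumptions, four within-hand keystrokes ("LLLL") take 800 ms (200 ms inter-key interval), while the alternating sequence "LRLR" takes 650 ms (150 ms interval): the between-hand advantage of 50 ms, or 25%, falls out of the parallel preparation alone.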
Hypothesis (2): The alternative hypothesis holds that adding an extra output modality requires additional neurocomputational resources and will therefore degrade output quality, resulting in a reduced effective bandwidth. Two types of effects are commonly observed: (a) a slowing down of all output processes, and (b) interference errors, because selective attention cannot be divided over the increased number of output channels. Examples are writing errors due to phonemic interference when speaking at the same time, or the difficulty people may have in combining a complex motor task with speech, such as driving a car or playing a musical instrument while talking. This type of reasoning is typical of cognition-oriented models of motor control and may provide useful guidelines for experimentation.
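The slowing predicted by Hypothesis (2) can be illustrated with a minimal fixed-capacity sketch. All numbers here are assumptions for illustration: a central capacity is divided over the active channels, and an additional interference cost grows with the number of channels, so the aggregate output rate falls as channels are added.

```python
# Minimal capacity-sharing sketch of Hypothesis (2).
# `capacity` and `interference` are illustrative assumptions, not data.
def effective_bandwidth(n_channels, capacity=100.0, interference=0.15):
    """Aggregate output rate with n concurrently active output channels.

    The central capacity is split evenly over the channels; each channel
    additionally loses a fraction proportional to (n - 1) through
    inter-channel interference. Without that interference term the
    aggregate would be constant; with it, adding channels reduces the
    effective total bandwidth, as Hypothesis (2) predicts.
    """
    per_channel = capacity / n_channels
    per_channel *= max(0.0, 1.0 - interference * (n_channels - 1))
    return n_channels * per_channel
```

With these assumed parameters the aggregate rate drops from 100 (one channel) to 85 (two) to 70 (three): every individual channel slows down, and the total falls as well, which is exactly the pattern that distinguishes Hypothesis (2) from Hypothesis (1).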
At the physical level, however, the combination of output channels may also result in a system that can be described as a set of coupled non-linear oscillators. In that case it may be better to use the Coordinative Structure Models to explain inter-channel interaction phenomena, rather than trying to explain them at too high, i.e. cognitive, a level.
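The characteristic signature of such coupled oscillators is phase entrainment, which can be shown with a two-oscillator phase model in the Kuramoto style. This is a generic sketch, not a model from the text; the natural frequencies and coupling strength below are illustrative assumptions.

```python
import math

# Two-oscillator phase-coupling sketch (Kuramoto-style) of two output
# channels. Parameters are illustrative assumptions, not from the text.
def simulate(w1=1.0, w2=1.3, K=0.5, dt=0.001, steps=50_000):
    """Euler-integrate two phase oscillators with symmetric coupling.

    For coupling K > |w1 - w2| / 2 the phase difference locks to a
    constant value: the two "channels" entrain to a common rhythm
    instead of running independently, which is the kind of inter-channel
    interaction a Coordinative Structure account addresses directly.
    """
    th1, th2 = 0.0, 1.0  # arbitrary initial phases
    for _ in range(steps):
        d = th2 - th1
        th1 += dt * (w1 + K * math.sin(d))
        th2 += dt * (w2 - K * math.sin(d))
    # return the wrapped phase difference after settling
    return math.atan2(math.sin(th2 - th1), math.cos(th2 - th1))
```

With these parameters the phase difference settles at asin((w2 - w1) / (2K)) = asin(0.3), the stable fixed point of the phase-difference equation, rather than drifting, even though the two oscillators have different natural frequencies.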
Finally, the natural target area of Cybernetical, Closed-loop Theories will be those processes in multimodal motor control that can be classified as tracking behaviour or continuous control (as opposed to discrete selection tasks).
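The closed-loop view of such tracking behaviour can be sketched as an error-driven feedback loop: the operator is modelled as a controller that continuously corrects a fraction of the perceived tracking error. The proportional gain below is an illustrative assumption; real cybernetic operator models also include lags and delays.

```python
# Minimal closed-loop tracking sketch in the cybernetic spirit.
# The gain is an illustrative assumption, not an empirical value.
def track(target, gain=0.3):
    """Follow a sampled target signal with proportional error correction.

    At each time step the "operator" perceives the tracking error and
    responds with a correction proportional to it; the output therefore
    continuously chases the target, the defining feature of tracking
    (as opposed to discrete selection) tasks.
    """
    out, trace = 0.0, []
    for ref in target:
        error = ref - out     # perceived tracking error
        out += gain * error   # proportional corrective response
        trace.append(out)
    return trace
```

For a constant target the output converges exponentially toward it (the residual error shrinks by a factor 1 - gain per step), which is the closed-loop behaviour these theories are built to describe.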