Viewing dance as instrumental to music

Axel Mulder, School of Kinesiology, Simon Fraser University

This text appeared in Interface vol 4 no 2, November 1992, published by ACCAD, Ohio State University, Columbus OH.

© Copyright 1994 Axel Mulder. All rights reserved.


The previous issue of Interface addressed issues related to dance technology, such as dance notation software based on movement notation systems, possibly equipped with graphical animation of the human body. These programs provide choreographers with new ways of composing dance performances. Clearly, dance composition via computers still has a long way to go before it gains general acceptance, unlike the use of computers in music composition. It seems obvious that systems which record human movement and dance via the kind of technology used in virtual reality (e.g. bodysuits) are essential to shorten that road to acceptance, since they would free the composer from the cumbersome task of writing out dance patterns in a movement notation system. With the advent of such systems the amount of formalized knowledge of dance and movement will grow, similar to the wealth of formalized musical knowledge that has developed over the ages and which is accessible to any individual who has acquired the skills to interpret a music notation system.

Now, dance knowledge, Peg Brightman reflects, "is not passed on through libraries or on the walls of museums, but in the studio, from person to person, through physical bodies in real time, primarily by imitation. This kinetic process takes place in the dance studio between teacher and student, choreographer and dancer, enhanced by verbal descriptions [..] Dancers learn primarily from each other, and their first-hand knowledge is in their muscles - sensory-motor and affective [..]" [1]. Musical knowledge, however, is passed on mainly through libraries, although some knowledge is passed on from teacher to student or from musician to musician. To perform a piece, musicians usually study and interpret a score, in many cases even when the composer is still alive. Apparently it has been easier to formalize music than dance.

To explain this difference, let's first state that in dance, expression is transferred visually, as continuously changing shape, where the body is the instrument of the dancer (who is in turn the "instrument" of the choreographer), while in music, expression is transferred (primarily) acoustically, as continuously changing sound, where the musical instrument is the instrument of the musician (who is in turn the "instrument" of the composer). Each part of the chain contributes to the final result that is presented to an audience. Furthermore, let's define knowledge not only as experiences that are electrically represented in the nervous system, but also to include, as a form of knowledge, the topologies of neural systems, the physiology and physical form of the body and its parts, and so on.

Having said this, one explanation could be that much dance knowledge is stored in the muscles and low-level nervous systems of the dancer, in addition to knowledge that is stored in the brain, perhaps mostly originating from representations of visual information. It is the latter knowledge that is easiest to access and formalize. The multidimensionality of dance and the considerable "cognitive distance" of much dance knowledge (meaning that the knowledge is difficult to access due to its hierarchical distance from our cognitive processes) make this knowledge difficult to pass on in a formalized way. Conversely, music knowledge originates first of all in the auditory system, of which the neural part, where the representation of acoustical information is created, can be deemed part of the brain. Our body only serves as an instrument to interact with nature in order to create sounds. So music knowledge is stored in the brain and processed mainly as a cognitive action, in music composition, but largely also in music performance. This has become even more so with the rise of instrumental music, which facilitated abstraction and categorization of sound and musical action. The formalization of music has always closely followed mathematics: many systems have been constructed that generate music deemed artistically interesting on the basis of one formal system or another. It may be that this difference between music and dance knowledge is due to the fact that music is dominated by the dimension of pitch and by the discretization of the timbral space through the development of musical instruments. In effect, the physical constraints of acoustical sound generation have simplified and hidden the multidimensionality of timbral space.

Music performance involves more muscle-memory than music composition. In singing and improvisation (e.g. jazz), muscle-memory is more than just instrumental to the cognitive processes in the human mind that create; in these cases it at the very least contributes to the creative process by generating new patterns. So one might say that the place where dance and music meet in real time, more than they do in the brain as cognitively processed visual and acoustical experiences, is in the muscles of the human body. As yet very little of the knowledge stored in these muscles has been formalized. Movement notation and analysis systems are comparatively recent, and I have not yet heard of mathematically based systems that generate artistically interesting choreographies. But this may change as electronic technology becomes more common in dance.


Muscle-memory music

An approach to exposing this knowledge is to extend the notion of improvisation in music by designing musical instruments that allow for further control of sound and musical structure [2][3]. Traditionally, musical instruments, largely due to physical limitations, enabled the musician to access only a limited number of parameters controlling the creation of sound and music. Nowadays, with digital signal processors, possibly linked together in parallel, realtime access to the many dimensions of the space of sounds and music is possible, in principle. Such a wealth of degrees of freedom does impose demands on the design of the musical instrument [4]. But it seems more than obvious that the multidimensionality of dance is an ideal generator of control values for the many dimensions of sound and music. From the scientific point of view such a mapping is a "visualization" (audification or sonification) of human movement and dance. From the artistic viewpoint it may be designated a "transmodal" artform, since it explores the realms of ancient knowledge buried in the structures of the muscles and low-level cognitive processes, communicated as sounds and music to the audience [5]. Many musicians might miss the physical interaction with, and force feedback from, the musical instrument [6]. I would suggest that they consider the body as a virtual musical instrument that indirectly, via computers, generates sound, although it may take a while to incorporate this view. Physical interaction with your own body ... seems quite incestuous though. At any rate, let's speculate that in the future it will be possible to program a virtual physical interaction with a designed object, so that such essentials as force feedback provide for more natural musical action.
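To make the idea of the body as a virtual musical instrument a little more concrete, here is a minimal sketch, purely my own illustration: the joint names, parameter names and ranges are assumptions, not part of any existing bodysuit or synthesizer interface. It simply shows one posture frame being mapped continuously onto a handful of sound control dimensions.

```python
# Illustrative sketch only: each (hypothetical) joint angle from a movement
# registration suit is mapped continuously onto one sound control dimension.

JOINT_TO_PARAM = {
    "left_elbow":  "brightness",     # a timbral dimension
    "right_elbow": "loudness",
    "spine_bend":  "pitch_bend",
    "left_knee":   "vibrato_depth",
}

def normalize(angle_deg, lo=0.0, hi=180.0):
    """Map a joint angle in degrees onto the unit interval [0, 1]."""
    return max(0.0, min(1.0, (angle_deg - lo) / (hi - lo)))

def map_posture_to_controls(joint_angles):
    """Turn one frame of posture data into continuous control values."""
    return {param: normalize(joint_angles.get(joint, 0.0))
            for joint, param in JOINT_TO_PARAM.items()}

# An example frame, as it might arrive from a bodysuit many times per second:
frame = {"left_elbow": 95.0, "right_elbow": 40.0,
         "spine_bend": 10.0, "left_knee": 160.0}
print(map_posture_to_controls(frame))
```

In a real system the resulting values would be sent to a synthesizer every frame, so that every small change of posture is audible as a small change of sound.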

Several problems arise when attempting to implement the above ideas. First of all, one needs to distinguish between body movements as sound control parameters and dance patterns as generators of musical passages. Then one must seek clues as to what might constitute a meaningful mapping of movement to sound (or dance to music). Certainly models of music and dance composition are needed to create a framework; it is only recently that efforts have been undertaken in this direction [7][8]. Next, and equally important, is a sound synthesis model that is sufficiently general and continuous. Many synthesizers allow for sound programming, but in most cases the space of sounds is discretized (i.e. not continuous) into clusters or islands such as organ-like sounds, piano-like sounds, etc., so that all kinds of sounds are either inaccessible or very difficult to program [9]. Last but not least, technology may impose constraints. Movement registration technology has only just begun to develop, spurred by the surge of interest in virtual reality, but fairly good human interfaces that register gestural and postural information have already become available, though they are expensive. Sound synthesis technology has become entirely digital and therefore follows computer technology.
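The first distinction mentioned above can be sketched as two layers fed by the same posture stream: a continuous layer that modulates sound parameters on every frame (as in the previous sketch), and a discrete layer that recognizes a dance pattern and triggers a whole musical passage. The sketch below is again my own illustration; the posture labels and the crude recognition rules are stand-ins for whatever a real movement analysis system would provide.

```python
# Illustrative two-layer sketch: continuous control per frame (not repeated
# here) plus a discrete layer that triggers a passage when the recent sequence
# of posture labels matches a pre-defined reference pattern.

REFERENCE_PATTERN = ["crouch", "rise", "turn"]   # hypothetical labelled postures

def classify_posture(joint_angles):
    """Very crude posture labelling; a real system would use proper recognition."""
    if joint_angles.get("left_knee", 180.0) < 90.0:
        return "crouch"
    if joint_angles.get("spine_twist", 0.0) > 45.0:
        return "turn"
    return "rise"

def pattern_layer(posture_history):
    """Trigger a passage when recent posture labels match the reference pattern."""
    if posture_history[-len(REFERENCE_PATTERN):] == REFERENCE_PATTERN:
        return "start_passage_A"     # e.g. cue a pre-composed musical phrase
    return None

history = []
for frame in [{"left_knee": 70.0}, {"spine_twist": 10.0}, {"spine_twist": 60.0}]:
    history.append(classify_posture(frame))
    event = pattern_layer(history)
    if event:
        print("discrete layer:", event)
```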


Sumo-music

Now what can we say about an interesting mapping? Surely it must be sufficiently sophisticated. Recently developed new musical instruments such as "the video-harp" and various new MIDI controllers such as "the hands", "the sweatstick" and "the web" have basically remained instruments, adding few if any new degrees of freedom with which to control the space of sounds [10][11]. They do not focus specifically on the knowledge stored in our muscles, nor do they try to establish a closer relationship between human movement in its entirety and musical expression. This relationship is somewhat more apparent in systems that use an instrumented glove as a tool for conductors of (MIDI) orchestras [12]. An interesting experiment would be to try out a mapping such that the notion of a musical instrument disappears. As a performer, one would go about moving in a natural way, unconscious of (and unable to track) all the detailed musical consequences of one's movements, although it would be necessary to determine points of reference where certain movements (or movement patterns) can be programmed to generate or control certain musical passages or sounds. Once these reference points are defined, a world of expression can be explored! One of the ideas I have is to have two sumo wrestlers each wear a movement registration suit and instruct them to effectively cancel each other's music ...
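One purely speculative way to read that closing thought: combine the two performers' control streams with opposite signs, so that mirrored movement silences the result and only their differences are heard. The sketch below is an assumption of mine about how such a cancellation could be wired up, nothing more.

```python
# Speculative sketch: two performers' normalized control values are combined
# with opposite signs, so identical (mirrored) movement cancels to silence.

def combined_control(performer_a, performer_b):
    """Subtract one performer's control values from the other's, per parameter."""
    params = set(performer_a) | set(performer_b)
    return {p: performer_a.get(p, 0.0) - performer_b.get(p, 0.0) for p in params}

a = {"loudness": 0.8, "brightness": 0.4}
b = {"loudness": 0.8, "brightness": 0.1}
print(combined_control(a, b))   # loudness cancels, brightness does not
```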


References

[1] Brightman, P., "Choreography and Computers", Interface 4 nr 1, pp 2-4

[2] Pressing, J., "Cybernetic issues in interactive performance systems", Computer Music Journal 14 nr 1 (spring 1990) pp 12-25

[3] Appleton, J., "Live and in concert: Composer/performer views of real-time performance systems", Computer Music Journal 8 nr 1 (spring 1984) pp 48-51

[4] Emusic-l electronic mailing list discussion in July 1992 on "muscle-memory synth interfaces"

[5] Gibet, S., Marteau, P.-F., "Gestural control of sound synthesis", ICMC Glasgow 1990 proceedings pp 387-391, Computer Music Association, San Francisco CA, USA, 1990

[6] Cadoz, C., Luciani, A., Florens, J., "Responsive input devices and sound synthesis by simulation of instrumental mechanisms: The CORDIS system", Computer Music Journal 8 nr 3 (fall 1984)

[7] Camurri, A. et al., "Interactions between music and movement: A system for music generation from 3D animations", Proceedings of the 4th international conference on event perception and action, Trieste 1987

[8] Ungvary, T., Waters, S. and Rajka, P., "NUNTIUS: A computer system for the interactive composition and analysis of music and dance.", Royal Institute of Technology, Stockholm, Sweden, 1991.

[9] Wessel, D., "Timbre space as a musical control structure", In: Foundations of computer music, Roads, C., Strawn, J., (eds.) p 640-657, MIT Press, Cambridge MA, USA, 1987

[10] Krefeld, V., "The Hand in the Web: An interview with Michel Waisvisz", Computer Music Journal 14 nr 2 (summer 1990) pp 28-33

[11] Rubine, D., McAvinney, P., "Programmable finger tracking instrument controllers", Computer Music Journal 14 nr 1 (spring 1990) pp 26-41

[12] Morita, H., Hashimoto, S., Ohteru, S., "A computer music system that follows a human conductor" IEEE Computer July 1991 pp 44-53