1 Synopsis
The objective of the current study is to apply technologies from the field of emerging media to reverse the traditional flow between music and dance in tango performance; that is, to subordinate music to body movement.
In the most common form of tango as a performative dance, dancers interpret with their bodies pre-recorded audio of popular tango songs, the music remaining indifferent to and unaltered by the performative act of body movement. This lack of interactivity and responsiveness between dance and music extends to situations in which the music is played live, with the musicians' performative act occurring almost decoupled from the dancers' performance. Consequently, the dancers' bodies are restricted to the 'interpretative dimension' of tango without establishing a true dialogue with the musical source.
In the work presented here, the musical source is subordinated to body movement as a metaphorical means to question the essence of dance. That is, the playback of a particular tango song is altered by the sequence of body movements performed by a tango couple.
Because many of the tango songs we dance to today were recorded during the first half of the twentieth century, the past is continually re-cast in the present with each performative act of dancing. In the context of this work, altering the flow of music playback as a consequence of body movement implies the notion of dancing in the present as a mechanism to control how the past is re-cast. This notion is extended to pre-recorded video altered in real time by body movement. All of this was achieved by integrating computer vision, programming, networked communication, and image and sound processing technologies.
2 Ideation & implementation of the technical system
2.1 Image to sound
At the core of the technical system is the mapping of live image capture and accelerometer data to control sound playback and video attributes, respectively (Figure 1).
The tango dancers' performance is captured via the 'jit.qt.grab' object in Max-MSP-Jitter; the live image is translated to monochrome via the 'jit.rgb2luma' object, and the resulting image is subdivided into 9 sections with the 'jit.scissors' object. Changes in the mean luminosity of each image section (as the dancers move through space) are calculated with the 'jit.3m' object and mapped to 9 different segments of the tango song being danced (Figure 2). The rendition of the tango 'El Flete' by Francisco Canaro's orchestra is artificially subdivided into 9 segments, with playback of each segment assigned to changes in the mean luminosity of its image section through the 'change' and 'sfplay~' objects in conjunction with 'seek' messages.
Figure 1. Sketch displaying the overall concept for the technical system, which captures live images of a tango performance as a means to control the playback of sound and video recordings. The dance performance is captured through a laptop's camera and the resulting image is subdivided into 9 sections. Changes in the luminosity of each image section are mapped to playback and volume attributes of 9 corresponding audio segments of a single tango song. Accelerometer data from the iPhone app TouchOSC is mapped to control visual attributes of a pre-recorded video (of the same tango couple dancing to the same song in a different place from the live performance) that is projected on a screen. The iPhone is taped to the right leg of the female dancer and sends data to Max-MSP-Jitter over a networked connection via the OSC protocol. The pre-recorded videos are subjected to image processing algorithms written in the Processing programming language, then processed live in Max-MSP-Jitter and altered with streaming accelerometer data from the movement of the female dancer's leg.
Figure 2. Max-MSP-Jitter patch created to capture live images and translate changes in luminosity (produced by the dancers' movement) into control of sound playback: which segment of the song is played, and at what volume. The sonic output of this patch is conceived with a white surface as the background image and dancers dressed in black. When the luminosity of an image section falls below a given threshold, playback of the corresponding audio segment is triggered and its volume changes. This code was inspired by, derived, and remixed from Matt Romein's code examples and V.J. Manzo's book Max/MSP/Jitter for Music.
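The Max patch itself is visual and cannot be reproduced as text, but the same image-to-sound mapping can be sketched in Processing, the other tool used in this project. The sketch below is a minimal approximation of the logic in Figure 2, not the patch used in performance; the segment file names ('seg0.wav' through 'seg8.wav') and the threshold value are assumptions for illustration, and looping segments with amplitude control stands in for the 'seek'-based triggering of the actual patch.

```java
// Minimal Processing sketch approximating the Max patch in Figure 2:
// the camera image is split into a 3x3 grid, and when the mean
// brightness of a cell drops below a threshold (dancers in dark
// clothes against a white background), the corresponding audio
// segment is brought up; otherwise it is muted.
import processing.video.*;
import processing.sound.*;

Capture cam;
SoundFile[] segments = new SoundFile[9];
float threshold = 90;  // assumed luminosity threshold (0-255)

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  for (int i = 0; i < 9; i++) {
    // 'seg0.wav' ... 'seg8.wav' are hypothetical files, one per
    // artificially cut segment of 'El Flete'
    segments[i] = new SoundFile(this, "seg" + i + ".wav");
    segments[i].loop();
    segments[i].amp(0);  // start silent
  }
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();
  int cw = width / 3, ch = height / 3;
  for (int sy = 0; sy < 3; sy++) {
    for (int sx = 0; sx < 3; sx++) {
      // mean brightness of one grid cell, analogous to
      // jit.rgb2luma followed by jit.3m on a jit.scissors section
      float sum = 0;
      for (int y = sy * ch; y < (sy + 1) * ch; y++) {
        for (int x = sx * cw; x < (sx + 1) * cw; x++) {
          sum += brightness(cam.pixels[y * width + x]);
        }
      }
      float mean = sum / (cw * ch);
      int i = sy * 3 + sx;
      // darker cell (dancer present) -> audible segment
      segments[i].amp(mean < threshold ? 1 : 0);
    }
  }
}
```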
2.2 Projected visuals from image processing of pre-recorded video footage
The visuals projected on the screen during the performance are based on two short videos of my wife and me dancing at a popular milonga in NYC (Tango Cafe), recorded with my own iPhone camera. These videos were imported into Adobe Premiere and recombined into a single video with the sound removed (removing the sound facilitates live image processing in Max-MSP-Jitter during the performance). This video was then transformed with image processing algorithms written in Processing, yielding two versions of the original video, each with a distinctive visual aesthetic (Videos 1 and 2). These videos were used as input to a patch created in Max-MSP-Jitter and were altered in real time with accelerometer data from the iPhone's TouchOSC app (Video 3).
Video 1. The recombined video footage was subjected to image processing algorithms in Processing and screen-recorded with QuickTime Player. Aspects of the image processing are audio-reactive, based on Processing's sound library. The audio file used was 'El Flete', the same tango to be danced live during the performance.
Video 2. The recombined video footage was subjected to a slightly different image processing algorithm in Processing and screen-recorded with QuickTime Player. Aspects of the image processing are also audio-reactive, based on Processing's sound library. The audio file used was again 'El Flete'.
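The exact sketches behind Videos 1 and 2 are not reproduced here, but the general audio-reactive pattern they rely on, Processing's sound library driving a visual parameter from the song's loudness, can be sketched as follows. The file names 'recombined.mp4' and 'elflete.mp3' and the tint-based effect are illustrative assumptions, not the actual algorithms used.

```java
// Minimal sketch of the audio-reactive pattern behind Videos 1 and 2:
// Processing's sound library analyzes the amplitude of the tango song
// while a video plays, and the current loudness drives a visual parameter.
import processing.video.*;
import processing.sound.*;

Movie video;
SoundFile song;
Amplitude amp;

void setup() {
  size(640, 480);
  video = new Movie(this, "recombined.mp4");  // hypothetical recombined footage
  video.loop();
  song = new SoundFile(this, "elflete.mp3");  // hypothetical file of 'El Flete'
  song.loop();
  amp = new Amplitude(this);
  amp.input(song);  // analyze the song's loudness in real time
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  float level = amp.analyze();  // 0.0 (silence) up to ~1.0 (loud)
  background(0);
  // loudness drives the tint: quiet passages darken the footage,
  // rhythmic accents flash it toward full brightness
  tint(255, 255 * constrain(level * 4, 0, 1));
  image(video, 0, 0, width, height);
}
```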
Video 3. Live image processing of Videos 1 and 2 in Max-MSP-Jitter, reacting to incoming accelerometer data from my iPhone running the TouchOSC app. As I move the phone around, the blending of Videos 1 and 2 changes, producing an interesting visual outcome with a distinctive aesthetic.
The 'jit.op' and 'jit.lumakey' objects were used to integrate and blend Videos 1 and 2 during the live performance (Figure 3). Parameters of 'jit.lumakey' are altered in real time by incoming accelerometer data (changes in acceleration along the x, y, and z axes over time) from the iPhone taped to the female dancer's right leg, so that the visuals respond to body movement during the dance. The accelerometer data is received over WiFi with the 'udpreceive' object and split into separate value streams using the 'route' and 'unpack' objects.
Figure 3. Max-MSP-Jitter patch created to blend the video footage and alter it in response to accelerometer data from the TouchOSC app installed on an iPhone. This code was inspired by, derived, and remixed from Matt Romein's code examples.
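Although the blending happens in Max-MSP-Jitter ('udpreceive' listening for OSC, 'route' and 'unpack' splitting the message into x, y, z values that feed 'jit.lumakey'), the same receive-and-map step can be sketched in Processing with the oscP5 library. In the sketch below, the port number and the '/accxyz' address pattern (TouchOSC's conventional accelerometer message) are assumptions, and a simple cross-fade stands in for the luma-key blend of the actual patch.

```java
// Minimal sketch of the OSC receive-and-map step: accelerometer data
// streamed from TouchOSC on the dancer's iPhone drives the blend
// between two pre-processed videos (a cross-fade standing in for
// the jit.lumakey blend used in the actual Max patch).
import processing.video.*;
import oscP5.*;

Movie videoA, videoB;
OscP5 osc;
float blend = 0.5;  // 0 = only video A, 1 = only video B

void setup() {
  size(640, 480);
  videoA = new Movie(this, "video1.mp4");  // hypothetical processed footage
  videoB = new Movie(this, "video2.mp4");
  videoA.loop();
  videoB.loop();
  osc = new OscP5(this, 8000);  // assumed port; must match TouchOSC's outgoing port
}

void oscEvent(OscMessage msg) {
  // '/accxyz' carries three floats: acceleration along x, y, z
  if (msg.checkAddrPattern("/accxyz")) {
    float x = msg.get(0).floatValue();
    // map x-axis acceleration (roughly -1g..1g) to a 0..1 blend amount
    blend = constrain(map(x, -1, 1, 0, 1), 0, 1);
  }
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  tint(255, 255 * (1 - blend));  // video A fades out as blend rises
  image(videoA, 0, 0, width, height);
  tint(255, 255 * blend);        // video B fades in
  image(videoB, 0, 0, width, height);
}
```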
The resulting visual to be projected on a big screen during the performance is shown in Video 4. Building on this output, I also experimented with substituting for Video 2 a previously recorded video of my wife and me dancing (also subjected to an image processing algorithm in Processing) in the Max-MSP-Jitter patch, which yielded an equally interesting visual outcome (Video 5). Either of these videos could be projected on big screens during a live performance. The combination of Processing and Max-MSP-Jitter within a single pipeline is thus well worth exploring.
Video 4. Visual outcome of the patch shown in Figure 3, derived from Videos 1 and 2.
Video 5. Visual outcome of the patch shown in Figure 3, derived from Video 1 in conjunction with a previously recorded video of my wife and me dancing tango at home (data not shown).
2.3 Body movement recombines segments of a single tango song into a new sound composition
The tango song selected for the current study is a rendition of 'El Flete' by Francisco Canaro's orchestra, recorded in 1939. It is an instrumental version with a very rhythmic style, which makes it very stimulating to dance to (Audio 1).
Audio 1. Three different renditions of the tango 'El Flete', by the Canaro, D'Arienzo, and Varela orchestras. Although any of the three could be played and remixed during the performance, the rendition by Canaro was selected (track 1).
When changes in luminosity owing to body movement in front of the camera were mapped to the playback of audio sections within the same song, a chaotic sound composition emerged, and the dance became a mechanism for exploring the available sonic space: finding the right spot in front of the camera to 'preserve' the section of the song currently playing, without triggering other sections whose superimposed sound made it hard to tell which musical segment to follow and interpret (Audio 2). The constraints and affordances of the system definitely shaped the way we danced and forced us to develop a unique style suited to the task.
Audio 2. Sound composition recorded from body movement during a dance performance in front of the camera, with each song segment played in response to changes in luminosity across the 9 sections of the captured live image.
2.4 Practicing the performance with the whole system in place
It was important to practice the performance to ensure the whole system was properly set up and functioning under the conditions envisioned in Figure 1. The visuals were projected on a whiteboard for quick prototyping while I danced with female classmates in front of the laptop's camera, testing the resulting sound composition and the real-time alteration of the projected visuals via accelerometer data from the TouchOSC app on an iPhone placed on the female dancer's leg (Figure 4). This practice period was extremely valuable for working out the setup needed for a successful performance, especially the placement of the iPhone on the dancer's leg and the resulting sound output.
Figure 4. Still video image from the practice period at ITP-NYU. The laptop's camera registers the dancers' body movement and translates changes in luminosity into sound (see the speaker placed on the table), while the iPhone on the woman's leg (not shown) alters the visuals projected on the whiteboard as the dance progresses. Special thanks to Asha Veeraswamy and Daniella Garcia Rosales for their help.
2.5 Creation of an interface in Max-MSP-Jitter to facilitate performance
The patches shown in Figures 2 and 3 were combined into a single patch, and an interface was created to control the flow of the program during the performance. This greatly facilitated identifying the parts of the program that needed input to initialize the system (Figure 5).
Figure 5. Interface to control the flow of the program in Max-MSP-Jitter during the performance. This patch is a combination of the patches shown in Figures 2 and 3. The code was based on, inspired by, and remixed from code examples given in class by Matt Romein.
3 Class performance as proof-of-concept
The proof-of-concept for this work was realized in class (Live Image Processing & Performance, taught by Matt Romein at ITP, New York University) as a techno-tango performance piece on Monday, February 26, 2018 (Video 6). The opportunity to perform in front of 22 other people gave me a sense of how to prepare and set up the system previously described, with lighting conditions and camera placement being of paramount importance. Because the piece was conceived to be danced against a white background, and because the volume of the resulting sound composition depended on the movement of dancers dressed in blue or black clothes, the piece required appropriate physical and spatial conditions.
Because I was performing the dance together with my wife, I had to initiate the visuals a few moments before starting to dance. Under ideal performance conditions, an additional person would be in charge of starting the visuals at the right moment of the dance, and likewise of ending them with the end of the performance.
Setting the proper sound volume for the piece was challenging, since volume levels changed with the dance movements, and I felt there was room for improvement in this aspect of the performance.
Since the audio segments from 'El Flete' were continually looping, the dance performance had no pre-defined end; it was left to the dancers to decide when to finish the piece, rather than ending strictly when the music ends. This aspect of the piece was strikingly different from traditional tango dance performances and emerged as an unintended consequence of the body controlling the music: as long as there was body movement, there was music!
Taking the above into consideration, it became clear that the piece has two beginnings and two endings: the technical beginning and ending (initializing and shutting down the system in charge of capturing movement and producing sound and visuals), and the beginning and ending of the performative act of dance.
Video 6. Excerpt from the class demo of the techno-tango piece described in this study. My dance partner had an iPhone placed on her ankle that sent data via networked communication to alter aesthetic elements of the visuals projected on the wall. As we danced in front of the laptop's camera against a white wall, changes in luminosity from our bodies in motion (dressed in dark colors) altered the playback of audio segments derived from the tango 'El Flete'.
4 Discussion
4.1 Surfing sonic spaces through body movement
One of the most interesting outcomes of this piece was how the constraints of the technological system shaped the dance style we had to adopt to realize the concept. Dancing became a tool to seek out the segments of the audio file we wanted to hear in order to dance to them. By traversing the physical space in front of the laptop's camera, we changed which audio segments played as well as the volume at which they played. Once we found the point in space corresponding to a desired audio segment, the only way to hear the unmodified audio was to move extremely slowly or to stay still. As soon as we moved more emphatically, the other audio segments began to play, producing a chaotic sound composition. This gave us an extra layer of awareness of which audio segment we were dancing to at any given moment. In other words, as soon as we put energy (body movement) into the system, we changed it; only by staying still could we appreciate the music in its original state. Interestingly, staying still ('la pausa' in Spanish) is also considered part of dancing in tango.
We will continue exploring our dance within the constraints and affordances of the system we built, not only to perform but also as a means to advance our style of social tango dancing.
4.2 Body movement as recombination mechanism to generate audio and visual compositions
The performative act of dancing became a generative force that created sonic and visual compositions as a result of this piece. Different dancers would therefore produce different compositions (sharing the same aesthetics) unique to their performative style, making this system a framework for other artists to try. In the particular case of the audio, the resulting sonic composition became the recorded history of our bodies moving in time and through space. With regard to the projected visuals, the unique composition created by my partner's leg movement during the performance achieved the status of generative art (Figure 6); it could well be said that she was painting with her leg.
Figure 6. Stills from video footage resulting from visual alterations produced by my dance partner's leg movement. These artworks are also self-portraits of us dancing tango.
4.3 Controlling the flow of how the past is re-cast
The media elements used in this piece contain audio and video recorded in the past but re-cast in different ways through an act of performative dance conducted in the present. In this sense, technology allowed remixing of the past to construct the present in a different state. I have previously explored this aspect of tango using a Kinect sensor to paint an artwork in real time based on the movement of tango dancers and old tango pictures.
Much has been said about the state of contemporary tango music and the lack of musical innovation when it comes to social dance at milongas, where tango enthusiasts always dance to the same old songs. In this piece, remixing a song into a new composition is encouraged through dance, as a prelude to actively calling for remixing as a plausible mechanism to bring musical innovation into the tango community and onto the social dance floor. Although the aesthetics of the resulting sound composition need considerable improvement before it is pleasantly stimulating to dance to in its own right, it signals the beginning of a new era at the intersection of new media art and tango.
5 Next steps
The next phase of this project is to bring it into the tango social gatherings known as milongas across New York City, particularly at venues with a dedicated space for 'alternative tango'.
Acknowledgements
I would like to thank Jusleine Daniel for her support and help with the performance; Asha Veeraswamy and Daniella Garcia Rosales for their help during practice and testing; Fanni Fazakas, Isabella Vento, and Mohammad Hossein Rahmani for video recording and editing of the class performance; and Matt Romein and my classmates in Live Image Processing and Performance at ITP-NYU.