On Friday, February 9, 2018, I had the wonderful opportunity to perform live with Howard Loomis at the 1+1=3 series, held in the ITP lounge of the Tisch School of the Arts, New York University. The 1+1=3 series focuses on providing opportunities for creative and spontaneous collaboration between students of the Clive Davis Institute of Recorded Music and ITP. The format of the event has ITP students creating and performing visuals to support the musical performances of Clive Davis students, resulting in an integrated audio-visual show.
The collaboration is spontaneous because the organizers let us know we would be working together only four days before the event took place; we received the notice on Monday, February 5, 2018. This meant that within that time constraint we had to work out something to present in public. Here I describe the creative process during that period and the outcome of the collaboration.
Ideation
The first meeting with Howard Loomis took place on Monday night. We shared our work with each other and discussed possible avenues for creative exploration. We agreed that the visuals should be integrated with the music as much as possible, as a means to create a coherent body of work. This meant that what I was about to create would be exclusive to the occasion, rather than a re-adaptation of my previous work to fit their music.
I asked Howard Loomis to capture short video clips of their hands touching their musical instruments and microphones, as raw material for composing an art piece with image processing algorithms in Max-MSP-Jitter. The occasion was just right, as I wanted to apply what I learned in the class Live Image Processing & Performance, taught by Matt Romein at ITP this semester.
The idea started to take form with the definition of its first two components: video recordings as the raw material and Max-MSP-Jitter as the programming environment in which to compose and perform.
Video recordings as raw material
I received 14 short videos from Howard Loomis and started to experiment with them. This early and rapid experimentation helped me define the idea for the art piece. I took additional inspiration from one of the songs to be performed, which prompted me to record 13 more videos of a Howard Loomis rehearsal and of my train commute from NJ to NYC. These two sets of videos would be combined to create visuals for the song 'Move with me'.
When needed, I combined short video clips into a longer recording using Adobe Premiere.
Algorithmic composition and performance
From the beginning, the intention was to create patches that incorporated concepts of modularity, parameterization, and gesture control, building on code examples given in class. I opted for simplicity, reliability, and ease of interaction because I wanted to achieve a state of flow during my first VJ performance. Algorithmic simplicity with maximum visual impact was the goal for the occasion.
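Max-MSP-Jitter patches are visual rather than textual, so purely as an analogy, here is a minimal Python/OpenCV sketch of what parameterization and gesture control mean in practice: one reusable processing stage reads its settings from a parameter dictionary, and key presses nudge those parameters in real time. The file name, parameters, and key bindings are illustrative assumptions, not taken from my patches.

```python
import cv2

cap = cv2.VideoCapture("clip.mov")  # hypothetical source clip
params = {"brightness": 0.0, "contrast": 1.0}  # illustrative parameters

def apply_effect(frame, p):
    # One reusable, parameterized processing stage (the 'module')
    return cv2.convertScaleAbs(frame, alpha=p["contrast"], beta=p["brightness"])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("gesture-control sketch", apply_effect(frame, params))

    # Keyboard 'gestures' nudge the parameters in real time
    key = cv2.waitKey(30) & 0xFF
    if key == ord("q"):
        break
    elif key == ord("w"):
        params["brightness"] += 5
    elif key == ord("s"):
        params["brightness"] -= 5
    elif key == ord("d"):
        params["contrast"] += 0.1

cap.release()
cv2.destroyAllWindows()
```

In the actual patches, the analogous role was played by labeled interaction points (see Figure 1) that mapped live input onto parameters of the Jitter objects.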
I created three distinct patches, each exploring different concepts, one for each song to be performed by Howard Loomis. Nonetheless, given the time constraint, the main focus was to create a solid patch for the song 'Move with me'.
Patch_#2 & 'Move with me' song
Here I took advantage of the 'jit.op' and 'jit.lumakey' objects to explore the visuals of Olivia Reid and Jack Kleinick playing, with the moving image of the train commute playing in the background (Figure 1).
Figure 1. Patch created using the jit.op and jit.lumakey objects. To facilitate interaction during the live performance, I labeled the interaction points where input was needed to modulate the image processing capabilities of Max-MSP-Jitter. This patch is derived from code examples provided by Matt Romein.
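The patch itself is a visual program, so as a rough analogy only, the Python/OpenCV sketch below approximates the jit.lumakey + jit.op idea: foreground pixels whose luminance clears a threshold are kept, darker pixels reveal the train footage behind them, and the two layers are then lightly blended. The file names, threshold, and blend amount are hypothetical placeholders, not values from the actual patch.

```python
import cv2
import numpy as np

fg_cap = cv2.VideoCapture("hands_on_instruments.mov")  # hypothetical clip
bg_cap = cv2.VideoCapture("train_commute.mov")         # hypothetical clip

LUMA_THRESHOLD = 60  # 0-255, playing the role of the lumakey threshold
BLEND = 0.7          # playing the role of a jit.op mix amount

while True:
    ok_fg, fg = fg_cap.read()
    ok_bg, bg = bg_cap.read()
    if not (ok_fg and ok_bg):
        break
    bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

    # Luminance of the foreground decides which pixels survive the key
    luma = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
    mask = (luma > LUMA_THRESHOLD).astype(np.float32)[..., None]

    # Composite: bright foreground pixels stay, the background shows
    # through elsewhere; then blend the layers, roughly like jit.op
    comp = (fg * mask + bg * (1.0 - mask)).astype(np.uint8)
    out = cv2.addWeighted(comp, BLEND, bg, 1.0 - BLEND, 0)

    cv2.imshow("lumakey sketch", out)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

fg_cap.release()
bg_cap.release()
cv2.destroyAllWindows()
```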
Visual results of applying this patch to the recorded videos gathered as raw material are shown in Videos 1 and 2. I had the opportunity to borrow a projector, bring it home, and practice interacting with and performing the visuals while playing the Howard Loomis song 'Move with me'. This allowed me to 'experience' what it would be like to perform at the actual event. Having the image projected on a wall let me look at my work from a distance and make more accurate aesthetic choices while interacting with the patch. Feedback from family members as a 'test audience' gave me an additional layer with which to experiment with the relationship between the visuals and the music.
Video 1. Exploring possible visual outcomes of the system created with the Max-MSP-Jitter patch shown in Figure 1. The video was generated using the screen recording feature of QuickTime Player.
Video 2. A different set of video footage was used to explore additional visual outcomes of the system created with the Max-MSP-Jitter patch shown in Figure 1. The video was generated using the screen recording feature of QuickTime Player.
I then imported Videos 1 and 2 into Adobe Premiere to employ the visuals produced in Max-MSP-Jitter in a video clip for the song (shown in Video 3). Although the videos recorded as raw material were not originally intended for this purpose, the exercise made me realize unexpected uses for Max-MSP-Jitter beyond live image processing and performance.
Video 3. The visuals created in Max-MSP-Jitter were compiled and rendered as a video clip using Adobe Premiere. I tried to select imagery that was visually consistent throughout the song and gave a feeling of continuous movement. The color palette focused on grays, blues, black, oranges, brown, and a soft violet, giving an impression of old remembrance with a modern look at the same time. The focus on the texture of the musical instruments being played was intentional.
Patch_#1 & 'Code' song
The rationale behind this patch was to integrate the 'jit.rota' and 'jit.op' objects and focus on parts of the moving images by enlarging them with 'zoom_x' and 'zoom_y' messages (Figure 2).
Figure 2. Patch created using the jit.op and jit.rota objects. This patch is derived from code examples provided by Matt Romein.
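Again as a textual analogy of a visual patch, the sketch below loosely mimics jit.rota's rotation combined with the 'zoom_x'/'zoom_y' messages: each frame is rotated around its center, then enlarged and cropped to localize on a region. The clip name, rotation step, and zoom factors are illustrative assumptions.

```python
import cv2

cap = cv2.VideoCapture("rehearsal.mov")  # hypothetical clip
angle = 0.0
zoom_x, zoom_y = 2.0, 2.0  # standing in for 'zoom_x' / 'zoom_y' messages

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # Rotate around the frame center, like jit.rota's rotation angle
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(frame, M, (w, h))

    # Enlarge along each axis and crop back to the original size,
    # approximating independent horizontal/vertical zoom
    zoomed = cv2.resize(rotated, None, fx=zoom_x, fy=zoom_y)
    y0 = (zoomed.shape[0] - h) // 2
    x0 = (zoomed.shape[1] - w) // 2
    out = zoomed[y0:y0 + h, x0:x0 + w]

    angle = (angle + 0.5) % 360  # slow continuous rotation

    cv2.imshow("rota sketch", out)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```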
The visual outcome of using this patch is displayed in Video 4. It was intended to be performed with the Howard Loomis song 'Code' and is remarkably different from the visuals explored in Videos 1 and 2. Although I practiced performing this patch at home, I did not get to play it live.
Video 4. Screen recording of the visual output of the patch shown in Figure 2.
Patch_#3 & 'Don't leave me behind' song
The patch created to perform along with 'Don't leave me behind' is similar to the one shown in Figure 1, with the addition of MSP elements such as a sine wave to periodically control elements of the patch (Figure 3). I did get to perform this one, and it was a lot of fun to play and watch the visuals change in real time to my actions while Howard Loomis was playing. The visual output explored with this patch is shown in Video 5.
Figure 3. Patch similar to the one in Figure 1, with the addition of MSP elements to periodically control aspects of the 'jit.op' object. This patch is derived from code examples provided by Matt Romein.
Video 5. Screen recording of the visual output derived from the application of the patch shown in Figure 3.
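To make the periodic-control idea concrete, here is a Python sketch in the same spirit: a low-frequency sine wave, standing in for the MSP sine wave in the patch, continuously drives a crossfade between two video layers. The clip names, LFO rate, and the choice of crossfade as the modulated parameter are assumptions, not a reconstruction of the actual patch.

```python
import math
import time
import cv2

cap_a = cv2.VideoCapture("clip_a.mov")  # hypothetical clips
cap_b = cv2.VideoCapture("clip_b.mov")
LFO_HZ = 0.25  # one full swell every four seconds

t0 = time.time()
while True:
    ok_a, a = cap_a.read()
    ok_b, b = cap_b.read()
    if not (ok_a and ok_b):
        break
    b = cv2.resize(b, (a.shape[1], a.shape[0]))

    # Sine LFO mapped from [-1, 1] to [0, 1], standing in for the
    # MSP sine wave that periodically drives the jit.op parameter
    phase = math.sin(2 * math.pi * LFO_HZ * (time.time() - t0))
    mix = 0.5 * (phase + 1.0)

    out = cv2.addWeighted(a, mix, b, 1.0 - mix, 0)
    cv2.imshow("lfo sketch", out)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap_a.release()
cap_b.release()
cv2.destroyAllWindows()
```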
Documentation and comments on performance
I arrived at the event venue with enough time to test all my patches on the big screens. I brought notes indicating which visual effects I had tried previously that produced interesting results, so that I could replicate them during the performance. I asked for the order of the songs to be played and re-organized my patches accordingly into folders with their corresponding videos as raw material. In addition to manipulating the image processing and visuals, I played with concepts of pause and stillness during the performance by pausing one video while letting the other play. When the music called for a strong pause, I paused the whole patch and started it again when the music regained intensity. I also let my hands follow the beat of the music as they pressed the keys on the keyboard, so as to stay in sync with the musicians.
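The pause-and-stillness technique can be sketched in the same hypothetical Python/OpenCV analogy: each layer keeps showing its last frame while 'paused' and resumes on a key press, so one video can freeze while the other keeps moving. The clip names and key bindings are invented for illustration.

```python
import cv2

cap_a = cv2.VideoCapture("clip_a.mov")  # hypothetical clips
cap_b = cv2.VideoCapture("clip_b.mov")
frame_a = frame_b = None
paused_a = paused_b = False

while True:
    # A paused layer simply keeps showing its last frame
    if not paused_a:
        ok, frame_a = cap_a.read()
        if not ok:
            break
    if not paused_b:
        ok, frame_b = cap_b.read()
        if not ok:
            break

    # Match sizes so the two layers can sit side by side
    frame_b = cv2.resize(frame_b, (frame_a.shape[1], frame_a.shape[0]))
    cv2.imshow("pause sketch", cv2.hconcat([frame_a, frame_b]))

    key = cv2.waitKey(30) & 0xFF
    if key == ord("q"):
        break
    elif key == ord("1"):
        paused_a = not paused_a  # freeze or release the first layer
    elif key == ord("2"):
        paused_b = not paused_b  # freeze or release the second layer

cap_a.release()
cap_b.release()
cv2.destroyAllWindows()
```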
Following are several photographs documenting my performance (Figure 4). Special thanks to Michelle Hessel Alves for taking the pictures and recording video while I was VJing.
Figure 4. Selected photographs depicting my performance at the 1+1=3 series together with Olivia Reid and Jack Kleinick ('Howard Loomis').
Video 6. Short video featuring Olivia and Jack playing, with visuals from Martin shown on the screen.
Acknowledgements
I want to thank the organizing committee of the 1+1=3 series at the Tisch School of the Arts, NYU, for inviting me to participate.
Thanks also to my classmates and the teacher (Matt Romein) of the course Live Image Processing & Performance at ITP-NYU.
References
Music from Howard Loomis can be accessed at their SoundCloud link: