MIR Raport by Joey van Gessel


Pecha Kucha: https://docs.google.com/presentation/d/1YRN_lcYyTYdVupFdUaf1-Nbj2ycd5eAoGY9Mz8-bnO4/edit?usp=sharing

Digital report on MDD: http://www.mdd-tardis.net/mediawiki/index.php/MIR_Raport_by_Joey_van_Gessel

Introduction

When I heard that I had the chance to play with tools and materials without the boundaries of clients, projects and deadlines, I knew straight away what I wanted to research.

I wanted to create (realtime) music visuals, but without rendering and editing for hours and hours. More like a visual canvas with elements on it that are triggered by (live) music frequencies, so that the visuals are always live and unique. From the beginning I was interested in using new tools, no Premiere or After Effects this time. But what? After going through the different options I decided to make a list:

https://astrofox.io/ | Astrofox is a free, open-source motion graphics program that lets you turn your audio into custom, shareable videos.
https://cycling74.com/products/max/ | Max is an infinitely flexible space to create your own interactive software. Use objects, patch cords and controls.
https://cables.gl/home | Cables is a tool for creating beautiful interactive content. With an easy to navigate interface and real time visuals, it allows for rapid prototyping and fast adjustments.
https://derivative.ca/ | TouchDesigner visualizes data flow through each step of the process. Perceive behavior at a glance and get instant visual feedback from your creation. Start thinking visually.

I chose to use Max because it integrates seamlessly with Ableton Live 10 (Cycling '74, the company behind Max, is actually owned by Ableton these days). It also has a lot of connection possibilities and extensions for things like Makey Makey and Arduino. For me it was a logical choice because I started producing music with Ableton Live 10 a year ago, and I am still on its steep learning curve. This project will extend, or rather widen, that curve. Another reason:

“Max includes full-featured, expandable video and graphics tools with Jitter. Jitter is optimized for realtime audiovisual work, and is easy to combine with audio, sequencing, and modulation like everything else in Max.”

My target for this project is to create realtime audiovisual work. Let's try it out… I started to play with Max (it's a program, not a person called Max, that would be strange ;) in the lab. Just opening preset projects and discovering the boxes (objects) and the patch cords connecting them, all inside documents called patches. I quickly found it can do A LOT of stuff, but it also needs A LOT of work to output (and sometimes input) stuff as well. I started off with 3D, because I have little experience with that kind of environment. I quickly had a 3D shape, but even that was already hard.


// Playing with presets, in this case a tone visual generator


// Generating a 3D shape in a fresh project with the jit.world object.

This 3D shape was doing nothing with the audio that I had in Ableton. To get audio, I composed some loops covering a broad frequency range (bass, mids and highs) so that I could use the full spectrum. The audio is sent from the Ableton plugin into the patch and, after processing, routed back into the plugin again. I added a VU meter and a jit.catch~ object, which converts audio signals into matrix data. To begin I grabbed the full audio spectrum and converted it into vertical lines flashing with the frequencies. This already looked cool and intense enough, so I continued with it. For color manipulation I also added some pixel data filters. To have more than flashing lines alone, I also added some existing footage that comes with Max. To mix the two (or rather three, since I still had the 3D object as well) visual sources I added a video mixer.


// Left: Audio in and out. Analysing the audio and converting it to the white lines. Middle: 3D object, filters and the video mixer (the yellow is the final output). Right: existing footage from Max. // Link to the video
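Conceptually, the jit.catch~ step boils down to mapping sample amplitudes to pixel intensities, one sample (or frequency band) per vertical line. A minimal sketch of that idea in plain C++ (the real work happens inside the Max patch; the sine input and the sizes here are made up for illustration):

// Conceptual stand-in for the jit.catch~ step: turn a block of audio
// samples into brightness values, one per vertical line on the canvas.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical audio block: a 440 Hz sine sampled at 44.1 kHz.
    const int kSamples = 64;
    std::vector<float> audio(kSamples);
    for (int i = 0; i < kSamples; ++i)
        audio[i] = std::sin(2.0f * 3.14159265f * 440.0f * i / 44100.0f);

    // Map each amplitude (-1..1) to a brightness (0..255):
    // loud samples flash bright, silence stays black.
    for (int i = 0; i < kSamples; ++i) {
        int brightness = static_cast<int>(std::fabs(audio[i]) * 255.0f);
        std::printf("line %2d brightness %3d\n", i, brightness);
    }
    return 0;
}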

After this I realised that a small 320 by 240 screen won't do much on the big screens used for music visuals, so I started working on generating an external output. I spoke to people in my DJ/producer/VJ network about how they would like to control "audio-generated visuals". They would love to see a simple controller, not another big MIDI device that is complicated to use. The focus will still be on performing the music, but they wanted some control over the visuals as well.


// Right bottom: Generating an external output (for external screens) that could be HD and fullscreen, maybe even let the mouse disappear!

That's why I started searching for connections with Arduino, which took me some time to get working. I enabled another Max extension in Ableton that connects to the Arduino, which then let me map out the controls. I started with a potmeter on the Arduino board, for the sake of time and proof of concept. After some struggles this worked; I quickly found out that I also needed to do some custom coding in the INO file (the Arduino sketch file that you upload to the microcontroller).

// Example of a big MIDI controller. We need a smaller version of this one, tailored to the plugin in Ableton that is built in Max.


// Coding the ino file for the Arduino and Ableton connection.
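The INO file itself doesn't need to be big. A minimal sketch of the idea (not my exact file; the analog pin and the baud rate are placeholders that have to match your wiring and the serial object in Max):

// Illustrative Arduino sketch: read a potmeter and stream its value
// over serial, where Max's serial object can pick it up.
const int kPotPin = A0;   // placeholder analog pin
int lastValue = -1;

void setup() {
  Serial.begin(9600);     // must match the baud rate of the serial object in Max
}

void loop() {
  int value = analogRead(kPotPin);   // 0..1023
  if (value != lastValue) {          // only send on change to keep traffic low
    Serial.println(value);
    lastValue = value;
  }
  delay(10);                         // ~100 updates per second is plenty
}

On the Max side, a serial object set to the same baud rate reads these values, which can then be scaled to whatever parameter range the plugin expects.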

The idea is to give the plugin users in Ableton some control: enabling and disabling the visuals, as well as controlling their intensity. For the sake of proof of concept and time, these are the controls for now:

// Link to the video
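In Arduino terms, those two controls could look something like the sketch below. Again, this is a hedged sketch rather than the project's exact code; the pins and the message format are assumptions:

// Illustrative: a toggle button to enable/disable the visuals plus the
// intensity potmeter, sent as labeled serial messages for the Max patch to route.
const int kButtonPin = 2;   // placeholder digital pin, wired to ground
const int kPotPin = A0;     // placeholder analog pin

bool visualsOn = false;
int lastButton = HIGH;

void setup() {
  pinMode(kButtonPin, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  int button = digitalRead(kButtonPin);
  if (button == LOW && lastButton == HIGH) {   // falling edge = button press
    visualsOn = !visualsOn;
    Serial.print("enable ");
    Serial.println(visualsOn ? 1 : 0);
  }
  lastButton = button;

  Serial.print("intensity ");
  Serial.println(analogRead(kPotPin));         // 0..1023
  delay(20);
}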

But what if there's no audio going in while the plugin is enabled? Then it shows a black screen. That's not good. There should be an option to produce noise, just as audio can produce noise. So I tried to match the audio noise with graphic noise, like old TVs had back in the day. I refresh the amount of noise based on the BPM info or on custom taps on the BPM counter, so that the noise stays in time as well.

// The noise generator on BPM.
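The timing underneath is simple arithmetic: at a given tempo one beat lasts 60000 / BPM milliseconds, and the noise frame gets refreshed on that interval (or a division of it). A tiny worked example in plain C++, with made-up tempos:

// One beat = 60000 / BPM milliseconds; refresh the noise on that interval.
#include <cstdio>

int main() {
    const double bpms[] = {90.0, 120.0, 128.0, 174.0};  // example tempos
    for (double bpm : bpms) {
        double intervalMs = 60000.0 / bpm;  // ms between noise refreshes
        std::printf("%6.1f BPM -> refresh every %.1f ms\n", bpm, intervalMs);
    }
    return 0;
}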

To make the video mixing with existing footage more interesting, I also added a video sequencer that lets you repeat certain parts of the footage. You can also use the video's audio, which is sent back to an Ableton return track. There you can add more audio effects to that video audio if you want, which is pretty awesome! I also extended the 3D world creator a bit.


// Video sequencer


// 3D creator

Well, the result of my proof of concept can be watched in the video below. Everything was created by me, including the audio that you're hearing.

Link to the video