Audio Video patch

magnetmus's picture

I've started to use Kineme plugins for Quartz Composer in my live set. Here is a good example based on the 'tb soundflower' composition by alx toneburst. I've modified it for my performance and control variables live through VDMX. It worked great when Richie was playing with the mixer. Special thanks to alx toneburst and Kineme!


itsthejayj's picture
Richie, Richie Hawtin? class

Richie, Richie Hawtin? Class to see people's work used in the wild, especially in the club environment. Liked how a|x's composition was blended into the eye!

toneburst's picture
:D

:D And a massive thanks to cwright, for making it possible with his magic code!

a|x

toneburst's picture
Variation

Here's a variation.

Inspired by a recent thread on the forum I now can't find. It's basically an oscillator type thing that shows the audio waveform over time in 3D. Not at all an original idea, but looks nice nonetheless.

Used Kineme GL Line Structure and Audio Tools patches, and a Queue, an Iterator and a touch of JavaScript. I'm doing all the point calculation in a JavaScript, rather than inside the Iterator, in an attempt to speed things up a bit.
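As a rough illustration of the kind of per-point maths that can be moved out of the Iterator and into a single JavaScript pass (function and parameter names are mine, not toneburst's actual patch):

```javascript
// Hypothetical sketch: building one 3D line slice from audio levels in
// plain JavaScript. Each waveform slice becomes a row of [x, y, z]
// points; older slices sit further back on Z. Illustrative only.
function buildWaveformPoints(levels, sliceIndex, zSpacing) {
  var points = [];
  var n = levels.length;
  for (var i = 0; i < n; i++) {
    var x = (i / (n - 1)) - 0.5;     // centre the line on X
    var y = levels[i];               // raw level drives height
    var z = -sliceIndex * zSpacing;  // push older slices back
    points.push([x, y, z]);
  }
  return points;
}
```

Doing this once per frame in script, then handing the structure to a GL Line patch, avoids re-running the same arithmetic inside every iteration of an Iterator.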

The name is because I used to be fascinated by the 3D waveform display on the Fairlight CMI 'cost-as-much-as-a-house' sampler from the '80s ;) If you've seen it, you'll know what I mean.

EDIT: New version uploaded. The old one stopped working, for some reason. This new one has more options, anyway. Screenshots and explanation on my blog: http://machinesdontcare.wordpress.com/2008/12/09/new-audio-reactive-thing/

a|x http://machinesdontcare.wordpress.com

toneburst's picture
Ooops

Managed to post the same thing twice.

Oh well, let's make this post into something else, since I can't work out how to delete it...

cwright, do you think there's any mileage in any kind of pre-processing of the levels data for the Audio Tools Waveform and Waveform Image outputs? I'm thinking it might be useful to have the option of compressing the dynamic range of the data a little. Maybe I'm just not using the right audio sources though...

a|x

cwright's picture
scaling

It's a snap to scale lots of data quickly in C, so it'd definitely make more sense than doing it in JS/math expression.

More fancy pre-processing, though, might be difficult, and somewhat special-case (reverb, anyone?) -- that's more the domain of our hypothetical AudioTools patch (which uses VST/AudioUnit stuff, and requires a million lines of code, and all that jazz, but gives QC a dynamic, on-the-fly configurable voice)

(As a side note: if you want a post deleted, just edit the original title to say something to that effect ("delete me" or what have you). If you double-post, don't worry; we usually catch them within an hour or two (or 6, if we're sleeping :), and we remove those ourselves. Happens to everyone every now and then.)

gtoledo3's picture
yeya

My first experiments with connecting patches to the Audio Tools were giving me strobing blindness... so I've been using math expressions to scale the data. Haven't used JavaScript yet, but that would be cool!

I've had much success loading the discrete tracks from audio compositions to control various parameters, while only having the "whole mix" actually play, using the Kineme Audio Player. When those haven't been available, it has been interesting to just do some serious bandpassing.

Another idea I am going to throw out there.... if you have a sparse track, flip it in reverse, apply reverb, then flip it back around. Now you have a pretty common-sounding special effect... BUT... the reverb is actually "preceding" your main attacks, so you get this cool numerical ramp-up. Load that track to affect various parameters, but as stated, no one has to "hear" it.

Doing some serious gating to audio is another idea.

....I mean doing all this in some other app, and just using the resulting audio in QC in case I am being unclear.

If you guys ARE thinking about implementing some kind of AU patch, then simple high- and low-pass filters would be really effective for "tweaking" and getting intuitive, usable results. That would make it easy for some of the visuals to follow the bass end, and others to react to the high-pitched stuff. If there was a notch in there around 1k, up or down, it would probably also be useful for giving visual results... I'm not talking about "hearing" this effected output, though, only about the resulting numbers being passed on...
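The band-splitting idea can be approximated very cheaply; here's a sketch of a one-pole low-pass with the high band taken as the residual (coefficients and names are mine, not from any actual Kineme patch):

```javascript
// Rough sketch of a one-pole low-pass filter over a run of samples, with
// the high-pass taken as the residual (input minus low-pass). alpha near
// 0 = heavy smoothing (bass follows slowly); alpha near 1 = mostly
// pass-through. Illustrative only, not any Kineme implementation.
function splitBands(samples, alpha) {
  var low = [], high = [], state = 0;
  for (var i = 0; i < samples.length; i++) {
    state = state + alpha * (samples[i] - state); // one-pole low-pass
    low.push(state);
    high.push(samples[i] - state);                // residual = high band
  }
  return { low: low, high: high };
}
```

The low output could drive the bass-following visuals and the high output the ones reacting to the high-pitched stuff, without anyone hearing either.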

gtoledo3's picture
Oh yeah, and don't do it on

Oh yeah, and don't do it on my account!

I was all jazzed about audio-reactive stuff for a while, but I end up just figuring out the BPM (which in most modern music is pretty consistent throughout the song), and then triggering LFOs, interpolation, or whatever accordingly. Even if a song isn't a consistent BPM, I will just figure out how many seconds into the song I want whatever it is to happen, and trigger it...

If a song is a solid BPM, I would just set an LFO or something to make a "quarter note" blip.... though I DO understand that there are those who are more interested in the actual audio triggering patches.
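The quarter-note blip is just arithmetic; a minimal sketch of converting BPM to the LFO settings you'd type in (names are illustrative):

```javascript
// BPM to quarter-note LFO timing (plain arithmetic, illustrative only).
function quarterNotePeriod(bpm) {
  return 60 / bpm;   // seconds per beat
}
function lfoRate(bpm) {
  return bpm / 60;   // cycles per second (Hz) for a once-per-beat blip
}
```

So a 120 BPM track gives a 0.5 s period, i.e. a 2 Hz LFO.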

I'm just throwing this out there for those that may not have thought about doing audio type of comps that don't react, but still "interplay".

toneburst's picture
Pre-Scaling

I've used Interpolators to scale the Waveform output in the past, but this has to be done per-iteration, and would be tricky to implement in something like the setup above. It would be much better to be able to pre-scale the data before it comes in.

Something equivalent to an audio compressor would do it nicely. Basically, it would reduce the level of higher values, then boost the levels of all values, resulting in a signal with more sensitivity at the quiet end, and fewer big peaks in level.

It would also be great to have the output stay within the 0 > 1 range.
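One simple way to get that behaviour (squash the peaks, lift the quiet values, stay inside 0 to 1) is a power-law curve; a sketch under that assumption, not how any Kineme patch actually does it:

```javascript
// Illustrative dynamic-range compression for 0..1 level data: raising
// each value to an exponent < 1 boosts quiet values and flattens peaks,
// and the output stays inside 0..1. Not Kineme's implementation.
function compressLevels(levels, exponent) {
  return levels.map(function (v) {
    var clamped = Math.min(Math.max(v, 0), 1); // keep input in range
    return Math.pow(clamped, exponent);        // e.g. 0.5 = square root
  });
}
```

With an exponent of 0.5, a quiet 0.25 comes out at 0.5, while 0 and 1 stay put.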

Re. the other things you mentioned: It would be great to have an AudioUnit host patch for QC, with parameter automation. I don't know if this is at all feasible though, given the program's poor audio support generally.

a|x

echolab's picture
nice one

would be cool to add zero start and endpoints to make it look more like the one from Peter Saville

Attachment: jd_up.jpg (70.86 KB)

toneburst's picture
Ah, that Classic Joy Division Cover..

It's actually a little more complicated to recreate. If you look carefully, you'll see that you'd need an opaque black area underneath each line. I've thought about this in the past, but I'm not sure how you'd make it work, off the top of my head...

a|x

psonice's picture
shaders can accomplish it..

There was a discussion of something just like this on the QC mailing list, and I suggested that it's possible to draw the line in a shader (CI or GLSL). I think that could be the best way to do it here.

If I remember right, the Kineme audio tools output the volume level as a 1D texture, so all the shader needs to do is sample the waveform image at the current x coordinate and match the volume level against the Y position: if it's less than Y, the pixel is transparent; equal (or close) to Y, white; greater, black.

You'd then run that through an iterator (or queue it if it's the old "waveform travels back along the z axis" effect) to get depth.
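That per-pixel rule is compact enough to write down; here it is in JavaScript standing in for the fragment shader (thresholds and names are illustrative):

```javascript
// psonice's per-pixel rule, sketched in JS in place of a fragment
// shader: given the waveform level sampled at this pixel's x, and the
// pixel's y, return an RGBA colour. lineWidth is an illustrative
// tolerance for "close to Y".
function shadePixel(level, y, lineWidth) {
  if (Math.abs(level - y) <= lineWidth) return [1, 1, 1, 1]; // on the line: white
  if (y < level) return [0, 0, 0, 1]; // below the waveform: opaque black
  return [0, 0, 0, 0];                // above the waveform: transparent
}
```

The opaque black below each line is exactly what lets nearer layers hide the ones behind them, Joy Division style.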

toneburst's picture
Slowww

You'd be doing an awful lot of per-pixel operations here though. I think you'd very quickly end up with a very slow frame-rate. With the GL Line method I use above, you're dealing with far less data. On the other hand, the example pic has far fewer lines than I'm using in the screenshots I took, so it might even out.

a|x

psonice's picture
Depends..

..on how you're doing it.

If the further back you go along Z, the older the samples are in time (so effectively the waveform just moves back one step along Z for each frame) then you'd only need to draw one waveform per frame. I can't see that being slow :)

On the other hand, if you're doing it so it's more like a true '3d waveform' and you need to draw every layer every frame, and you're doing a lot of layers at high res, yeah it'd crawl.

The other method I suggested on the list was to render a grid and use vertex displacement to make the waveform shape and a fragment shader to draw white lines at intervals on Y. That would also work for this case, assuming you can get your sample data into the shader in a good way, but it'd be harder.

You'd need to use, say, a 256x256 grid, and in the vertex shader set the vertex height to the sample value (and hit the evil 'sample in a vertex shader and I crash your stuffs' bug). You'd also need to set alternate rows of vertices to zero on Y so it looks like separate flat layers and not a mountain range.
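The vertex rule itself is simple; a CPU-side sketch of it in JavaScript, standing in for the vertex shader (layout and names are my assumptions):

```javascript
// CPU-side sketch of the grid-displacement rule: even rows take the
// sampled level as their height, odd rows are forced to zero on Y so
// the layers read as flat slices rather than a continuous mountain
// range. samples[k] holds the waveform slice for even row 2k.
function vertexHeight(row, col, samples) {
  if (row % 2 !== 0) return 0;        // flatten alternate rows
  var slice = samples[row / 2] || []; // each even row = one waveform slice
  return slice[col] || 0;             // missing data defaults to flat
}
```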

And all this reminds me, I've not replied to your email yet, have I? I'll do that this afternoon :)

echolab's picture
i know,

but I just thought about adding 2 more points (fixed at 0,0) to the structure. This will flatten the left and right borders.

toneburst's picture
VDM

Wish Apple would sort out Vertex Shader texture lookup though. The method you suggest is actually one of the major reasons I wanted the texture-from-audio-levels thing in the first place. In the end though, it turned out the structure output was actually more useful for a lot of things.

Your other method, with the fragment shader/CI filter, might be the way to go to replicate the Joy Division cover, though. You've convinced me. With a Queue, it would probably run faster than the multiple-GL-lines method I was using, actually.

a|x

toneburst's picture
True

You could also apply a gaussian-type curve to the levels, so the displacement is larger in the centre.
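A sketch of that weighting (sigma and names are illustrative choices of mine):

```javascript
// Gaussian-style weighting of a row of levels so displacement peaks in
// the centre of the line and falls away towards the edges. sigma
// controls how quickly the edges flatten out. Illustrative only.
function gaussianWeight(levels, sigma) {
  var n = levels.length;
  return levels.map(function (v, i) {
    var t = (i / (n - 1)) - 0.5; // -0.5 .. 0.5 across the line
    return v * Math.exp(-(t * t) / (2 * sigma * sigma));
  });
}
```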

a|x

dust's picture
Plastic Man

Way to go kineme,

Plastic Man used to just eat at you with his acid minimal 909 + FX. Haven't messed with the Soundflower thing yet; got to get the VDMX program. I've got a mate at NI (Native Instruments) who sent me a link to Richie talking about his Traktor setup. I guess he uses two Traktor setups and a MIDI/audio mixer. Might be cool to tap into his MIDI sends from the mixer, and into the OSC that Traktor sends as control messages to his partner's system.

I've been messing with that lately; went all virtual like him with Traktor Pro. I got a VCI-100, which unfortunately can't be used as an OSC control send because of its high-resolution MIDI or something. But I got two machines going with my iPod via OSC one night. Might be cool to map some parameters from his mixer in addition to the audio spectrum stuff.

Way to go, looks brilliant. I'm all about that audio-visual controllerism fusion stuff. Haven't seen Richie in years; guess I've got to make it over to Germany or Ibiza to be graced by the Plastic Man.

Do you have any more dilated clips, Chris?

If you're into electro tech stuff, I suppose you can check out some of my toons...

http://www.myspace.com/dustrecords

That's my label; need some more time to push it, and some other artists besides myself...

spread the love