Audio data type / audio patch

Hi all, with all that goodness comin' (structure-based buffers and such...), I was thinking that QC doesn't really have an equivalent for sound, and can't even play the audio of QuickTime files... Would you think it might be possible to imagine an audio data type for QC, and a sound renderer? (There's already a plugin patch for this, but it doesn't allow an external timebase, so you can't really sync it with the video.) Better yet, for a QC sound renderer, channel routing would be awesome. Then, eventually, a QC AudioUnit patch... I'm just curious: what do you guys use when you mix sound + visuals with QC? I can think of PD-to-QC connectivity through MIDI, but this is kind of tedious because you spend more time linking the two in a decent manner than actually doing interesting things... What's your opinion?

smokris's picture

I've been dreaming about this for years.

I think adding a new data type is the way to go --- I think we've already figured out most of the mechanics of this.

Timebase scaling and an AudioUnit patch would be sweet.

Also some Audio-to-Image and Image-to-Audio patches might be interesting, allowing you to use CoreImage filters to process Audio, and AudioUnits to filter images.
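The Audio-to-Image idea boils down to a sample-to-pixel mapping. A minimal sketch of that conversion in Python — the function names and the 8-bit grayscale packing are my own assumptions for illustration, not anything QC or CoreImage provides:

```python
# Hypothetical sketch of an Audio <-> Image bridge: pack a buffer of float
# samples (-1.0 .. 1.0) into 8-bit grayscale pixel values (0 .. 255) so an
# image filter could process them, then unpack back to audio.

def audio_to_pixels(samples):
    """Map each sample in [-1.0, 1.0] to a pixel value in [0, 255]."""
    return [round((s + 1.0) * 127.5) for s in samples]

def pixels_to_audio(pixels):
    """Inverse mapping: pixel [0, 255] back to a sample in [-1.0, 1.0]."""
    return [p / 127.5 - 1.0 for p in pixels]

if __name__ == "__main__":
    buf = [-1.0, -0.5, 0.0, 0.5, 1.0]
    px = audio_to_pixels(buf)
    print(px)  # [0, 64, 128, 191, 255]
    rt = pixels_to_audio(px)
    # round trip is lossy only by 8-bit quantization (at most ~1/255 error)
    print(max(abs(a - b) for a, b in zip(buf, rt)))
```

The quantization loss is the obvious catch: filtering audio through an 8-bit image path adds noise, so a real bridge would probably want a float pixel format.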

tobyspark's picture
++ to "image and sound with movie"

anything beyond this is a superbonus. channel routing and audio units especially.

i use qc patches primarily within vidvox's vdmx "pro vj app", where vdmx handles the quicktime playback and passes the image through a qc patch. so i've got round this so far, but i want it sooo much for too many reasons to list.


cwright's picture
Audio processing

Should have beat detection too. Input would be the audio data; outputs would be beat (boolean?), approximate BPM, and confidence (0.0 -> 1.0).
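That port layout (audio in; beat, BPM, confidence out) could be prototyped with a simple energy-based detector. A hedged Python sketch with hypothetical names — real detectors like bonk~ do spectral analysis, while this just compares the current block's energy to a running average:

```python
class EnergyBeatDetector:
    """Toy beat detector: flags a beat when the current block's energy
    exceeds `threshold` times the recent average energy."""

    def __init__(self, sample_rate=44100, block_size=1024, threshold=1.5,
                 history_blocks=43):   # ~1 second of history at 44.1 kHz
        self.sample_rate = sample_rate
        self.block_size = block_size
        self.threshold = threshold
        self.history = []              # recent per-block energies
        self.history_blocks = history_blocks
        self.last_beat_block = None
        self.block_index = 0
        self.intervals = []            # blocks between consecutive beats

    def process(self, block):
        """Feed one block of float samples; returns (beat, bpm, confidence)."""
        energy = sum(s * s for s in block) / len(block)
        avg = sum(self.history) / len(self.history) if self.history else 0.0
        beat = bool(self.history) and avg > 0 and energy > self.threshold * avg

        if beat:
            if self.last_beat_block is not None:
                self.intervals.append(self.block_index - self.last_beat_block)
            self.last_beat_block = self.block_index

        self.history.append(energy)
        if len(self.history) > self.history_blocks:
            self.history.pop(0)
        self.block_index += 1

        bpm = 0.0
        confidence = 0.0
        if self.intervals:
            mean_interval = sum(self.intervals) / len(self.intervals)
            seconds = mean_interval * self.block_size / self.sample_rate
            bpm = 60.0 / seconds
            # confidence: how tightly the beat intervals cluster around the mean
            spread = max(self.intervals) - min(self.intervals)
            confidence = max(0.0, 1.0 - spread / mean_interval)
        return beat, bpm, confidence
```

Feeding it a loud block every 20 blocks (about 0.46 s apart at 1024 samples / 44.1 kHz) yields beats with a BPM around 129 and confidence 1.0, since the intervals are perfectly regular.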

Tim Devine's picture

For sure beat detection would be great. I use bonk~ by Miller Puckette in Max/MSP. You can download the object and source code here. I am not a programmer, but maybe you can port the source to Quartz Composer?

I use it in an application that sends data (via MIDI) to Quartz, if you want to check out how responsive it is.

The app also has a feature like the Audio Input patch in QC but the refresh rate is adjustable and much faster.


tobyspark's picture

...we should be in the audio port feature request thread. but quickly, requirements would be something like:

- 'image with movie' gets a partner patch of 'image and audio with movie'
- 'audio with aiff' or somesuch is able to access an audio file
- 'audio unit host' passes the audio through a selected plugin
- 'audio out' sends the audio to the set audio channel (v. important we have device and channel support; also no need for a mixer, as you can send different streams to the same channel, and audio units can eq/effect as desired)

the huge difference between audio and qc's visual output is that IT CANNOT DROP. it has to soldier on regardless of whatever moment-to-moment twists qc's visual framerate is encountering. which effectively means qc is completely the wrong environment to consider this in (i.e. all other a/v apps i've known work at the audio sample rate, which makes scheduling the video a breeze).
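One common way to bridge the two rates is a ring buffer between the render thread and the audio callback: the callback always returns a block on time, padding with silence on underrun rather than stalling. A rough Python sketch of that idea (the class and names are hypothetical, not anything from QC):

```python
from collections import deque
import threading

class AudioRing:
    """Sketch of decoupling audio from a variable video frame rate:
    the audio callback always gets a full block on time; if the renderer
    falls behind, the callback pads with silence instead of stalling."""

    def __init__(self, block_size=256, capacity=8):
        self.block_size = block_size
        self.blocks = deque(maxlen=capacity)  # oldest blocks drop if the renderer races ahead
        self.lock = threading.Lock()
        self.underruns = 0

    def push(self, block):
        """Called from the (jittery) render thread whenever it gets around to it."""
        with self.lock:
            self.blocks.append(list(block))

    def pull(self):
        """Called from the audio callback: never waits, never drops the stream."""
        with self.lock:
            if self.blocks:
                return self.blocks.popleft()
            self.underruns += 1
        return [0.0] * self.block_size  # silence, but the stream soldiers on

# demo: two blocks queued, three pulls -> the third is silence
ring = AudioRing(block_size=4)
ring.push([0.1] * 4)
ring.push([0.2] * 4)
print(ring.pull(), ring.pull(), ring.pull(), ring.underruns)
```

A real audio callback can't take a lock either (priority inversion), so production code would use a genuinely lock-free FIFO; the silence-on-underrun behaviour is the point here.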

so how do we get out of that? (i have an idea or two, but a little knowledge is probably dangerously misguiding here, so i'll bow out until i hear something from somebody who knows.)


cwright's picture

We've got an audiothread macro patch. It only accepts "audio-type" patches, to keep it disconnected from the rest of the graph (QC really doesn't like using stale contexts etc., understandably). I think number ports are safe, maybe a couple others, but that's about it.

You had mentioned this earlier in a different thread; it's the only way to accomplish this behaviour (don't drop, don't vary with framerate) that we know of with graph evaluation as it is.

tobyspark's picture
cool, yep

yep, compiling the audio patches into their own code block and running that in a separate thread that's 'remote controlled' from the main graph evaluation is where i'd go too - i just didn't know it was possible within qc's framework. supercool to hear it is.
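That 'remote controlled' arrangement can be sketched as a parameter mailbox: the graph thread only writes values under a lock, and the audio thread snapshots them once per block, so it never waits on graph evaluation. A hedged Python illustration — none of these names come from QC, and a real version would synthesize samples where this one just records the parameters it saw:

```python
import threading
import time

class RemoteControlledOscillator:
    """Sketch of an audio thread 'remote controlled' from the graph:
    the render thread only writes parameters; the audio thread reads
    a snapshot of them each block and never blocks on the renderer."""

    def __init__(self):
        self._params = {"frequency": 440.0, "gain": 1.0}
        self._lock = threading.Lock()
        self.rendered = []             # stands in for blocks sent to the device
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def set_param(self, name, value):
        """Called from graph evaluation, at whatever rate the video runs."""
        with self._lock:
            self._params[name] = value

    def _run(self):
        while not self._stop.is_set():
            with self._lock:                # grab a consistent snapshot
                snapshot = dict(self._params)
            self.rendered.append(snapshot)  # real code would synthesize a block here
            time.sleep(0.001)               # stand-in for the device's block cadence

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

Usage: start the thread, change the frequency from the "graph" side, and both the old and new values show up in the rendered stream without the audio loop ever pausing.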


cwright's picture
Fate of AudioUnit

So, I've been working on this off and on behind the scenes for a few months now, with a few various ideas for how to compose it.

After the last failed attempt (it generated sound and ran a macro in its own thread, and firewalled off the rest of the composition and everything), I settled on another idea: using AudioUnits, with patches creating the AUGraph, and having the graph evaluate behind the scenes in its own happy world. Way less firewalling and all that.

However, now I've come to another realization: AudioUnits apparently suck. The documentation is largely non-existent, and much of what does exist is outdated (nothing like getting endless circular links to ancient Pascal source examples and bottomless Carbon samples). And that's not even counting the user-end issues (architecture stuff, OS version stuff, etc.).

So, being completely disconnected from Audio Application Reality, what is it really like for end users? Do y'all use AU, or VST? Or something else entirely? Do AUs really suck as much as I'm beginning to think they do, or am I just missing the good parts?

robot_music's picture

Hi, first time poster here...

For sound+visuals I use Supercollider3, which has a built-in Quartz Composer viewer (SCQuartzComposerView), so linking is easy (get/set input and output keys of your patch). The newest version (3.1.1) includes advanced sound analysis like MFCC, Keytracking, Beat Induction, Spectral Centroid, etc. I've done a couple of shows with QC+SC3 and have been quite happy with the results.

Community Page:

franz's picture
thxx 4 info

Thanks a lot for the info. I'll dive into this...

Chris: as far as I know, AU isn't used anymore (at least in my studio). Sound guys tend to use VST for everything.

ding's picture
Supercollider vote

I think a SuperCollider patch would be awesome.