Hello from Washington (State)

Bojo's picture

I'm from Washington, obviously. I've just discovered QC, and since this seemed to be one of the more active forums, I joined. I've made a couple of things, nothing great: just a globe screen saver, and I've been working on iTunes visualizers. Before I go making new threads on the matter, can someone point me to something that will help me with my visualizer? I'm trying to make it use the computer's internal audio instead of the microphone audio. I know the protocol is strict and I've been following it, but it always uses the microphone. Any help would be great.


cwright's picture
Re: Hello from Washington (State)

Click on the Audio Input patch in the composition.

Press cmd-2 -- this will bring up the Inspector panel (cmd-I will do the same thing, but cmd-2 will take you straight to the patch Settings pane, which is where you want to go).

It'll look like this:
[image: Audio Input Patch Settings]

Hopefully you know what to do at that point ;)

(p.s. welcome aboard! :)

Attachment: AudioInputSettings.png (47.08 KB)

cybero's picture
Re: Hello from Washington (State)

Sorry to be the one carrying coals to Newcastle, or should I say Athens, but using the built-in input on Apple's Audio Input patch lops off most of the frequency range, taking only the bottom two bands. One can then queue and present those as a structure, giving a slightly faked audio range.
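In plain Python terms (not QC -- just to illustrate the queue-and-present idea, with a Queue-patch-like FIFO as an assumption about how you'd wire it up):

```python
from collections import deque

class PeakQueue:
    """Mimic a QC Queue patch: keep the last N peak values and
    present them as a structure (a list), oldest first."""
    def __init__(self, size=16):
        self.size = size
        self.items = deque(maxlen=size)

    def push(self, peak):
        self.items.append(peak)

    def structure(self):
        # Pad with zeros until the queue fills up, so the
        # "faked" range always has a fixed number of slots.
        pad = [0.0] * (self.size - len(self.items))
        return pad + list(self.items)

q = PeakQueue(size=4)
for peak in (0.1, 0.5, 0.9, 0.4, 0.2):
    q.push(peak)
print(q.structure())  # [0.5, 0.9, 0.4, 0.2]
```

Feed that structure to whatever iterates over your bars and you get a moving "range" driven by a single peak value.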

I find that the following example, which employs both the iTunes Music Visualizer protocol and [ahem] Kineme Audio Tools, works really nicely in Snow Leopard's iTunes.

Of course, it's back to the Audio Input / microphone channel again.

There might be a way around this using Soundflower in Leopard; I haven't got a reliable Soundflower installation working in the SL system I'm on right now, so I can't say for sure.

Welcome, Bojo.

Hope that helps [after a fashion].

Attachment: tryout.qtz (502.46 KB)

dust's picture
Re: Hello from Washington (State)

cybero, check out Soundflower on Google Code -- they've gone public with a new release that seems to work fine. I haven't properly tried it yet, but I installed it over the one I had working and there are still no kext errors, so I think it's good to go.

Bojo's picture
Re: Hello from Washington (State)

Thanks for the info. I downloaded the Audio Tools; now I'm just using trial and error to figure it out. The Device UID kind of has me confused: I think it's the device serial number. Is that correct? And where would I find that information?

cybero's picture
Re: Hello from Washington (State)

Take a look at the example compositions that came with the Audio Tools download, in particular the audio-input.qtz example.

AppleHDAEngineInput:1 [basically your built-in microphone] works. The Linear setting gives you a longer frequency spread; the Quadratic and Logarithmic averages give somewhat shorter frequency range results.
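A rough sketch of why the averaging mode changes the apparent spread (the band counts and spacing here are illustrative, not Kineme's actual implementation): with logarithmic spacing most of the bands sit in the low end and the whole upper half of the spectrum collapses into the last band, so the visible range looks shorter.

```python
def band_edges(n_bins, n_bands, mode="linear"):
    """Split n_bins FFT bins into n_bands bands; return the
    band boundaries as bin indices."""
    if mode == "linear":
        # equal-width bands across the whole spectrum
        return [i * n_bins // n_bands for i in range(n_bands + 1)]
    # "log": geometric spacing -- bands widen with frequency
    edges = [0]
    for i in range(1, n_bands + 1):
        e = round(n_bins ** (i / n_bands))
        edges.append(max(e, edges[-1] + 1))  # keep edges increasing
    return edges

def band_averages(magnitudes, edges):
    """Average the bin magnitudes inside each band."""
    return [sum(magnitudes[a:b]) / (b - a) for a, b in zip(edges, edges[1:])]

print(band_edges(256, 8, "linear"))  # [0, 32, 64, 96, 128, 160, 192, 224, 256]
print(band_edges(256, 8, "log"))     # [0, 2, 4, 8, 16, 32, 64, 128, 256]
```

Note how the last logarithmic band alone spans bins 128-256.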

If I'm using Apple's Audio Input patch [set to the built-in mike], I find that queuing the peak output gives a nice faked range that is fairly audio-responsive [and representative].

Using the Built-in Input [the Audio Tools equivalent setting is AppleHDAEngineInput:2] will result in only the last two [Apple's] or three [Kineme's] frequency bands being represented graphically in the template composition.

cybero's picture
Re: Hello from Washington (State)

Cheers for the heads up, dust.

Will look that up later today.

cybero's picture
Re: Hello from Washington (State)

Actually, come to think of it, if you are using the Music Visualizer template, then the audio will be read straight from the track.

Have a look at the template composition and run it in iTunes, or look at Jelly, one of the default QC visualizers installed with Leopard.

Play a track.

Cut the volume down in iTunes.

Cut the volume down on the System, and cut the volume down on the microphone input. What do you get in iTunes with a track playing, but its volume set to nought in iTunes?

What you get is that the track still plays: because the protocol uses an audio input path published through a pre-existing API, your compositions have access to audio-interactive visualization data that tracks the volume peaks and range, with audio spectrum data too.
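A toy illustration of that point (plain Python, with made-up sample values): the visualizer's data comes from the decoded track, so a gain applied afterwards to the speaker output doesn't change what the visualizer sees.

```python
def peak(samples):
    """Peak level of a sample buffer, as a QC-style 0..1 value."""
    return max(abs(s) for s in samples)

# Track samples as decoded by the player (illustrative values).
track = [0.0, 0.8, -0.6, 0.3, -0.9]

# What the speakers get after the iTunes volume slider.
volume = 0.0  # slider pulled all the way down
speaker_out = [s * volume for s in track]

# The protocol feeds the visualizer from the decoded track,
# not from the speaker output, so the peak survives muting.
print(peak(track))        # 0.9 -- the visualizer still sees the music
print(peak(speaker_out))  # 0.0 -- what a microphone tap would see
```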

This approach doesn't come with all of the problems of queuing data from an Audio Input patch, although that can and does have its uses, limited only by the extent of our imagination and QC's ability to handle data and generate graphics in response to that raw and treated data.