GL Tools Read Pixels Howto

toneburst's picture

Can someone give me a quick walkthrough of using the new Read Pixels patch to get scene depth values? I've tried just dropping a Sphere patch into the top level of a comp and adding a Read Pixels patch connected to a Billboard, but I just get a white image.

a|x

cwright's picture
sample

try this

Attachment: readPixels.qtz (3.32 KB)

toneburst's picture
Don't Worry Guys

I worked it out myself. Or rather, I actually read the description properly.

Sorry for wasting your time...

a|x

gtoledo3's picture
It's cool isn't it... it is

It's cool, isn't it... it is interesting that there are choices for the color buffer vs. the depth buffer.
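
For what it's worth, here's roughly what the two read-backs look like in plain OpenGL. This is just a sketch of the idea, not necessarily what the patch does internally; note that raw depth values are non-linear and tend to sit near 1.0, which is why an unprocessed depth readback often looks almost uniformly white on a Billboard.

/* Plain-OpenGL sketch (macOS headers) of reading back either buffer.
 * Assumed to be analogous to what a read-pixels operation does; the
 * GL Tools patch's actual implementation may differ. */
#include <OpenGL/gl.h>
#include <stdlib.h>

void read_back(int width, int height)
{
    /* Color buffer: 8-bit RGBA per pixel. */
    unsigned char *color = malloc(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, color);

    /* Depth buffer: one float per pixel in [0, 1], non-linear in eye space,
     * so most of a typical scene reads back very close to 1.0 (white). */
    float *depth = malloc(width * height * sizeof(float));
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth);

    /* ... use the data ... */
    free(color);
    free(depth);
}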

usefuldesign.au's picture
Download complete?

This patch (readPixels.qtz) shows up in the QC app. I have Kineme GL Tools v1.1 installed (with the very cool alpha blending, thanks for that). In the Patch Creator palette there is no sign of the "Kineme GL Read Pixels" patch, even though it's there and working in the composition?!

No "choices for the color buffer, vs. the depth buffer", either.

I'm missing what exactly?

Alastair

gtoledo3's picture
Sorry, I should have

Sorry, I should have clarified that I was referring to the beta.

cwright's picture
privacy

If you're not using the latest version, ReadPixels is a private patch that won't show up if you have private patches disabled (by default, QC doesn't display them, but there are defaults tricks or KinemeCore tricks to change that).

Sounds like you're using the older version (before the depth buffer option was introduced).

dust's picture
tb.kineme.plasticman

So this is a bit off topic, but seeing as you guys are both in this thread, I thought I would share something of yours that I tweaked...

http://cordova.asap.um.maine.edu/~oconnord/deyeilate.mp4

So I watched the plasticman implementation of the Soundflower patch and thought it was cool being used in the eye. I have this VJ project for an interactivity class utilizing Isadora, and the guts of the example is a patch called "eye", which is just an Isadora version of blob tracking. So I use the blob-tracking "eye" patch to play music I call literal "air guitar" (memo already got "webcam piano" coined, so I had to make a different name up). That laptop gets fed into another laptop that is doing the frequency watching or spectrum indexing or whatever you want to call it.

As well as using the Kineme and tb structure code, I added another source of inspiration: what is inside the eye, the brain. tb is doing some cool stuff with the MRI files or whatever, so I threw in a few DICOM pics of brain scans to make some sort of metaphorical reference. Someone told me once that the eye is the window to your soul; the soul, I guess, can be thought of as inside a person, so an x-ray or MRI represents the inside, in the metaphor and literally as well.

My prof wanted some meaning behind the project. I guess in college you can't say you do something for "art"; you need to articulate a meaning and stuff, either implicitly or explicitly, to the viewer. I'm not too good at making up mumbo jumbo about something I just think looks cool, especially when I stole it (or the visual part, at least) from someone else. I guess Picasso said good artists copy, great artists steal, though in a sense this is only good because you can't steal something that is free. Nor am I trying to claim any credit; it's just something I'm doing to save time on a homework assignment. You guys built it, so I thought I would share.

So this is just a test screen capture, with some random song off my iPod. I got tired of connecting patches while waving my hands at the camera to see the changes, so I just used a random tune, but I explained how it will actually be presented, or partly: there is some controllerism stuff and iPhone stuff going on as well. I have group members that need to do something.

I do get some interesting results creating a visual feedback loop, though. In the presentation the audio is being made from the camera, but the camera is pointed at the screen, which is reacting to the audio being made from the camera that is pointed at the screen: some sort of recursive AV feedback loop.

So I think the concept is brilliant, actually taking the waveform's audio representation and making a circle out of it, etc.
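
In case it helps anyone, the basic idea of wrapping a waveform around a circle is just a polar mapping. A rough C sketch of that idea (not tb's actual composition; the base_radius and gain parameters are made up and just there for illustration):

/* Sketch: sample i goes to angle 2*pi*i/n, and its amplitude pushes the
 * point in or out radially. base_radius and gain are arbitrary; tune to taste. */
#include <math.h>

void waveform_to_circle(const float *samples, int n,
                        float base_radius, float gain,
                        float *x, float *y)
{
    for (int i = 0; i < n; i++) {
        float angle = 2.0f * (float)M_PI * (float)i / (float)n;
        float r = base_radius + gain * samples[i];
        x[i] = r * cosf(angle);
        y[i] = r * sinf(angle);
    }
}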

Is there a way of conforming the Kineme spectrum structure to Apple visualizer compliance?

Anyway, I will record the performance next week with all the parts working, not just this test...

cwright's picture
waves

dust wrote:
Is there a way of conforming the Kineme spectrum structure to Apple visualizer compliance?

A Kineme spectrum structure doesn't exist; we output waveforms (which are not spectral data).

The Apple Vis structure is a horribly crippled, very small data set. 16 samples. That's all. No channel information.

Waveform stuff (from the Kineme audio patches) is multi-channel, with several more samples.

Using a simple Structure Index Member (to select which channel to use) will sort of work.
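
If you also need to shrink one channel's waveform down to the 16 slots of the Apple visualizer structure, averaging into bins is one simple option. A rough sketch in plain C (not a QC patch; the 16-sample target comes from the description above):

/* Sketch: bin a longer single-channel waveform down to 16 averaged values,
 * roughly matching the size of the Apple visualizer structure. */
#define APPLE_VIS_SAMPLES 16

void downsample_waveform(const float *samples, int n,
                         float out[APPLE_VIS_SAMPLES])
{
    for (int i = 0; i < APPLE_VIS_SAMPLES; i++) {
        int start = (i * n) / APPLE_VIS_SAMPLES;
        int end   = ((i + 1) * n) / APPLE_VIS_SAMPLES;
        float sum = 0.0f;

        /* Average all source samples that fall into this bin. */
        for (int j = start; j < end; j++)
            sum += samples[j];

        out[i] = (end > start) ? sum / (float)(end - start) : 0.0f;
    }
}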

Attachment: waveforms.png (247.28 KB)

yanomano's picture
like it ;

new readPixels = awesome ;