Image With GLBuffer Patch

inputs:
   - dropdown to select which buffer (RGBA, depth, stencil, accum, ...); see the sketch below

outputs:
   - image

and perhaps a corresponding GLBufferOverridePatch:

inputs:
   - dropdown to select which buffer to override
   - image

outputs: (none)
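For illustration, a minimal sketch of how the buffer dropdown might map onto glReadPixels arguments (the enum values and function name here are hypothetical; the accum buffer is omitted since it's read back via glAccum rather than glReadPixels):

    #include <OpenGL/gl.h>

    /* Hypothetical dropdown values for the buffer selector. */
    enum { kBufferRGBA, kBufferDepth, kBufferStencil };

    static void readSelectedBuffer(int selectedBuffer, void *data)
    {
        GLint viewPort[4];
        glGetIntegerv(GL_VIEWPORT, viewPort);

        switch (selectedBuffer)
        {
            case kBufferRGBA:
                glReadPixels(0, 0, viewPort[2], viewPort[3],
                             GL_RGBA, GL_UNSIGNED_BYTE, data);
                break;
            case kBufferDepth:
                glReadPixels(0, 0, viewPort[2], viewPort[3],
                             GL_DEPTH_COMPONENT, GL_FLOAT, data);
                break;
            case kBufferStencil:
                glReadPixels(0, 0, viewPort[2], viewPort[3],
                             GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, data);
                break;
        }
    }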

franz:
use?

Can I use this to get an image of my "whole" comp, so I can bypass the "Render in Image" patch and then apply effects over it?

smokris:
theoretically

Yes, that's what I have in mind.

smokris:

GL Tools implements part of this feature --- retrieving the RGBA buffer --- via the "Kineme GL Read Pixels" patch.

Chris, I noticed in GLReadPixels.m, line 78, you have the following commented out:

//   glReadPixels(0,0,viewPort[2],viewPort[3], GL_DEPTH_COMPONENT, GL_FLOAT, data);

...so it looks like it shouldn't be too difficult to add depth-buffer functionality too (we'd just need to convert the float data into something CGBitmap-compatible, since it's single-channel float rather than RGBA).
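Something like this, maybe (completely untested; the helper name is made up, and it leans on GL normalizing depth values to [0,1]):

    #include <stdlib.h>
    #include <OpenGL/gl.h>

    /* Read the float depth buffer and scale it to 8-bit grayscale,
       which CGBitmapContextCreate can consume directly. */
    static unsigned char *copyDepthAsGray8(GLint w, GLint h)
    {
        GLfloat *depth = malloc((size_t)w * h * sizeof(GLfloat));
        unsigned char *gray = malloc((size_t)w * h);

        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth);

        for (long i = 0; i < (long)w * h; i++)
            gray[i] = (unsigned char)(depth[i] * 255.0f);

        free(depth);
        return gray;   /* caller frees */
    }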

Could we add this to an upcoming beta sometime? (I'd like to try doing that Aperture Simulator we talked about a few weeks ago.. :^) )

cwright:
try before buy

It's commented out for a reason.

Two reasons, in this case.

1) QC doesn't actually support 1-channel images very well, so you have to glReadPixels (very slow), and then run a post-processing pass (slow) to make it into a form QC can use reliably.

2) reading the depth-buffer mid-frame won't always give you proper results, depending on the way the graph is evaluated.

So between horribly slow operation and possibly wrong output, I'd highly recommend finding an alternative way to accomplish this (e.g. render the scene monochromatically without lighting, and use the Fog patch to get pseudo-depth).
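For reference, the fog trick in raw GL terms looks roughly like this (the start/end distances are placeholders; in QC you'd set the equivalent values on the Fog patch):

    GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };

    /* Draw the scene unlit in white with black linear fog, so fragment
       brightness falls off with distance: a pseudo-depth image. */
    glDisable(GL_LIGHTING);
    glColor3f(1.0f, 1.0f, 1.0f);

    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_LINEAR);
    glFogfv(GL_FOG_COLOR, fogColor);
    glFogf(GL_FOG_START, 1.0f);    /* placeholder near distance */
    glFogf(GL_FOG_END, 100.0f);    /* placeholder far distance */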

[2.5: readPixels is very sloppy in terms of how QC works... It's fine, but it doesn't fit in the QC model at all, so it's a big hack that I don't want to promote to production ever...]

smokris:

Thanks for the explanation.

Speed isn't a huge factor (I'm guessing this Aperture Simulator thing won't run anywhere close to realtime), so I'd just be doing offline rendering.

I had hoped that I could just render an arbitrary 3D scene, use two ReadPixels patches to grab the color and depth buffers, meld the two with a Core Image filter, and display the result on a fullscreen billboard as a final layer.
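For the melding step, I was picturing something like this with stock Core Image filters (untested; colorImage and depthImage stand in for the two ReadPixels results):

    #import <QuartzCore/QuartzCore.h>

    /* Fake depth-of-field: blur the color image, then use the depth
       image as a mask to blend between sharp and blurred versions. */
    CIImage *blurred = [[CIFilter filterWithName:@"CIGaussianBlur"
        keysAndValues:kCIInputImageKey, colorImage,
            @"inputRadius", [NSNumber numberWithDouble:10.0],
            nil] outputImage];

    CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"
        keysAndValues:kCIInputImageKey, blurred,
            @"inputBackgroundImage", colorImage,
            @"inputMaskImage", depthImage,
            nil];

    CIImage *dof = [blend outputImage];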

I'll try the fog hack.