multiple rendering

LukeNeo's picture

Hi, in the latest version of GLTools there is a "GL Stereo Environment" patch, which renders any geometry placed inside it twice, from different points of view. My question is: how is it possible to do multiple renderings of one scene in a single patch, like in the GL Stereo Environment? For example, if I were able to render one scene from the light's point of view and from the observer's point of view, we could implement shadow mapping (not to mention all the other techniques which need multiple renders, like ambient occlusion). Thank you!
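For what it's worth, the two-pass idea behind shadow mapping can be sketched in miniature on the CPU. This is a hypothetical Python toy (nothing QC-specific; the points, light setup, and bias are made up for illustration):

```python
import numpy as np

# Pass 1: "render" scene depth from the light's point of view.
# The scene is three points (x position, depth from light); the shadow
# map stores, per texel, the depth of the closest surface it sees.
points = np.array([
    [0.25, 1.0],   # closest to the light -> lit
    [0.25, 3.0],   # behind the first point -> shadowed
    [0.75, 2.0],   # nothing in front of it -> lit
])

MAP_SIZE = 4
shadow_map = np.full(MAP_SIZE, np.inf)
texels = np.minimum((points[:, 0] * MAP_SIZE).astype(int), MAP_SIZE - 1)
for t, d in zip(texels, points[:, 1]):
    shadow_map[t] = min(shadow_map[t], d)

# Pass 2: when shading from the observer's point of view, a point is
# lit only if its light-space depth matches the closest depth the
# light recorded (with a small bias against self-shadowing).
BIAS = 1e-3
lit = points[:, 1] <= shadow_map[texels] + BIAS
print(lit)  # [ True False  True]
```

The second render (from the observer) would sample this depth map per fragment; the toy just does it per point.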

franz's picture
Re: multiple rendering

Multipass rendering is not supported in QC shaders as far as I know. You would have to copy the scene and render it from a different angle, then, if necessary, place each copy in a Render In Image patch and manually composite the results.

To display everything without pre-rendering to an image, you could possibly use the GL_Viewport feature.

Have a look at the GL viewport example patch, bundled with GLTools.1.6 sample compositions.

gtoledo3's picture
Re: multiple rendering

Also, to do good shadow mapping or AO, you would need images with more bit depth than GLSL (or CI) in QC can handle... the internal pipeline of QC would need to be changed to handle 32-bit images.

toneburst's picture
Re: multiple rendering

I'll be interested to see where this thread goes, as I'm interested in deferred-rendering myself.

One very simple way to do it is simply to place duplicate geometry inside 2 discrete Render In Image patches.

a|x

toneburst's picture
Re: multiple rendering

gtoledo3 wrote:
Also, to do good shadow mapping or AO, you would need images with more bit depth than what GLSL (or CI) in QC can handle... the internal pipeline of QC would need to be changed to handle 32 bit images.

I'm a bit confused on this one, I have to say. It certainly seems that the QC Editor likes to deal with images in 8-bit/channel form, but I happen to know that Quartz Composer compositions, when used in other applications, can deal in higher-depth material. So I think the 8-bit limit might have more to do with the context the QC Editor renders to than with the actual behind-the-scenes image pipeline. I should really know this stuff... can anyone clear this up definitively?

Not that it helps in the shadow-mapping case, of course.

a|x

toneburst's picture
Re: multiple rendering

http://en.wikipedia.org/wiki/Core_Image and other info on the web suggest Core Image uses 32 bits/channel internally.

I'd be really surprised if GLSL shaders aren't rendered internally in 32 bits, too. I guess the issue is with the RII patch (once again).

Again, don't know if this is an issue when using it outside the QC Editor environment. Clarification, anyone? Can a RII patch render happily in 16/32-bit mode when the containing QTZ is running inside an application other than the QC Editor?

a|x

psonice's picture
Re: multiple rendering

I'm a bit confused here too - I've been using 32-bit extensively for ages, for effects that absolutely require it :) It supports values outside 0..1, signed values, etc., for both GLSL and CI.

To use it, you just need to supply a 32-bit image at the start of the chain, either by loading a 32-bit float format image (TIFF supports it; here's a 32-bit float TIFF I made earlier: http://www.interealtime.com/32bit.tif ) or by using a Render In Image set to 16/32-bit (note that 16-bit is only compatible with ATI GPUs; I strongly recommend avoiding it!). Any filters down the chain will use the image's pixel format, so your filters will happily run in 32-bit mode.
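As a rough illustration of why the pixel format at the start of the chain matters (a toy NumPy sketch, nothing QC-specific): an 8-bit unsigned buffer clamps and quantises, while a 32-bit float buffer keeps HDR and signed values intact.

```python
import numpy as np

# HDR-ish values: one negative, one in range, one at the limit, one above it.
hdr = np.array([-0.5, 0.2, 1.0, 4.0], dtype=np.float32)

# What an 8-bit unsigned buffer keeps: clamped to [0, 1]
# and quantised to 256 levels.
as_8bit = np.round(np.clip(hdr, 0.0, 1.0) * 255) / 255

# A 32-bit float buffer keeps the signed and >1 values intact.
as_float = hdr.copy()

print(as_8bit)   # the negative and >1 values are gone
print(as_float)  # everything survives
```

Once a filter chain has collapsed to 8-bit anywhere, the lost range can't be recovered downstream, which is why the source image's format drives the whole chain.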

Also, I'm pretty certain CI runs natively in 8 bit mode if supplied an 8 bit image - otherwise it would be incompatible with a lot of older hardware, and waaaay slower than it is!

One of the 'hidden prefs' settings shows additional image info in the mouse over pop-up - turn that on, and you can see the pixel format of the image as it passes through your chain.

psonice's picture
Re: multiple rendering

Wow, deferred rendering in QC? That's pretty hardcore!

I recommend a read of smash's blog, he discusses his implementation of deferred rendering (and using it with AO, motion blur etc. too!) in quite some detail: http://directtovideo.wordpress.com/ (scroll down a way for the deferred rendering post, the rest is well worth reading too).

Good luck with it :)

toneburst's picture
Re: multiple rendering

All good info there, psonice. Thanks a lot for that.

This whole bit-depth thing has become an issue for me again because I've been working on Core Image Filters that need to apply colour lookup-tables to 32bit images. My solution to this issue at the moment is to create a custom filtering function to extract values from the lookup table, so I get smooth colour even when applying relatively small LUTs to 32bit images.

Your tip about forcing 32-bit rendering is very timely, because now I know how to test how well my filter works with higher bit-depth images.

:)

a|x

toneburst's picture
Re: multiple rendering

Well, by 'deferred rendering' I didn't necessarily mean all the bells and whistles that term sometimes implies. I just meant applying some post-process effects like DoF or SSAO, reconstructing normals from a depth map, or creating thickness maps for subsurface-scattering setups, that kind of thing.
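The depth-to-normals step can be sketched with screen-space finite differences. This is a hypothetical CPU toy in NumPy, not a QC/GLSL implementation; the plane-shaped depth buffer is made up for illustration:

```python
import numpy as np

H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)

# A tilted plane as a fake depth buffer: depth increases with x.
depth = 0.5 + 0.01 * xs

# Screen-space gradients of depth...
dzdx = np.gradient(depth, axis=1)
dzdy = np.gradient(depth, axis=0)

# ...give an (unnormalised) normal (-dz/dx, -dz/dy, 1) per pixel.
normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# For a plane, the recovered normal is constant and tilts toward -x.
print(normals[32, 32])
```

A GLSL version would do the same thing with neighbouring texture samples of the depth map (or dFdx/dFdy) per fragment.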

Thanks for the reference though. I will definitely have a look at that.

a|x

cwright's picture
Re: multiple rendering

Holy. Crap.

That's an awesome resource, thanks for the link (more reading material to make me feel even more inferior! :)

cwright's picture
Re: multiple rendering

Be confused no more: what gtoledo was referring to was 32-bit intensity (single-channel) images. CI can't deal with those, and QC seems to clamp them to 8-bit everywhere. I haven't tried CL, but elsewhere (CI, GLSL) they're definitely truncated.

The normal 32bit 4-channel images work just fine (and they're a much more common path).

toneburst has frequently written to me suggesting I just post-process the 32-bit single-channel image into a 4-channel image, but I've always said no because of the performance impact (especially when QC should just handle it properly).

I think that's a fair summary of intents, no?

psonice's picture
Re: multiple rendering

Ah, now it makes sense :D

I've not had cause to use single channel formats yet (might have some soon though, with 16 bit monochrome CCD cameras). Hopefully it won't be much of an issue though, as I can probably work around it.

I reckon George needs an additional challenge: packing 4 If pixels into one RGBAf pixel efficiently and not screaming when trying to figure out how to get CI filters/GLSL to work with it ;)
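For what it's worth, the packing itself is just an index remapping. Here's a hedged CPU sketch in NumPy (a real CI/GLSL version would do the inverse per-fragment lookup, which is the screaming part):

```python
import numpy as np

# A single-channel float ("If") image, width a multiple of 4.
intensity = np.arange(16, dtype=np.float32).reshape(2, 8)

# Pack: each run of 4 horizontal neighbours becomes one RGBAf texel,
# giving a 4-channel float image a quarter as wide.
packed = intensity.reshape(2, 2, 4)

# Unpack: the exact inverse, losslessly.
unpacked = packed.reshape(2, 8)

assert np.array_equal(intensity, unpacked)
```

On the GPU, each fragment of the packed image would sample four source texels and write them to its R, G, B, and A channels; consumers then pick the channel from the low bits of the x coordinate.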

toneburst's picture
Re: multiple rendering

psonice wrote:
I reckon George needs an additional challenge: packing 4 If pixels into one RGBAf pixel efficiently and not screaming when trying to figure out how to get CI filters/GLSL to work with it ;)

Try it, George ;)

a|x

toneburst's picture
Re: multiple rendering

Can't remember why I wanted to do that, now... Must have been something to do with depthmaps, I guess.

a|x

gtoledo3's picture
Re: multiple rendering

Sounds like a blast!

psonice's picture
Re: multiple rendering

I think pretty much everyone feels like that reading smash's blog. His demos are awesome, but you really have to appreciate the technical side of what he does (and have a big enough GPU to handle it!)

I've been looking a lot at super resolution and compressed sensing lately, and getting that 'inferior' feeling rather a lot looking at the maths involved. I found a good fix for it though - I asked a friendly maths teacher what some of the symbols meant and got a nice blank look :)