depth of field?

gtoledo3's picture

I have semi-achieved some depth-of-field-type effects through a number of cheats... zoom blur and renders...

It wouldn't be that hard to get a decent depth of field impression if only there were something available similar to a zoom blur, but with a selectable area of what "not to blur". Even if that shape could only be a circle or an oval, it would be ideal in 90% of scenarios.

It seems like this should be doable with CI filters, and I'm looking through similar examples, but I want to post this in case I'm trying to "reinvent the wheel". Anyone messing with this? As I write this, something is telling me to look at vade's site...

psonice's picture
DOF

There are quite a few ways to do depth of field effects. A few that come to mind:

  • render the image with Z. Make a few copies (say 8) and apply a different amount of blur to each. Use the Z value from the original image to select between the copies, depending on distance from the focal point, to get depth of field. This works, but it's pretty low quality: you need loads of copies to get it smooth, which hits performance pretty hard, and there's a big problem with edges (if you have a circle in front of some other stuff and the circle is supposed to be blurred, it can still have a sharp edge, because beyond the edge the Z value changes)

  • render with Z, and calculate the DOF effect in a single shader. It's possible, and can be pretty good quality, but it's very hard to figure out.

  • render the scene multiple times with a slight offset and combine. This gives perfect quality if you have enough passes, but you need a LOT of passes to get it perfect. I've done this some time back; see the attached file for a simple example, and the plain-OpenGL sketch below for the general idea.
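For anyone who wants to see that third approach outside of QC, here's a rough plain-OpenGL sketch of the idea. It isn't what the attached composition does; it assumes a GL context created with an accumulation buffer and a projection matrix already set up, and drawScene(), the aperture, and the pass count are placeholders:

```c
/* Multi-pass depth of field: jitter the camera around the focal point and
 * average the passes in the accumulation buffer. Points at the focal
 * distance land in (roughly) the same place every pass and stay sharp;
 * everything nearer or farther smears out. */
#include <OpenGL/gl.h>   /* <GL/gl.h> outside of OS X */
#include <OpenGL/glu.h>
#include <math.h>

void renderWithDOF(void (*drawScene)(void),
                   double eyeX, double eyeY, double eyeZ,
                   double focusX, double focusY, double focusZ,
                   double aperture, int passes)
{
    glClear(GL_ACCUM_BUFFER_BIT);

    for (int i = 0; i < passes; i++) {
        /* Offset the eye on a small circle (the "aperture"); every pass
         * still looks at the same focal point. */
        double a  = 2.0 * M_PI * i / passes;
        double dx = aperture * cos(a);
        double dy = aperture * sin(a);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(eyeX + dx, eyeY + dy, eyeZ,
                  focusX, focusY, focusZ,
                  0.0, 1.0, 0.0);

        drawScene();

        /* Accumulate this pass with equal weight. */
        glAccum(GL_ACCUM, 1.0f / passes);
    }

    /* Write the averaged result back to the colour buffer. */
    glAccum(GL_RETURN, 1.0f);
}
```

A more correct version shears the projection frustum instead of re-aiming the camera, but even the simple version shows why the pass count has to be high before the blur stops looking like a stack of discrete copies.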

Attachment: DOF.qtz (29.86 KB)

gtoledo3's picture
That's some really good food

That's some really good food for thought... in some recent tests, I've just been doing stuff typical of old-school cel animation, treating render layers like sheets... which is basically what you list as your first option... nothing to write home about so far.

That's a really interesting example of achieving that with a CI filter... I had been thinking CI, but I guess I wasn't expecting that! Very clever.

That's a good point about GLSL shading. I was thinking more about scenarios of applying an effect to everything within a Render in Image patch, and being able to pick some kind of oval-type coordinates (maybe multiple) for what "isn't blurred".

... so this CI setup is cool because it's visually very much what I'm thinking of, though I can see how the more "realistic" you get with this method, the worse it would be on performance... though maybe not, because I haven't tried it yet. Thanks for sharing your thoughts on this.

cwright's picture
hack-tastic

Believe it or not, this sort of stuff has actually had a branch in GLTools for as long as readPixels has existed (privately, we've been trying to get depth-buffer reading into QC textures without abysmal performance hits).

Today, I'm pleased to announce that I've finally learned enough OpenGL, and the right sequence of QC object massaging, to actually pull this off:

The low framerate is due to slow iteration/replication/rendering on a GMA950...

This should allow for a real DoF shader/filter.

Attachment: DepthBuffer.png (104.76 KB)

psonice's picture
Looks very promising!

How good is the performance? Is it better than pushing Z into the colour or alpha channel with a shader?

This is obviously a huge benefit even if it's slower, though, as it presumably means you can keep RGBA and still have Z.

cwright's picture
perf + notes

Profiling on my machine indicates that reading the depth buffer in the pictured composition consumes ~0.6% of the CPU (very little). Behind the scenes it's performing a VRAM-to-VRAM copy (glCopyTexSubImage2D), which is ludicrously fast on real video cards (and even snappy on my fake one). It might not even copy: depending on the implementation, it might just make the texture point at the actual color/depth buffer (that could make for weird problems when drawing onto oneself though... not sure if it's actually done that way for that reason).

From there, I create a CIImage from the GL texture (again, no copy), then a QCImage out of that (no copy), which can then be reused as a GL texture (no need for a copy, since it's already a GL texture...).

So overall, I'm actually quite impressed with it, and it's about as fast as it can get (even outside of QC). Previously, we were using glReadPixels, which dumped VRAM to system RAM across the bus, created a CoreGraphics context with the data to make a CGImageRef from (possibly one or two more copies/swizzles there), then made a QCImage out of the CoreGraphics image (another copy, possibly just to VRAM, possibly a system RAM copy and then a VRAM copy from that...). Phew.
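For anyone curious what the new route looks like outside of QC, the depth-grab step is roughly this. It's only a sketch, not GLTools' actual code, and the texture setup details (internal format, filtering) are guesses:

```c
/* Grab the depth buffer into a GL texture without leaving VRAM,
 * roughly the glCopyTexSubImage2D route described above.
 * Assumes an existing GL context. */
#include <OpenGL/gl.h>   /* <GL/gl.h> outside of OS X */

GLuint createDepthTexture(GLsizei width, GLsizei height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* Allocate storage once; NULL data because the copy fills it later. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                 width, height, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
    return tex;
}

/* Call after rendering the scene: copies the current depth buffer into
 * the texture entirely on the GPU, with no readback across the bus. */
void copyDepthToTexture(GLuint tex, GLsizei width, GLsizei height)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                        0, 0,   /* destination offset in the texture */
                        0, 0,   /* lower-left corner of the read buffer */
                        width, height);
}
```

The whole thing stays in VRAM, which is why it profiles so cheaply compared with a readback.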

It also means you only have to render stuff once, which can be a huge win for performance.

psonice's picture
Want!

That sounds perfect! I can shift some of my effects from 2-pass rendering to 1-pass (considering that the 2 passes sometimes both include an iterator with 100s of iterations, that's a MASSIVE win!)

Will it be in the next GLTools?

gtoledo3's picture
I resemble that remark :o) !

I resemble that remark :o) ! (reminds me of that audio reactive thing... or a lorenz attractor?)

This is extremely cool. GL Tools rocks.

toneburst's picture
Very Cool

...when do we get our hands on that? And what made you change your mind and give this a go? (Last time I mentioned it, I seem to remember you weren't keen...)

Great stuff!

a|x

cwright's picture
oldskool vs. newskool

Our profiling with the old method (glReadPixels) indicated that while reading the color buffer (and generating a QCImage from it) was usefully fast, doing the same thing with the depth buffer was not. Even now, the sequence of glReadPixels on the depth buffer, make a CGContext, make a CGImage, make a QCImage from the CGImage, and release all the intermediate stuff drops the framerate from hundreds of frames per second to 6 frames per second -- not useful.

However, when I learned the more modern way of GL texture reading (glCopyTexSubImage2D), I re-profiled and discovered that this was acceptably fast (even with it pumping out glErrors every frame, it's still 30fps or better on my machine).
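For contrast, the old route boiled down to a depth readback into system memory before any CG/QC wrapping could even start. Again just a sketch, with a float readback assumed:

```c
/* Old route: read the depth buffer back into system RAM with glReadPixels,
 * which stalls the pipeline and drags the data across the bus before it
 * can be wrapped in a CG context / QCImage. Caller frees the buffer. */
#include <OpenGL/gl.h>   /* <GL/gl.h> outside of OS X */
#include <stdlib.h>

float *readDepthToSystemMemory(GLsizei width, GLsizei height)
{
    float *depth = malloc((size_t)width * height * sizeof(float));
    if (!depth)
        return NULL;
    glReadPixels(0, 0, width, height,
                 GL_DEPTH_COMPONENT, GL_FLOAT, depth);
    return depth;
}
```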

franz's picture
meanwhile,

A modded version that uses fewer patches, for a not-so-different effect.

Attachment: DOF frz.qtz (13.54 KB)

gtoledo3's picture
http://local.wasp.uwa.edu.au/

http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/blur/

This is a good read...

That's an interesting example, Franz.

Though... this is a weird thought... I have been messing around with Value Historian and zoom blur/mask chains to manually track, with my mouse, what the least-blurred position should be, and then just letting the zoom blur automate for playback. I've had interesting results, but I don't think I'm going to mess with it much more because of imminent GL Toolage...

One thing that came out of this is that it makes me realize depth of field is also an undeniable visual phenomenon that you don't need to set out to "create" all of the time... for instance, in the post I did with the "inthebox" music visualizer qtz, I noticed that when some things move quickly and others move slowly within the same frame, and they are clearly positioned at different Z values, a natural depth-of-field effect happens in the eye. I think this is amped up even more by the gradient edges/field-of-view shifting in that particular case, and by the fact that objects close in Z move very quickly at parts, while some that are further away move slower and kind of percolate.

...I think that what makes the eye see the imperfection in these types of examples is that when objects are static, our own sense of depth of field is largely a byproduct of the difference between our direct line of sight and our peripheral vision. We don't actually "see" this much blur, in this way (even though the cubes are moving, they aren't flying wildly out of our actual field of view)... it's more subtle and smooth, I guess...

I would bet that examples like this would improve in perceived quality in direct correlation to how closely the edges of the screen/blurred portion of the image line up with our actual peripheral vision.

Could very well be wrong about everything I just wrote... just some thoughts.

capsaicin's picture
Re: hack-tastic

Any updates on this? I'd love something like, say, a DOF environment that uses shaders rather than a Render in Image (for both CPU/GPU efficiency and a more accurate, photographic appearance), and that I could just drop my whole 3D composition into. It could then have published inputs for focal distance and focal depth.

Is something like this possible? It probably is, but I'm a dunce with code and wouldn't even know where to begin... is something like this in the pipeline for GL Tools or Kineme 3D?