Rendering Live Video as 3D Volume

toneburst's picture

I've worked out a QTZ that renders live video input as a 3D volume using GLSL raycasting.

Funnily enough, the hardest part was working out how to get around QC's lack of support for volumetric textures. Thanks to the several people who suggested making a 2D grid, with each cell representing a slice along the z-axis. It's not fast, but it does work!
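The slice-grid workaround is easy to sketch outside QC. Here's a minimal NumPy version (illustrative names only, not code from the actual QTZ) packing 100 slices of 100×100 into one 1000×1000 "sprite sheet", as described:

```python
import numpy as np

def pack_slices(slices, grid=10):
    """Pack grid*grid z-slices into one 2D sprite-sheet texture.
    Cell (row, col) of the grid holds slice row*grid + col, so z runs
    left-to-right, top-to-bottom across the sheet."""
    n, h, w = slices.shape
    assert n == grid * grid, "need exactly grid*grid slices"
    atlas = np.zeros((grid * h, grid * w), dtype=slices.dtype)
    for z in range(n):
        row, col = divmod(z, grid)
        atlas[row * h:(row + 1) * h, col * w:(col + 1) * w] = slices[z]
    return atlas

# 100 slices of 100x100 -> one 1000x1000 texture, as in the post
volume = np.random.rand(100, 100, 100).astype(np.float32)
atlas = pack_slices(volume)
print(atlas.shape)  # (1000, 1000)
```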

a|x http://machinesdontcare.wordpress.com


cybero's picture
Re: Rendering Live Video as 3D Volume

Beautiful.

toneburst's picture
Re: Rendering Live Video as 3D Volume

Thanks!

Wish it ran a bit faster though (as always). I'm having lots of ideas for variations at the moment.

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

I'd like to see it animated :) So far, it's just scrolling through z (time), I'm sure it must be possible to have it "playing" instead of just scrolling. Can't think how off-hand, but it's possible :)

Good work btw.

toneburst's picture
Re: Rendering Live Video as 3D Volume

Cheers!

Not completely sure what you mean by 'animated' though. Care to explain?

a|x

vade's picture
Re: Rendering Live Video as 3D Volume

I'm assuming he means using a cache of frames and not a live stream, buffered, so you can move arbitrarily through time.

toneburst's picture
Re: Rendering Live Video as 3D Volume

Oh, yeah, you can do that. I've got some pre-prepared volumetric scan data too. The only issue is the size of each volume texture. The current setup uses a 1000x1000px image, giving 100 z-axis slices of 100px square, so each one is largish, texture-memory-wise. I'd like to make them larger, too, but that's going to slow things down even more.

I can see how it would be possible to have a Queue (or some other kind of store) of more than 100 slices, and dynamically change which slices are used by the volume texture (and therefore rendered by the shader).

There are some other rendering possibilities I want to investigate first though.

a|x

gtoledo3's picture
Re: Rendering Live Video as 3D Volume

When you flatten that kind of setup into a 2D image (or what I'm assuming you have going to achieve that), you get a great time blur, and when you fan it out, it gives the cool matrix wall effect.

toneburst's picture
Re: Rendering Live Video as 3D Volume

You could do a time-blur by using a plane rather than a cube as the base geometry, and not allowing rotation, so you always look at it straight-on. You'd probably need to force isometric rendering somehow. There are probably better ways of doing time-dependent blurs though.

a|x

gtoledo3's picture
Re: Rendering Live Video as 3D Volume

Yeah, I did mean that one would use a sprite. I'm not sure of your exact setup, so I can't comment on whether there are better ways or not :) That's part of how I did a clip I have on Vimeo, "new fractal".

psonice's picture
Re: Rendering Live Video as 3D Volume

What I mean by "animated" is kind of hard to describe. I mean it should look like playing video, but in 3D, rather than just "static" shapes that flow through the cube. Does that make sense?

I'm struggling to think how to do it though :(

psonice's picture
Re: Rendering Live Video as 3D Volume

I struggled, and I won \o/

I don't think my cube looks as good as yours, but I have it animating. What I did:

  • Store the video frames in a queue as normal, but store X times more than the number of layers in the cube. X is the 'step' input in my composition.

  • Iterate through the number of layers (the 'levels' input = layers). Select the current image from the queue using 'layer * step'. This makes each layer 10 frames behind the previous one, meaning that it plays 10 frames of video before moving back one layer, animating it :)

It goes slow as hell until the cube is filled btw. Then I get 60fps.
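The steps above boil down to one indexing rule. A Python stand-in for the Queue/Iterator logic (names are illustrative, not patch names):

```python
# Keep levels * step frames queued (queue[0] = newest); layer i shows
# the frame i * step back in time, so each layer lags the previous one
# by `step` frames -- which is what makes the cube "play" the video.

def layer_frames(queue, levels, step):
    return [queue[layer * step] for layer in range(levels)]

queue = list(range(50))                        # stand-in frame IDs, newest first
print(layer_frames(queue, levels=5, step=10))  # [0, 10, 20, 30, 40]
```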

Attachment: timecube.qtz (11.33 KB)

toneburst's picture
Re: Rendering Live Video as 3D Volume

Ah... I see... cool, nice work!

Not sure it's doing exactly what it should do on my machine though. I seem to get a lot of very dark frames.

Incidentally, my setup looks different from yours because I'm using raycasting to display the volume, rather than a stack of sprites. That way, you can turn the volume right round in any direction without getting those nasty artifacts you get with a stack. You also get interpolation on the z-axis. The downside is that you have to jump through some hoops to work around QC's lack of 3D texture support, and some of these hoops are framerate-killers.

I should also be able to render some kind of isosurface using a variation on the shader, though it will probably be unusably slow.

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

So, go on... how do you get around the 3D texture issue? :)

One thing I'm thinking is that we could use OpenCL to output a mesh. This could work well for your method - you'd store the mesh, then just drop the back plane, add a new plane to the front, and move everything back slightly each frame. You could do a simple matrix-of-cubes type object so it's viewable from all angles, and use the vertex colours instead of texturing. That could work pretty well, but it wouldn't support animation.

cybero's picture
Re: Rendering Live Video as 3D Volume

That is nice work, psonice; a magical video cube.

toneburst's picture
Re: Rendering Live Video as 3D Volume

psonice wrote:
So, go on... how do you get around the 3D texture issue? :)

By sheer brute force of genius, of course. Nah... several people suggested the workaround, so it must be a standard thing to do. I created a big 2D texture, divided up into a grid, where each cell represents a slice on the z-axis. Like so.

Then you just have to add some code to the shader to scale and offset the texture coordinates according to z. I added some code to do z-axis interpolation using mix().
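The scale-offset-and-mix() lookup can be sketched outside GLSL, too. A Python stand-in (illustrative; nearest-neighbour in x/y to keep it short):

```python
import numpy as np

def sample_volume(atlas, x, y, z, grid=10):
    """Sample a grid x grid sprite-sheet as a 3D texture at normalised
    (x, y, z) in [0, 1), blending the two nearest z-slices like GLSL
    mix(). Nearest-neighbour in x/y for brevity."""
    cell_h = atlas.shape[0] // grid
    cell_w = atlas.shape[1] // grid
    n_slices = grid * grid

    def fetch(idx):
        row, col = divmod(idx, grid)
        px = col * cell_w + int(x * cell_w)  # cell offset + scaled x
        py = row * cell_h + int(y * cell_h)  # cell offset + scaled y
        return atlas[py, px]

    zf = z * (n_slices - 1)                  # which "slice" z falls between
    z0 = int(zf)
    z1 = min(z0 + 1, n_slices - 1)
    return (1.0 - (zf - z0)) * fetch(z0) + (zf - z0) * fetch(z1)

# tiny demo: 2x2 grid of 4x4 cells, slice z filled with the value z
atlas = np.zeros((8, 8))
for z in range(4):
    r, c = divmod(z, 2)
    atlas[r * 4:(r + 1) * 4, c * 4:(c + 1) * 4] = z
print(sample_volume(atlas, 0.5, 0.5, 0.5, grid=2))  # 1.5 (slices 1 and 2 blended)
```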

Quote:
One thing I'm thinking is that we could use openCL to output a mesh. This could work well for your method - you'd store the mesh, then just drop the back plane and add a new plane to the front and move everything back slightly each from. You could do a simple matrix of cubes type object so it's viewable from all angles, and use the vertex colours instead of texturing. That could work pretty well, but it wouldn't support animation.

I was looking into exactly that, too, as it happens, prompted by this stupidly impressive video by Johan Holwerda

...in which he does exactly that. He's doing marching cubes though, which is a bit hardcore for me, i think. Having said that, there are probably some CUDA examples that could be adapted to OpenCL relatively easily, by someone who knows what they're doing (ie, not me).

The cube matrix thing would fall down on structure/Iterator slowness, I fear. You could have OpenCL generate arrays of cube centre-points and colours pretty quickly, but the whole thing would grind to a halt when you tried to render those cubes with an Iterator :(

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

I was thinking more along the lines of generating a complete mesh in CL, so you don't use an iterator at all (it would be a single mesh, with a 'matrix of cubes' structure to it and vertex colour / transparency instead of texture). Dunno if it's possible in CL though, I've still not had chance to try it :(

psonice's picture
Re: Rendering Live Video as 3D Volume

ASD's lifeforce demo.

I've added an extra step: edge detection. By masking out everything but the edges, the shapes are much easier to see as the cube is more hollow. I reckon a frame step of >10 is very desirable.. perhaps 30 would start to look much better.
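The masking step might look something like this (a crude gradient-magnitude edge mask in Python, not the actual filter used in the composition):

```python
import numpy as np

def edge_mask(frame, threshold=0.1):
    """Keep only strong edges: gradient magnitude, thresholded.
    Flat areas go to 0 (transparent), edges to 1 (opaque), which
    hollows the cube out so shapes are easier to see."""
    gy, gx = np.gradient(frame.astype(np.float32))
    return (np.hypot(gx, gy) > threshold).astype(np.float32)

frame = np.zeros((8, 8))
frame[:, 4:] = 1.0                         # hard vertical edge down the middle
mask = edge_mask(frame)
print(mask[4, 3], mask[4, 4], mask[4, 0])  # 1.0 1.0 0.0
```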

toneburst's picture
Re: Rendering Live Video as 3D Volume

Ah, I see. I thought you were talking about something along the lines of the Radiohead volume data renderer you did a while back.

I'm sure that kind of thing IS possible with OpenCL. Having said that, I think most people still do that kind of stuff on the CPU, because it's easier to optimise that way. Whenever I look into it, I get intimidated when it starts talking about b-trees and other stuff that's waaaaay over my head.

The idea about rendering in sections and moving each section back on the z-axis is a good one though. Of course, it only works for scrolling stuff really. You could make it so that a new slice could be added at the end, if you wanted to scroll the other way, I suppose, but for constantly-changing volumes, it's not a goer, I don't think.

a|x

toneburst's picture
Re: Rendering Live Video as 3D Volume

That's cool. I tried to do something similar with a basic mask filter on the input, to make darker areas of the image transparent, but it doesn't pick out forms quite as well.

Is this raycast, or still a stack of sprites?

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

Still a stack of sprites. I'm using 80 to get it fairly solid. Actually, performance is pretty good if I use low res (like 128*128) and camera input, and quality is pretty OK.

I did a quick experiment with motion detection too - that could work really well. But it wasn't quite so good as it should have been :)

I've attached the composition, but be warned - it's a real mess :) Normally I work like this, then tidy it up once it's working right, but I ran out of time today. Replace the video importer patch that's connected up with whatever video source you want.

Attachment: timecube2.qtz (21.42 KB)

toneburst's picture
Re: Rendering Live Video as 3D Volume

I'll give that a go when I can get back on my laptop :)

I was using 100x100px x 100 slices, and getting 60fps with a small preview window, but performance dropped dramatically at larger render sizes.

I'd really like to do isosurfaces from volume input too, with lighting, and I've found a shader to do it. I'm going to give it a try later on.

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

Had another quick play last night, and got the renderer looking a lot better. I think the key is all in the masking - poor masking = vague, foggy cube. Good masking = solid, recognisable shapes. I've not got it exactly great (what we need is something like apple's background removal stuff from ichat, but with automatic scene recognition and support for moving backgrounds and the like, good luck with that ;)

Quick video of my last attempt:

No animation this time, and I set it to 256x256, 200 iterations. It would look a whole lot better with lighting, good luck getting your isosurface shader running with this stuff :)

toneburst's picture
Re: Rendering Live Video as 3D Volume

Brilliant!

Mind you, I quite like 'vague and foggy' ;)

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

Yeah, absolutely. So long as it's not TOO vague and foggy though, which it is by default :)

I think ideally we need something in the middle - nice, clear shapes like in my video there, but with cool transparency like in your first video. It's not really possible with my method I think, but I suspect it will be with your raycaster :)

I'll post the composition for this one up later. It's a serious mess and needs cleaning up first :)

toneburst's picture
Re: Rendering Live Video as 3D Volume

Well, I was thinking: since the basic form is determined by opacity, you could do some cool things, like Boolean operations on two different volume textures.

I envisage a static skull dataset, for example, filled with a scrolling dynamic texture.

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

Yeah, you could do a lot by combining two sets like that. Combine your skull data with my animated version, fed with video of clouds.. skull in rolling, volumetric clouds. Endless possibilities :)

gtoledo3's picture
Re: Rendering Live Video as 3D Volume

Re: cutting out backgrounds from video : One can use CV Tools to draw a quad/line structure on top of an obvious shape in a video and use the resulting image as the mask. That way, you can cut out the edges on video with a moving background in real time... like a real time lasso basically.

toneburst's picture
Re: Rendering Live Video as 3D Volume

That does sound cool. Mind you, the other stuff is already fairly fps-sapping, so I can imagine adding CV stuff on top of that might slow things to a crawl.

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

True, and beyond that how well does the CV stuff deal with frequent scene changes, animated backgrounds and the like? I think it's going to be near impossible to do this "right", but it's a lot of fun fudging :)

psonice's picture
Re: Rendering Live Video as 3D Volume

Did another quick video with this technique. I added (fake) shadows and (fake) lighting. I also rendered it in 3D, so get those coloured glasses ready :) You need to view it at youtube to get the 3D settings (just under the video):

http://www.youtube.com/watch?v=GM2RXnXgbTU&feature=youtube_gdata

leegrosbauer's picture
Re: Rendering Live Video as 3D Volume

Really Nice!

If anyone uses Rentzsch's ClickToFlash, it may need to be disabled before the 3D settings index at YouTube will appear beneath the video in the browser.

Again, very very cool video!

psonice's picture
Re: Rendering Live Video as 3D Volume

Cool, but I really screwed up the 3d separation :(

I'll redo it later, with a different video.

toneburst's picture
Re: Rendering Live Video as 3D Volume

Impressive!

That fake lighting effect is pretty effective. I've been looking at some other options for lighting, but they're all going to be slower than your method, I imagine.

The 3D effect could maybe do with a bit of work though ;)

Great stuff!

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

Lighting: an emboss patch to get a greyscale image with a shaded effect around the edges (it's only the edges that need lighting), plus a CI filter to put the colour back in.

Shadows: I cloned the iterator that draws the sprites, then put it behind them. These form the shadow. I rotate them 90 degrees on x so that they all lie flat, and move them down a bit so they appear to be flat on the floor below the object (this already gives a reasonable shadow effect, but the 'light' is always directly in front of the object). Then I put it in a shader, apply a shear to the vertices (so the shadows are cast with perspective and the light is to one side) and render as black with transparency (or light grey to give it some texture...)
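The flatten-and-shear trick reduces to a tiny bit of per-vertex maths. A hypothetical sketch (the floor height and shear factor are made-up parameters, not values from the composition):

```python
# Each sprite keeps its x/z position, is dropped onto a floor plane, and
# is sheared along x in proportion to its height above the floor, so the
# implied light comes from one side.

def shadow_position(x, y, z, floor_y=-1.0, shear=0.5):
    height = y - floor_y               # how far above the floor it was
    return (x + shear * height,        # shear: taller points shift sideways
            floor_y,                   # flattened onto the floor
            z)

print(shadow_position(0.0, 1.0, 0.0))  # (1.0, -1.0, 0.0)
```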

It'd look 10x better with self shadowing + AO though :)

The 3D effect: as I said, I screwed up the rotation.. instead of doing 'eye space' rotation it's rotating about strange angles so the effect is totally destroyed. Should have noticed that before. I've fixed it now, and the 3d effect looks cool :)

I'll do another video tonight (it takes ~5 hours to render out... 2x 200 iterations, with 2 macros in each, plus glsl shaders and some setup maths.. it's no longer realtime ;)

leegrosbauer's picture
Re: Rendering Live Video as 3D Volume

Would it be of any visual interest to attempt a point of view from within the cube of passing planes? QC3 had a developer example in the Conceptual folder called Iterator - Transparency.qtz (attached below). I've used portions of it a lot, probably excessively. Regardless, it allows for a broad range of fade-in and fade-out Z axis depth settings. Just curious.

Attachment: Iterator - Transparency.qtz (9.14 KB)

toneburst's picture
Re: Rendering Live Video as 3D Volume

Wow, yeah, that's definitely not realtime. Looks cool, though. Mind you, once I start trying to light the raycast isosurface, my setup probably won't be realtime, either ;)

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

Yeah, I'd thought about doing some cool "fly through the object" stuff. But then i moved the camera in and saw how horribly low res it was :D It's possible, but ugly...

leegrosbauer's picture
Re: Rendering Live Video as 3D Volume

Ah. Pity. Just a thought. Maybe a viable capability will avail itself in the future. This is really nice imagery just as it is. Good work!

psonice's picture
Re: Rendering Live Video as 3D Volume

I don't know.. I think with a reasonable GPU and the right way of processing it realtime should be possible.

I know in the demoscene people are raycasting through 256^3 textures in realtime (because that's the first thing they told me to do when I posted up these videos on pouet.net...). Pommak there was kind enough to point out that his GLSL raycasting shader is available in the zip for catharsis: http://pouet.net/prod.php?which=54042 Time for a poke I think :)

Smash is also using 3d textures for some heavy duty processing in frameranger ( http://pouet.net/prod.php?which=53647 - awesome demo! But no way will my radeon 2600 run it...) Check out the liquid + sphere part towards the end (perhaps the 3d smoke too, not sure). There's a little bit about it on his blog: http://directtovideo.wordpress.com/ (scroll down a short way and look for 'non polygonal elements'). Unfortunately he's talking more about deferred shading rather than handling 3d textures at speed, and I suspect that it'd be way, way over my head anyway.

toneburst's picture
Re: Rendering Live Video as 3D Volume

Cool, I will look into that shader. The one I've been using is REALLY simple (based on information on Peter Trier's blog at http://www.daimi.au.dk/~trier/?page_id=98 ), but doesn't have much in the way of optimisation. I've been reading some papers on raycasting and volume rendering, and have some things I'd like to try out. Unfortunately, there's no code in the papers, so I'll have to do some trial-and-error experimentation, I think.

a|x

toneburst's picture
Re: Rendering Live Video as 3D Volume

I downloaded the Windoze .exe from the pouet page above. There seem to be several raytracing shaders in there. I had a look at a couple, but must admit I couldn't make head nor tail of them :(

They're so heavily optimised, and completely uncommented, they're pretty impenetrable to mere mortals like me, sadly.

Talking of raycasting: http://vimeo.com/7789975

Massively impressive, as always from Mr. Quilez.

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

Yeah, I've been following the whole mandelbulb thing :) There's a fairly interesting discussion about it on pouet too, with a link to the original article about 3d mandelbrot at the beginning. Well worth a read if you like fractals.

IQ's work on it is pretty amazing, his version is near enough realtime (well, he was still optimising, perhaps it IS realtime by now :) The other renderers I've seen were taking 7 hours plus per frame (a week or more in some cases!). No small improvement! He's discussed how it works a fair bit in the pouet thread, but it's WAY over my head. I mean this might as well be written in chinese:

"Yes, it's a distance field, and I'm using the regular distance estimation G/|G'| (derived from expanding G(c+epsilon) with a order 1 Taylor series), where G=the Hubbard-Douady potential, G=(1/2^n)·log|z|. That means G'=(1/2^n)·|dz|/|z|, so distance = |z|·log|z|/|dz|"

toneburst's picture
Re: Rendering Live Video as 3D Volume

psonice wrote:
Yeah, I've been following the whole mandelbulb thing :) There's a fairly interesting discussion about it on pouet too, with a link to the original article about 3d mandelbrot at the beginning. Well worth a read if you like fractals.

I do like fractals. The renderings in James Gleick's classic book on the then-emerging field of Chaos Theory http://www.amazon.com/Chaos-Making-Science-James-Gleick/dp/0140092501 were a big influence on the things I've been trying to do in QC. As were the kind of volumetric renderings you tend to see in books on cosmology. I was a bit of a space cadet as a kid, but not the cool kind.

Quote:
IQ's work on it is pretty amazing, his version is near enough realtime (well, he was still optimising, perhaps it IS realtime by now :)

I think he was saying it took a few seconds per frame, so not really realtime. Still incredibly impressive though.

Quote:
The other renderers I've seen were taking 7 hours plus per frame (a week or more in some cases!). No small improvement! He's discussed how it works a fair bit in the pouet thread, but it's WAY over my head. I mean this might as well be written in chinese:

"Yes, it's a distance field, and I'm using the regular distance estimation G/|G'| (derived from expanding G(c+epsilon) with a order 1 Taylor series), where G=the Hubbard-Douady potential, G=(1/2^n)·log|z|. That means G'=(1/2^n)·|dz|/|z|, so distance = |z|·log|z|/|dz|"

To me, too. The funniest thing is reading people who do that kind of stuff saying 'I've never been that good at maths', or words to that effect. Not good at maths compared to who...(or what)?!

It's another plane man... Literally, I guess; when you're talking about stuff that's often 3D projections of multi-dimensional forms.

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

I guess it's all relative. I've noticed before when I've done some maths stuff, people have looked at it like I was doing some kind of magic.. and I'd definitely say I was bad at maths. In fact I failed a-level maths. IQ's work on that looks like magic to me.. scary to think that he probably looks at somebody else's work in the same way :D

That book looks interesting. Unfortunately I've just ordered my reading for the next few months, and it's a book on C. Decidedly not interesting, but unfortunately kind of necessary :(

I intended to do another video of the timecube thing last night with the 3d fixed, but suddenly got a bit too busy - I got a contract writing some iphone stuff, but I need a 3GS to do it. I spent last night putting my old 2g on ebay, only to have it sell within 3 hours.. so today I've been hectic trying to track down somewhere with a 3gs in stock or I'll be left without a phone. Finally found one, but it was hard work ;( On the bright side though, by some incredible luck I sold my old iphone at a profit! \o/

Well, I fixed the 3d effect anyway. But once that was fixed, the 3d effect was so clear that it became very obvious that the shadows were VERY wrong. They seemed to be embedded in the object in 3d space, it looked really odd :D Now I have them flat on the floor, but I suspect that they're flipped backwards. Sigh.

toneburst's picture
Re: Rendering Live Video as 3D Volume

Sorry for the delayed reply. I've not looked at this example. I'll give it a look sometime. At the moment, I'm investigating raycasting-based methods though, when I get the chance to do QC stuff at all (bit caught up with other stuff right now, unfortunately) :(

a|x

toneburst's picture
Re: Rendering Live Video as 3D Volume

psonice wrote:
I guess it's all relative. I've noticed before when I've done some maths stuff, people have looked at it like I was doing some kind of magic.. and I'd definitely say I was bad at maths. In fact I failed a-level maths. IQ's work on that looks like magic to me.. scary to think that he probably looks at somebody else's work in the same way :D

Almost certainly. There's always someone to look up to.

Quote:
That book looks interesting.

It's a good overview, and Gleick is a good writer. Very good anecdotes on some of the more oddball scientists.

Quote:
Unfortunately I've just ordered my reading for the next few months, and it's a book on C. Decidedly not interesting, but unfortunately kind of necessary :(

Why C?

Quote:
I intended to do another video of the timecube thing last night with the 3d fixed, but suddenly got a bit too busy - I got a contract writing some iphone stuff, but I need a 3GS to do it. I spent last night putting my old 2g on ebay, only to have it sell within 3 hours.. so today I've been hectic trying to track down somewhere with a 3gs in stock or I'll be left without a phone. Finally found one, but it was hard work ;( On the bright side though, by some incredible luck I sold my old iphone at a profit! \o/

It's good there's a market for 2nd-hand Apple stuff. I've got a 2nd-gen 8GB iPod Touch to get rid of, that came (almost) free with my laptop in the summer (Apple educational deal aimed at students, but I work for a college, so, why not reap the benefits, too?). Following a recent break-in at my flat, and subsequent insurance claim, I also now have 2 brand new 15" MBPs to put on eBay. Might keep one, actually. The other one is going to turn into a new TV, hopefully, if I get enough on eBay for it.

Quote:
Well, I fixed the 3d effect anyway. But once that was fixed, the 3d effect was so clear that it became very obvious that the shadows were VERY wrong. They seemed to be embedded in the object in 3d space, it looked really odd :D Now I have them flat on the floor, but I suspect that they're flipped backwards. Sigh.

That's the thing with shadows: they can really enhance '3D-ness', but only if they're exactly right, perspective-wise. Otherwise, they can end up detracting from the depth.

I wish there was a way of doing anaglyph 3D rendering without resorting to the brute-force method of rendering everything twice, with an offset. I wonder if there's some efficient way of doing it with raycasting. Maybe somehow casting two rays for every pixel... Or maybe simply doing a straight x-axis RGB channel offset on the source volume texture would work, since further-away voxels would appear to have a smaller channel offset due to perspective. Hmm... don't have time to try it, unfortunately...

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

toneburst wrote:
Why C?

Well, I've been doing obj-c plenty but without really learning C properly. That's got me going just fine on the whole, but sometimes (especially as you start poking a bit deeper) you find there's something you need that only has a C api. At that point, I tend to be running on guesswork and a prayer ;) So learning C fully seems a good idea just now.

Quote:
It's good there's a market for 2nd-hand Apple stuff. I've got a 2nd-gen 8GB iPod Touch to get rid of, that came (almost) free with my laptop in the Summer (Apple educational deal aimed at students, but I work for a college, so, why not reap the benefits, too?). Following a recent breakin at my flat, and subsequent insurance claim, I also now have 2 brand new 15" MBPs to put on eBay. Might keep one actually. The other one is going to turn into a new TV, hopefully, if I get enough on eBay for it.

Yeah, somehow apple stuff really holds its value. A 3 year old macbook pro will cost more than a brand new pc laptop.. crazy. But excellent if you have 2 to sell! It should buy a huge TV :D Perhaps even a 3d one? Some of the newer screens support 3d I believe.

Bad news about the break-in though, there's nothing worse than coming home to that :(

Quote:
I wish there was a way of doing anaglyph 3D rendering without resorting to the brute-force method of rendering everything twice, with an offset. I wonder if there's some efficient way of doing it with raycasting. Maybe somehow casting two rays for every pixel... Or maybe simply doing a straight x-axis RGB channel offset on the source volume texture would work, since further-away voxels would appear to have a smaller channel offset due to perspective. Hmm... don't have time to try it, unfortunately...

There's numerous ways to do it I think, without rendering twice. Whatever way, you'll end up casting 2 rays or rendering twice, but it's possible to avoid the huge amount of duplication and set up that QC needs for 2 renderers I think. Casting 2 rays per pixel certainly makes sense.

Then again, I'd still do duplicate renderers :) Well, if performance isn't absolutely critical at least. Reason being that it gives you separate left/right images, which you can use for pretty much anything. If you render out with combined red/blue, you're stuck with that. Also, youtube's 3d mode requires separate left/right images - provide that, and you just select your glasses type (or cross eyed, or even disabled) and it'll just play.

Shifting the object a bit on x does work, you'll get a 3d effect, but rotating about y gives much better results. If you shift on x, you'll get a light 3d effect but the object will either be far behind or far in front of the screen. If you rotate on y (left image by say -5, right by +5) then you get a strong 3d effect, with the origin of the rotation being level with the screen (i.e. if you rotate about the centre of the object, the object will be embedded in the screen. If you rotate about the back end, the object will be in front of it.)

Just remember to rotate in screen space, not object space like I did for the video before ;) I.e. rotate your object however you want, and THEN put the whole thing in a 3d transform and do only the y rotation.
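That screen-space eye rotation can be sketched in a few lines (a minimal Python stand-in, using the ±5 degrees suggested above):

```python
import math

def rot_y(deg):
    """Return a function rotating a point (x, y, z) about the y axis."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return lambda p: (c * p[0] + s * p[2], p[1], -s * p[0] + c * p[2])

# Eye rotation applied AFTER the object's own transform (screen space):
left_eye, right_eye = rot_y(-5), rot_y(+5)

p = (0.0, 0.0, 1.0)  # a point the object transform has already produced
print(left_eye(p))   # shifted slightly left of centre
print(right_eye(p))  # shifted slightly right
```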

psonice's picture
Re: Rendering Live Video as 3D Volume

Yeah, I'm forever lacking time :( Do let us know how the raycasting goes though. If it runs pretty fast, try adding reflections + shadows :D

toneburst's picture
Re: Rendering Live Video as 3D Volume

psonice wrote:
Yeah, I'm forever lacking time :( Do let us know how the raycasting goes though. If it runs pretty fast, try adding reflections + shadows :D

That's a bit ambitious for realtime operation, I suspect. As I understand it, both involve casting lots of additional rays: from each point on a surface towards the light position for shadows, and, for reflections, allowing rays to bounce off objects in the scene. I've seen demos of this kind of thing running in realtime, but only with algorithmically-generated forms in the scene (the usual spheres, cubes, cones and checkerboard planes - you know the kind of thing). I think the problem comes when you're raytracing a texture, and have to do texture lookups every time you increment a ray. Adding more features inevitably means more lookups, and you can easily get into a situation where you're doing hundreds of lookups per output pixel. Which isn't going to be realtime :(
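The lookup-count worry is easy to put numbers on. A back-of-envelope sketch (all figures are illustrative, not measurements from any composition; treating every march step as also firing a shadow ray is a pessimistic upper bound):

```python
# One primary ray per pixel, a fixed number of march steps per ray, and
# an extra shadow ray per sample multiplies out fast.

width, height = 640, 480
steps_per_ray = 100        # volume samples along each primary ray
shadow_rays = 1            # extra ray per sample towards the light

lookups = width * height * steps_per_ray * (1 + shadow_rays)
print(f"{lookups:,} texture lookups per frame")  # 61,440,000
```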

I've got (or rather stolen) some ideas for cool rendering options though. Just gotta try out normal-generation, and see if that brings the whole thing to a juddering halt.

a|x

dust's picture
Re: Rendering Live Video as 3D Volume

Some cool stuff in here. Has anyone figured out the OpenCL converter-to-3D-image-buffer patch yet?

Here are a couple of files, kind of like the time cube. One is static and the other is animated. Nothing fancy like ray tracing, just some slices stacked.

I don't know if I would consider these volumetric, although they give a pretty good illusion. I'm thinking I might try the tb_med scan approach and make a 10 x 10 grid. The iterator kind of sucks when trying to mask an image inside of it.

Attachments: CT_VolumeStatic.qtz (746.77 KB), CT_VolumeCycleLoop.qtz (748.08 KB)

toneburst's picture
Re: Rendering Live Video as 3D Volume

They're very nice!

I didn't know an Image Import patch could import an image sequence and output a structure. That's very cool. Is that a new Snow Leopard feature?

I used the 10x10 grid again for this one, in fact. This time, though, I used a GLSL shader, and raycasting, using each cell of the 'sprite-sheet' as a z-axis slice. Works pretty well, with some simple maths to ensure interpolation on all 3 axes.

The main advantage of the sprite-sheet approach is that you can apply some kind of CIFilter or GLSL shader to the entire sprite-sheet. When you're using Iterators, doing any kind of image filtering slows things to a crawl very quickly.

a|x

offonoll's picture
Re: Rendering Live Video as 3D Volume 'layer == structure??'

Woww! So is that a PDF or PSD layer file which creates a structure of images? So, the same thing as the CoGePSDLayers plugin?

vade's picture
Re: Rendering Live Video as 3D Volume

You might be able to do anaglyph rendering with a shader by reading the fragment coord and depth and doing some math to compute red and blue fragments... Hm.

I did some fun stereoscopic rendering at a friend's house using Max/MSP/Jitter a while ago, rendering to two projectors onto a polarized setup. We used a brute-force render-the-scene-twice technique, but the effect was really, really convincing.
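For the red/blue idea, the per-pixel merge itself is simple. A minimal sketch, assuming standard red/cyan glasses (the channel assignment is the usual convention, not anything from the patch):

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Merge left/right eye colours into one red/cyan anaglyph pixel:
    red comes from the left eye, green and blue from the right."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])

# A pure-red left pixel and pure-cyan right pixel merge to white:
print(anaglyph_pixel((1.0, 0.0, 0.0), (0.0, 1.0, 1.0)))  # (1.0, 1.0, 1.0)
```

The hard part is the bit vade mentions: producing the two eye images in the first place, whether by rendering twice or by casting two rays per pixel.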

toneburst's picture
Re: Rendering Live Video as 3D Volume

Quote:
Well, I've been doing obj-c plenty but without really learning C properly. That's got me going just fine on the whole, but sometimes (especially as you start poking a bit deeper) you find there's something you need that only has a C api. At that point, I tend to be running on guesswork and a prayer ;) So learning C fully seems a good idea just now.

Gotcha.

Quote:
Yeah, somehow apple stuff really holds its value. A 3 year old macbook pro will cost more than a brand new pc laptop.. crazy. But excellent if you have 2 to sell! It should buy a huge TV :D Perhaps even a 3d one? Some of the newer screens support 3d I believe.

I'd never get that past The Mrs. Something smallish, I think, but full HD, and preferably LED-backlit.

Quote:
Bad news about the break-in though, there's nothing worse than coming home to that :(

Yeah, it wasn't nice. I think we've been quite lucky it hasn't happened sooner, though. The most upsetting things lost were the photos from a weekend in Paris (which were on the memory card that was still in the camera that was stolen), and my passport (which was in the bag the thief stole to carry the other stuff in). The passport had all my visas and stamps from all the places I'd been on holiday, like Iran, Mongolia, Korea, Japan etc.

I did quite well with the claim though. The two laptops replaced one I was about to sell on eBay (it was literally in its box, ready to go), and one I'd rescued from work because it belonged to a colleague of mine, who'd basically trashed it. Plus, I got a new replacement for a macro lens I'd bought secondhand, and a camera that wasn't the subject of a product recall, like the one it replaced. So, all in all, not a bad result. Having said that, I've been paying insurance premiums for years, so I basically paid for all the new stuff, anyway. And, presumably, now we've been burgled, our premiums are going to go through the roof.

Quote:
There's numerous ways to do it I think, without rendering twice. Whatever way, you'll end up casting 2 rays or rendering twice, but it's possible to avoid the huge amount of duplication and set up that QC needs for 2 renderers I think. Casting 2 rays per pixel certainly makes sense.

Then again, I'd still do duplicate renderers :) Well, if performance isn't absolutely critical at least. Reason being that it gives you separate left/right images, which you can use for pretty much anything. If you render out with combined red/blue, you're stuck with that. Also, youtube's 3d mode requires separate left/right images - provide that, and you just select your glasses type (or cross eyed, or even disabled) and it'll just play.

Shifting the object a bit on x does work, you'll get a 3d effect, but rotating about y gives much better results. If you shift on x, you'll get a light 3d effect but the object will either be far behind or far in front of the screen. If you rotate on y (left image by say -5, right by +5) then you get a strong 3d effect, with the origin of the rotation being level with the screen (i.e. if you rotate about the centre of the object, the object will be embedded in the screen. If you rotate about the back end, the object will be in front of it.)

Just remember to rotate in screen space, not object space like I did for the video before ;) I.e. rotate your object however you want, and THEN put the whole thing in a 3d transform and do only the y rotation.

I wasn't aware of YouTube 3D. I will have to look into that. I have a 3D lens for my dSLR which I keep meaning to try shooting some video with.

I also didn't know about the rotation method of making anaglyphs. Do you not have to move the two versions at all on the x-axis, then? I'm a little surprised this works, but I guess what you're doing is essentially exaggerating the perspective effect, so it sort of makes sense. I'll look into that (another thing to add to the list).

a|x

psonice's picture
Re: Rendering Live Video as 3D Volume

toneburst wrote:
Yeah, it wasn't nice. I think we've been quite lucky it hasn't happened sooner, though. The most upsetting things lost were the photos from a weekend in Paris (which were on the memory card that was still in the camera that was stolen), and my passport (which was in the bag the thief stole to carry the other stuff in). The passport had all my visas and stamps from all the places I'd been on holiday, like Iran, Mongolia, Korea, Japan etc.

I did quite well with the claim though. The two laptops replaced one I was about to sell on eBay (it was literally in its box, ready to go), and one I'd rescued from work because it belonged to a colleague of mine, who'd basically trashed it. Plus, I got a new replacement for a macro lens I'd bought secondhand, and a camera that wasn't the subject of a product recall, like the one it replaced. So, all in all, not a bad result. Having said that, I've been paying insurance premiums for years, so I basically paid for all the new stuff, anyway. And, presumably, now we've been burgled, our premiums are going to go through the roof.

Yeah, the insurance definitely helps. Last break-in I had, the main thing stolen was my CD collection, so I got to renew all my favourite disks and replace the duff ones with something fresh. On the downside, that happened while we were away on our honeymoon, not the best of wedding presents ;(

Real downer that they took your passport though.

Quote:
I wasn't aware of YouTube 3D. I will have to look into that. I have a 3D lens for my dSLR which I keep meaning to try shooting some video with.

I also didn't know about the rotation method of making anaglyphs. Do you not have to move the two versions at all on the x-axis, then? I'm a little surprised this works, but I guess what you're doing is essentially exaggerating the perspective effect, so it sort of makes sense. I'll look into that (another thing to add to the list).

YouTube 3D is pretty cool - you upload one video with side-by-side images (I render 2 squashed billboards in QC, and output at 1280x720; YouTube then puts out 1280x720 anaglyph, or various other 3D formats). I think that's a great feature, and I'm going to do a lot more stuff in 3D from now on :)

How does the 3D lens for dSLR work though?! I just can't figure out how it can work with one sensor.. does it have 2 lenses with coloured filters so it's pre-merged or something?

The 'rotation method' - I don't know, maybe it's not the best way? I've honestly never bothered to find out the 'right' way, because rotation just seemed more obvious :)

If you think about it, your eyes don't stare straight ahead in parallel, they cross so that you can focus on some point in space. I used to do this in 3d packages by setting 2 cameras, a certain distance apart on x, but both pointing at a point in space. Working backwards (since we've only got the one camera in QC), you just rotate the scene about y, the origin being the point of focus. Maybe that's actually wrong, I'm interested to know now :)
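That toe-in idea can be sketched numerically. A minimal Python version, using the 5-degree angle mentioned earlier as the example value, rotates a scene point about y once per eye:

```python
import math

def toe_in_pair(point, angle_deg=5.0):
    """Return the left- and right-eye versions of a scene point using the
    toe-in method: rotate the whole scene about the y axis by -angle and
    +angle, with the rotation origin at the point of focus (here, the
    world origin)."""
    def rot_y(p, deg):
        a = math.radians(deg)
        x, y, z = p
        return (x * math.cos(a) + z * math.sin(a),
                y,
                -x * math.sin(a) + z * math.cos(a))
    return rot_y(point, -angle_deg), rot_y(point, angle_deg)

# A point behind the focus plane lands at different screen x positions
# for each eye -- that horizontal disparity is the 3D effect:
left, right = toe_in_pair((0.0, 0.0, -1.0))
print(left[0], right[0])
```

A point at the rotation origin gets zero disparity, which is exactly the "embedded in the screen" behaviour described above.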

dust's picture
Re: Rendering Live Video as 3D Volume 'layer == structure??'

I didn't think of the CoGe plugin. I made a Photoshop action script that duplicated the CT scan and added it to an alpha channel, then added a transparent background, and finally erased the masked alpha from the image and saved as a .png.

Used with an Image Downloader, that would essentially be the same as CoGe's plug-in. I seemed to get better results just using Add as a blend mode, so I took all those PNGs I made and added them to a PDF to make a structure out of, just to save space.

I like the PDF format because, with vector graphics and type, they come in with transparent backgrounds that you can iterate through, and you can use alpha-channel PSD files with it as well. I think in QC4 they have replaced the Image Downloader patch and the PDF patch with just an Image patch.