Kinect + Image Stitching + Mesh?

mattgolsen

Has anyone successfully stitched video input from multiple Kinects together and generated a mesh from the result?

Alternatively, has anyone generated a mesh from the depth image?

gtoledo3
Re: Kinect + Image Stitching + Mesh?

Check the repository for the OpenCL depth kernel w/ texture I posted. There's also another example I made that clips near and far (I think that's up there?).

How do you need to stitch them? Visually, or does the output structure need to be joined?

mattgolsen
Re: Kinect + Image Stitching + Mesh?

I'll definitely check those comps out; I think I tried them previously on Lion without much luck, though.

I was thinking it would probably be easiest to stitch the image itself and then generate a mesh from that. Ideally, though, joining the output structures together would be best.

mattgolsen
Re: Kinect + Image Stitching + Mesh?

Yeah, the OpenCL kernel in that comp definitely fails. I'm going to poke around in it and see if I can figure out why, which ought to be interesting since I don't know anything about OpenCL :D

gtoledo3
Re: Kinect + Image Stitching + Mesh?

See if it's named "main". If so, change it to "depthkernel" or something.
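Just to illustrate the kind of rename I mean, here's a bare-bones sketch; the arguments and body are made up for the example (the kernel in the actual comp is different):

// Before (fails on some setups):  __kernel void main( ... )
// After: any distinct name works, e.g. "depthkernel".
__kernel void depthkernel(__read_only image2d_t depth,
                          __write_only image2d_t outImage)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;
    int2 pos = (int2)(get_global_id(0), get_global_id(1));

    // trivial pass-through; the real kernel does the depth work
    float4 d = read_imagef(depth, smp, pos);
    write_imagef(outImage, pos, d);
}

The only part that matters for this problem is the kernel name on the first line.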

If that doesn't work, you're probably screwed, and your GPU isn't going to do it w/ OpenCL, or the Mesh Creator, period.

gtoledo3
Re: Kinect + Image Stitching + Mesh?

This doesn't work in Lion?

It should. It's written to spec (kinda unlike the craploads of other macros Apple still has floating around inside of QC on Lion :-) ... which probably shows how much commitment there is to making the OpenCL/Mesh Creator stuff work correctly, if they can't even get renamed in the course of two-ish years).

I was pretty hot on OpenCL, and still am, but Apple doesn't seem really committed to having it work right, and you may be opening up Pandora's box for yourself. ALWAYS test OpenCL stuff on a given GPU, because whether it works or not is totally willy-nilly across Apple's product line.

Attachment: Kinect_Read Depth w Floor & Wall_gt.qtz (25.22 KB)

mattgolsen
Re: Kinect + Image Stitching + Mesh?

Ah, I was using the wrong one; this works perfectly.

Any ideas on an approach to stitching?

gtoledo3
Re: Kinect + Image Stitching + Mesh?

Here's a download link for a "GL Clip Plane Environment": https://www.box.net/shared/jqytgeb6qth6djxsb7st

This lets you call any of the six GL clip planes that tend to be supported on Mac GPUs. (I did a search across Mac GPUs, and while I remember that no more than six are supported on any of them, I can't remember whether all of them support clip planes.)

This is alpha grade, but shouldn't really have any problems. It may do weird stuff with Lighting, if Lighting's shadows are used, or something like that. I haven't tested it with GLSL at all.

Think of each "plane" as an invisible sprite that clips stuff out, and the ax, ay, az values as working in a pitch/roll/yaw sort of way, with W being the weight/"how much".
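(If you want the underlying math: a GL clip plane is just the half-space test ax*x + ay*y + az*z + W >= 0 in eye coordinates, so ax/ay/az act as the plane's normal and W slides the plane along that normal; anything on the negative side gets clipped.)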

You'll see the plugin, and a test qtz that shows one plane clipping z, and another plane clipping x from left to right. I have not tested it in Lion at all.

That may help you with the mesh stitching.

You could do something like feeding a third texture into the CL kernel and using its color to fade right/left/up/down. You could also do some clipping/discard stuff in GLSL based on pixel position, color, vertex position, etc., to get rid of mesh you don't need.
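For the third-texture idea, here's a rough sketch of what that could look like in a CL kernel; the names and ports are hypothetical, not taken from my comp. It just fades between two depth images using a grayscale mask:

// Hypothetical: fade between two depth textures using a third "mask" texture.
__kernel void maskedDepth(__read_only  image2d_t depthA,
                          __read_only  image2d_t depthB,
                          __read_only  image2d_t mask,      // grayscale fade map
                          __write_only image2d_t outDepth)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;
    int2 pos = (int2)(get_global_id(0), get_global_id(1));

    float a = read_imagef(depthA, smp, pos).x;
    float b = read_imagef(depthB, smp, pos).x;
    float t = read_imagef(mask,   smp, pos).x;   // 0 = all A, 1 = all B

    write_imagef(outDepth, pos, (float4)(mix(a, b, t), 0.0f, 0.0f, 1.0f));
}

The GLSL version of the same idea would be sampling the mask in a fragment shader and calling discard below some threshold.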

You could also just render all of your texture outputs into a Render In Image (RII), do the compositing there, and then make your mesh, but you should test whether the bit depth gets decimated (if that matters to you).

mattgolsen
Re: Kinect + Image Stitching + Mesh?

Crap, this is harder than I thought.

gtoledo3
Re: Kinect + Image Stitching + Mesh?

Yeah.

What are you actually trying to get, though? I'm fuzzy on it.

Are you trying to get one output structure of all the vertex/normal/color data, or are you trying to just join them graphically? Do you need to chop stuff out from the sides or not? (This is why I posted the clip plane thing; if I'm lazy, that's what I'd use.)

Here's what I think you should do if you need the mesh info joined rather than kept distinct (I was thinking of keeping each pipeline distinct and then chopping the meshes so they sit side by side, but maybe that's stupid, or just weird... it makes sense to me, but maybe it's not as great for someone else).

Just take each depth output and place them side by side in a Render In Image, making sure you're drawing pixel for pixel and that the RII is locked to exactly the right resolution. Then feed that to the original kernel setup.

Now you have one big mesh.
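If it helps to picture the kernel end of that step, here's a hypothetical sketch; the argument names, scaling, and vertex layout are assumptions, not the actual kernel from the posted comp. It reads the combined depth image and writes one vertex per pixel, pushing z out by the depth value:

// Hypothetical: turn a stitched, side-by-side depth image into a vertex grid.
__kernel void depthToMesh(__read_only image2d_t depth,
                          float zScale,
                          __global float4 *vertices)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;
    int x = (int)get_global_id(0);
    int y = (int)get_global_id(1);
    int w = get_image_width(depth);
    int h = get_image_height(depth);

    float d = read_imagef(depth, smp, (int2)(x, y)).x;

    // map pixel coords to a -1..1 grid and push z out by the depth value
    vertices[y * w + x] = (float4)(2.0f * x / (float)w - 1.0f,
                                   2.0f * y / (float)h - 1.0f,
                                   d * zScale,
                                   1.0f);
}

That vertex buffer (plus whatever normals/colors you add) is what you'd hand to the Mesh Creator.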

I think you'll probably want to force 32-bit rendering. You'll want to investigate the Billboard settings, especially the pixel-accurate one, and see how that works for you. You'll also want to check how color correction skews results, and pay special attention to whether Core Image or third-party plugins freak out when you run the depth image through them, if you do that before rendering to the Billboard.

You'll also want to look into Vade's Kinect plugin, because I think it outputs a higher-quality depth image, but I don't think it will do multiple IDs :-/ It would be really awesome if that got an update and/or the code was open-sourced, but I haven't mentioned it to vade (haven't thought of it/had time), so I'm not complaining at all.

...and...

I don't know if this is a for-fun or a for-profit thing, but keep in mind that as much as people are freaking out about the Kinect, and as much as you can do with it given the right setup/control of the environment and going over the top to bulletproof stuff... it's made to work in a living room. Always do yourself the favor of testing excessively (shining lights at it, walking up close to it, and whatnot) so you're well aware of how it can flake out.

mattgolsen
Re: Kinect + Image Stitching + Mesh?

It's entirely for fun. I was basically inspired by this video I saw forever ago: http://www.youtube.com/watch?v=5-w7UXCAUJE

Joining them just graphically would work, but I thought it would be interesting if a mesh could be built from it, making a pseudo 3D scanner. Then I had an idea to use 8 or 10 of them at once on a crowd and generate a live mesh of the entire audience, with effects driven from that. Mostly just a flight-of-fancy thing, really.

I attempted to use Vade's, and while it does have an input for Device ID, it won't address multiple Kinects. It either crashes QC or, in my exciting case earlier when I tried to use two of them in conjunction with your OpenCL comp, causes my Lion install to straight-up kernel panic :D

mattgolsen
Re: Kinect + Image Stitching + Mesh?

Additionally, is it even possible to save out a mesh that's been created in QC? I don't think I've seen anything that does this.