KinectTools v0.1

waxtastic's picture

First of all, thanks.

Some issues:

  • I'm getting the RGB image, but it's flipped vertically
  • No depth image
  • Pressing Stop and then Run gives me the beachball, and I need to force quit

Is the SkankySDK Framework available somewhere? I would like to mess with the source of this one.

dust's picture
Re: KinectTools v0.1

I don't know where the skank is on kineme anymore; there used to be a link on the homepage. I'm attaching an Xcode project template for you to make projects a la skank. This template was the latest release (Leopard), but it still seems to build fine in SL (see pic). Welcome to the dark side, wax ;)

I can't wait to try out my Kinect even if there are some issues with it. This thing gives me an excuse to get an Xbox; I'm sure my 3-year-old will love Kinect, she is addicted to iOS gaming. I wanted to get just the camera, but it's all sold out online, so I have the last one on hold at the department store right now. I'm talking myself into spending a few extra bucks to get the console.

PS:

To Kineme: if you guys no longer want this SDK to be available, I'm sorry for posting it; feel free to nuke this comment.

gtoledo3's picture
Re: KinectTools v0.1

Is there no image coming out of the depth port at all, or just black or something? (Sorry, I have no clue about how to help, just curious about the resolution.)

waxtastic's picture
Re: KinectTools v0.1

I had some email exchange about this with Steve. He said: "I am not applying the transfer function included with the original code, so you're getting the raw output."

So actually there is an image, but it looks completely black. If I add some Color Controls and Gamma Adjust patches and turn their settings all the way up, I get something. This depth image is also flipped vertically.
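
In shader terms that's basically just a big gain plus a gamma lift. A rough GLSL equivalent of maxing out those patches might look like the sketch below; the uniform names and example values are purely illustrative, not the actual patch settings.

// Rough equivalent of stacking Color Controls + Gamma Adjust and turning them
// all the way up: a large gain followed by a strong gamma lift, so the nearly
// black raw depth becomes visible. Names and values here are illustrative only.
uniform sampler2D depthImage;
uniform float gain;    // e.g. 30.0
uniform float gamma;   // e.g. 0.3

void main()
{
    vec3 c = texture2D(depthImage, gl_TexCoord[0].xy).rgb;
    vec3 boosted = pow(clamp(c * gain, 0.0, 1.0), vec3(gamma));
    gl_FragColor = vec4(boosted, 1.0);
}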

gtoledo3's picture
Re: KinectTools v0.1

This may be really off base, but I seem to remember some reference to Bayer filtering being needed, or being part of the processing on the depth channel. Hmm. I guess it will require some poking around.

dust's picture
Re: KinectTools v0.1

Well, I have seen some promising things on the internet regarding the Kinect, so I'm joining the party. I'll try the plug soon and hopefully have some results.

waxtastic's picture
Re: KinectTools v0.1

Thanks, dust. I actually found that on the site, but I guess Kineme is using some updated framework for development that isn't available.

Oh well, I probably wouldn't be of much help anyway, even if I could compile it.

waxtastic's picture
Re: KinectTools v0.1

Bayer-to-RGB conversion is needed for the RGB image. The depth sensor would not have a Bayer filter, as it produces a grayscale image.
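
For illustration only, a quick-and-dirty half-resolution demosaic could look something like the GLSL sketch below. It assumes an RGGB cell layout and that the raw frame arrives as a single-channel texture; the Kinect's actual Bayer layout and packing may well differ.

// Box demosaic sketch: collapse each 2x2 RGGB Bayer cell into one RGB value.
// Assumes the raw frame is a single-channel texture and "texelSize" is
// 1.0 / image dimensions. The real sensor layout may differ.
uniform sampler2D bayerTex;
uniform vec2 texelSize;

void main()
{
    // Centre of the top-left texel of the 2x2 cell this fragment falls into.
    vec2 cell = floor(gl_TexCoord[0].xy / (2.0 * texelSize)) * 2.0 * texelSize
              + 0.5 * texelSize;

    float r  = texture2D(bayerTex, cell).r;                                  // R site
    float g1 = texture2D(bayerTex, cell + vec2(texelSize.x, 0.0)).r;         // G site
    float g2 = texture2D(bayerTex, cell + vec2(0.0, texelSize.y)).r;         // G site
    float b  = texture2D(bayerTex, cell + vec2(texelSize.x, texelSize.y)).r; // B site

    gl_FragColor = vec4(r, 0.5 * (g1 + g2), b, 1.0);
}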

cwright's picture
Re: KinectTools v0.1

This depends on how the data is presented. It could present data in a way that needs something similar to Bayer reconstruction (since it's 11-bit or 12-bit data, some sort of swizzling will probably be necessary).
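
For example, if the raw 11-bit samples came through packed as two 8-bit channels (low byte plus high bits; purely an assumption about the packing here), the reconstruction would be roughly:

// Hypothetical unpack: low byte in .r, high bits in .g of a two-channel texture.
// This is only a guess at the packing; the real layout may be different.
uniform sampler2D rawDepth;

void main()
{
    vec2 raw = texture2D(rawDepth, gl_TexCoord[0].xy).rg;   // channels come in as 0..1
    float depth11 = raw.r * 255.0 + raw.g * 255.0 * 256.0;  // recombine to roughly 0..2047
    float d = min(depth11 / 2047.0, 1.0);                   // normalize for display
    gl_FragColor = vec4(vec3(d), 1.0);
}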

stuart's picture
Re: KinectTools v0.1

I am also getting the same results: both images are flipped, and the depth appears black until some sort of gamma/brightness function is applied to it.

One thought: perhaps they are using a CLUT (color lookup table) to translate the (nearly black) shades of gray into different depth readings.

Although when I blast the brightness, I only appear to be getting two colors out of it: black and one shade of grey.
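
The remap itself would be trivial in a shader; something like the sketch below, where "clut" is any Nx1 gradient image used as a lookup table (the names are illustrative):

// CLUT sketch: index a small ramp texture with the raw depth grey value.
// "clut" is any Nx1 gradient image; use whichever channel the depth lands on.
uniform sampler2D depthMap;
uniform sampler2D clut;

void main()
{
    float d = texture2D(depthMap, gl_TexCoord[0].xy).r;
    gl_FragColor = texture2D(clut, vec2(d, 0.5));
}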

Anyways, bought mine today. Didn't think the store would have them in stock, but I just had to ask and they had them behind the counter.

Thanks for diving straight into this one, Steve.

smokris's picture
Re: KinectTools v0.1

A Kinect is scheduled to arrive here in Athens by Wednesday or Thursday (so then I'll be able to test this on actual hardware). I'll try to post a new version, addressing these issues, shortly thereafter.

stuart's picture
Re: KinectTools v0.1

Just figured something out: I put an "NI Invert Alpha" on the depth image and I'm getting more shades of gray now. They are still not totally usable, but it's better than where I was a few minutes ago.

I believe, though, that this basically proves the depth data is showing up on the alpha channel, not the RGB channels.

Now if I can come up with a CLUT or something else to remap the shades of gray around, we should be cooking with gas.

stuart's picture
Re: KinectTools v0.1

Oops, lots of objects in my patch. It was the "NI Alpha Channel" that got me seeing some grayscale.

stuart's picture
Re: KinectTools v0.1

Here's a picture and a test composition, flipped accordingly.

So currently black is the closest depth that can be seen. The further away, the less dark it becomes, until a certain grayscale level, which I guess is 50%; that level should be considered black, or invisible.

Steve, it's as if they are using 128 steps of depth, not 256.

Attachments: Me and Sadie.png (519.22 KB), Kinect Test.qtz (3.3 KB)

gtoledo3's picture
Re: KinectTools v0.1

Stu, check this link out:

http://kineme.net/Release/Production/MakingMostDepthmapswReadPixels

It may not result in any forward steps at all, but I remember that yanamono posted some functions that handled correcting the depth buffer with the old GL Read Pixels patch. Off the top of my head, that's where I would look, given the sort of close results you're getting right now.

Nice "seeing you" ;)

dust's picture
Re: KinectTools v0.1

OK, I'm joining the party now. This thing is pretty sweet; I got the only one in the store. It says "do not sell" on the box, but it seems to work well with Theo's demo. I got down to one error with Memo's demo and finally gave up fighting ofx SVN nightmares. The Kineme Kinect plug beachballs me when I try to view the depth image ;)

I'll have a look at the code later; I'm burnt out. My daughter decided to get up at 4am to play games on the iPad, so I'm lacking focus at the moment, but I did manage to get a rudimentary plugin working from Theo's demo. It's just a test at the moment, but if anybody feels inclined to build this while they wait for the Kineme plug to arrive, go for it. The compiled plug is too big to post here, so I'm posting the source. This is by no means a polished plug; I just wanted to play with the Kinect in QC, so don't expect it to be very high-res, etc.

So if you want to play while you wait, build this plugin, and don't change any settings, as the ofx-to-QC plugin bridge isn't very stable. I mean it: don't change any settings. Keep it i386, 32-bit, 10.6, and don't use "build and copy"; just build the plugin and manually move it to your Graphics folder, or double-click it and let KinemeCore do its magic. Make sure to run QC in 32-bit mode, then use the depth map, RGB, and blob images however you like.

I made an input to do some plugin processing with the blob image, but I'm not sure if it's working or not; too tired to test. It does produce a depth image, though. The key code input is "d" to subtract the background, and + or - to threshold the blob image, etc., but that's all easy enough to do in QC.

I'll report back when I get a chance to check out the Kineme source.

Attachment: dKinect.zip (466.67 KB)

stuart's picture
Re: KinectTools v0.1

I think you're really going to like this thing, Steve.

I'm curious, as the discovery process goes, whether the joint and skeletal recognition stuff will be found in the built-in hardware or in the Xbox OS. Sure hope it's in the box.

I couldn't let this thing sit without getting a depth map out of it tonight. I finally found the color map object (CLUT) and was able to get it ranged in better. It's still not optimal, but it's usable.

I decided to make all the game controllers face each other down; I could feel the tension in the room. As seen in the picture.

Here's what I ended up with at the end of the night. I had to use two color maps to tailor the grayscale enough to give me what I wanted to see.

The sensor doesn't like anything closer than 1 1/2 feet; it shows up as black. The IR shadows show up as black as well.

I love it: a $150 depth camera.

Released and hacked and running in QC in less than 2 weeks. Lovely.

Thanks All

Attachments: Wii Sony Xbox.png (872.87 KB), Kinect Test 3.qtz (37.86 KB)

psonice's picture
Re: KinectTools v0.1

Looking good!

I have a comp somewhere that will render this on a mesh using the depth map + colour image; I'll dig it out at lunchtime and post it up.

stuart's picture
Re: KinectTools v0.1

Like a 3D Rutt-Etra?

monobrau's picture
Re: KinectTools v0.1

Nice! Going to get mine soon... Just wondering, what framerate do you get?

psonice's picture
Re: KinectTools v0.1

Kind of, yeah.

Connect the colour output to the 'colour texture' input, and the depth map to the depth input. I've hooked it up to a video input so you can see if it's working ok.

If you're not getting any depth, check which channel the depth map is on; if it's alpha, you'll need to change the line in the vertex shader to:

float d = texture2D(depthMap, gl_TexCoord[0].xy).a;
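
For context, the rest of that vertex shader is roughly along these lines. This is only a sketch with illustrative names ("depthMap", "depthScale"), not the exact code in the attached comp:

// Minimal depth-displacement vertex shader sketch (illustrative; the shader
// in the attached comp differs in detail).
uniform sampler2D depthMap;
uniform float depthScale;   // how far to push vertices out, e.g. 1.0

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;

    // Sample the depth for this vertex; switch .r to .a if the depth arrives
    // on the alpha channel, as noted above.
    float d = texture2D(depthMap, gl_TexCoord[0].xy).r;

    // Displace the grid vertex along z by the sampled depth.
    vec4 displaced = gl_Vertex + vec4(0.0, 0.0, d * depthScale, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}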

Attachment: DepthImage to Mesh.qtz (8.76 KB)

gtoledo3's picture
Re: KinectTools v0.1

psonice, what is that signal/mesh res supposed to work around? Is it your intention to have the screen be black until someone changes res... that's what it's doing on my system, but I don't think that's what is supposed to be happening (?).

That's pretty interesting about the two rounds of color mapping, Stu!

@psonice: In looking at this... it has me thinking about how to eliminate the fringe area on an extrusion. I'm starting to note it's pretty crucial (or maybe it just feels that way to me.)

This is a pic from a depth map test I did when I was messing, where OpenCL is used instead of GLSL. Because of the depth map being rendered as mesh, you can add shadows. This is nothing original, just simple white for z, like the shader you posted.

The generation of correct UV's for retexturing was giving me a headache with the kernel wanting to flip out and do real fun stuff to my computer as I was perfecting the UV setup (still haven't). Kinda waiting to get my hands on one before I mess around with any more of the processing code.

Looking forward to Steve doing the "rebooted" version as well!

Attachment: depth 1.png (106.63 KB)

psonice's picture
Re: KinectTools v0.1

Quote:
psonice, what is that signal/mesh res supposed to work around? Is it your intention to have the screen be black until someone changes res... that's what it's doing on my system, but I don't think that's what is supposed to be happening (?).

Yeah, that. It's supposed to fix an apparent QC bug; it worked on my setup but apparently not on yours. The bug seems to be that with this setup, the initial value of the mesh resolution is either set to zero or ignored, so you get a zero-polygon mesh, which you can't see. As soon as you change the mesh res, it appears.

I don't know why that happens, but the intention was to change it briefly after 0.5 seconds automatically so it fixes itself.

So yeah, if you get a black screen, change the mesh resolution. I've not got time to figure the specifics out and get a bug report in just now, so if anyone else puts one in let me know. Otherwise I'll get around to it at some point.

Quote:
@psonice: In looking at this... it has me thinking about how to eliminate the fringe area on an extrusion. I'm starting to note it's pretty crucial (or maybe it just feels that way to me.)

Do you mean the "sides" of the mesh, where it jumps between a foreground object + a background one? It's a little tricky.. the easy way is to sample the depth map in the pixel shader, sampling 3-4 pixels around the current texel and determining the gradient. If the gradient is beyond a certain level, render transparent.
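
A rough sketch of that idea (the names here are illustrative, not from my comp):

// "Transparent sides" sketch: estimate the depth gradient from neighbouring
// texels and discard fragments where it is too steep, i.e. the stretched wall
// between a foreground object and the background.
uniform sampler2D depthMap;
uniform vec2 texelSize;       // 1.0 / depth image dimensions
uniform float edgeThreshold;  // tune to taste, e.g. 0.05

void main()
{
    vec2 uv  = gl_TexCoord[0].xy;
    float c  = texture2D(depthMap, uv).r;
    float dx = texture2D(depthMap, uv + vec2(texelSize.x, 0.0)).r - c;
    float dy = texture2D(depthMap, uv + vec2(0.0, texelSize.y)).r - c;

    if (length(vec2(dx, dy)) > edgeThreshold)
        discard;   // steep jump in depth: don't draw the "side"

    gl_FragColor = vec4(vec3(c), 1.0);
}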

I've got another version of this shader somewhere that does a similar method, but doesn't make the sides transparent - instead it creates a 'bulge' in the object so the sides are concave and hidden from sight. I'll see if I can knock together a 'transparent sides' version though.

Quote:
This is a pic from a depth map test I did when I was messing, where OpenCL is used instead of GLSL. Because of the depth map being rendered as mesh, you can add shadows. This is nothing original, just simple white for z, like the shader you posted. The generation of correct UV's for retexturing was giving me a headache with the kernel wanting to flip out and do real fun stuff to my computer as I was perfecting the UV setup (still haven't). Kinda waiting to get my hands on one before I mess around with any more of the processing code.

That looks pretty good. Why do you need to generate the UV coords though? If you're generating the mesh from a depth map, and the depth map is generated from the same camera data you're going to texture the mesh with, then the texture coords for the colour texture are the same as the texture coords you're sampling from to generate the mesh.

Or in other words, the .xy coords of each vertex are the same as the uv.xy coords. Or am I wrong there?

gtoledo3's picture
Re: KinectTools v0.1

psonice wrote:
Quote:
psonice, what is that signal/mesh res supposed to work around? Is it your intention to have the screen be black until someone changes res... that's what it's doing on my system, but I don't think that's what is supposed to be happening (?).

Yeah, that. It's supposed to fix an apparent QC bug; it worked on my setup but apparently not on yours. The bug seems to be that with this setup, the initial value of the mesh resolution is either set to zero or ignored, so you get a zero-polygon mesh, which you can't see. As soon as you change the mesh res, it appears.

I don't know why that happens, but the intention was to change it briefly after 0.5 seconds automatically so it fixes itself.

So yeah, if you get a black screen, change the mesh resolution. I've not got time to figure the specifics out and get a bug report in just now, so if anyone else puts one in let me know. Otherwise I'll get around to it at some point.

I seem to remember some Leopard bug like that now that you mention it (?). Ok, gotcha. Apparently in SL you don't have to do that, and it makes it fart out. It was pretty obvious though, not a biggie.

psonice wrote:
Quote:
@psonice: In looking at this... it has me thinking about how to eliminate the fringe area on an extrusion. I'm starting to note it's pretty crucial (or maybe it just feels that way to me.)

Do you mean the "sides" of the mesh, where it jumps between a foreground object + a background one? It's a little tricky.. the easy way is to sample the depth map in the pixel shader, sampling 3-4 pixels around the current texel and determining the gradient. If the gradient is beyond a certain level, render transparent.

Exactly. Hmm, that's a clever (too obvious, but not!) idea. If I had read that before I started making my attachment, it might have made it in here. I'll try that.

psonice wrote:
I've got another version of this shader somewhere that does a similar method, but doesn't make the sides transparent - instead it creates a 'bulge' in the object so the sides are concave and hidden from sight. I'll see if I can knock together a 'transparent sides' version though.

Don't feel pressed on that, at least on my account; it's one of those things where, now that you've said it, I immediately realize how to get it going in GLSL, but then it's the OpenCL conversion that I'm pondering...

I guess I'm interested in building my routines for this stuff in OpenCL so that I can possibly render things quicker, use the built-in shadow engine simply, and then do subsequent vertex processing. Not that the stability and speed of GLSL isn't a plus; it's really ideal to have routines based on both routes.

psonice wrote:

Quote:
This is a pic from a depth map test I did when I was messing, where OpenCL is used instead of GLSL. Because of the depth map being rendered as mesh, you can add shadows. This is nothing original, just simple white for z, like the shader you posted. The generation of correct UV's for retexturing was giving me a headache with the kernel wanting to flip out and do real fun stuff to my computer as I was perfecting the UV setup (still haven't). Kinda waiting to get my hands on one before I mess around with any more of the processing code.

That looks pretty good. Why do you need to generate the UV coords though? If you're generating the mesh from a depth map, and the depth map is generated from the same camera data you're going to texture the mesh with, then the texture coords for the colour texture are the same as the texture coords you're sampling from to generate the mesh.

Or in other words, the .xy coords of each vertex are the same as the uv.xy coords. Or am I wrong there?

Yeah, the coords are the same; you aren't missing anything at all on that. The calc is exceedingly simple. The issue is that with OpenCL, stuff wants to flip right out (at least on some builds/GPUs) if the Depth Map and Color Map aren't exactly the same width and height. In addition, if you start hot-swapping images on the fly while the Viewer is running, stuff can go seriously awry, because now there's missing data for a frame (or whatever).

The plus, though, is that you can render a vert for every pixel if you want (there isn't the GLSL res cap), the blazing speed on many GPUs/builds, and all of the subsequent vert/normal/color struct post-processing that you can't do with GLSL... so I'm committed to fleshing out OpenCL methods, even if editing a kernel turns my monitor into a strobe machine every so often. It's directly accessing the GPU, so what do I expect, I guess...

Attached is an OpenCL kernel that has inputs for the depth map as well as the texture. If anyone uses this, please read my notes in the qtz.

Attachments: OpenCL DepthMap and Texture.qtz (46.16 KB), OpenCL DepthMap and Texture.png (429.37 KB)

dust's picture
Re: KinectTools v0.1

Cool, I was just messing with the GLSL patch from Chris. Pretty funky. I'm not really into the Rutt-Etra effect so much, although I do like running fluid sims through height field patches, but with the Kinect it's pretty damn interesting. I'm going to try putting a mirror behind me to see if I can get any more dimensional data, since a two-cam setup won't work until someone figures out how to offset the point cloud. Here are some pics from my tests with psonice's GLSL patch and a blob test with oF. I'm actually getting a whole hand registered as blobs, as well as fingers. Really awesome; can't wait to play more. I need to do some homework now, though. Maybe I can figure out a way to incorporate the Kinect into my homework.

Attachments: kinectfingers.png (1.07 MB), kinecthand.png (1.08 MB), kinectpsonice_qc.png (1.35 MB), kinectpointcloud_qc.png (1.8 MB)

waxtastic's picture
Re: KinectTools v0.1

A fix for the speed issues on OSX is now available: https://github.com/OpenKinect/libfreenect/issues#issue/22

stuart's picture
Re: KinectTools v0.1

Cool, can't wait to see that improve along with the occasional glitch frame.

Found this page today on all the USB sniffing going on with the Kinect. This should give us access to the other parameters down the road.

http://ladyada.net/learn/diykinect/