kinect depth image in meters to world map (Composition by dust)
here is an OpenCL patch that converts the Kinect depth image (in meters) into a hi-res mesh, meaning the vertex point cloud is over 300k points. i have included background subtraction, as well as parameters for mapping the RGB and/or the depth image to the model. for the og code surf to.... http://graphics.stanford.edu/~mdfisher/Kinect.html
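For anyone curious what the depth-to-world step boils down to: it's a per-pixel pinhole back-projection. Here is a rough Python sketch of that math (not the actual kernel; the default intrinsics below are assumed ballpark figures for the Kinect depth camera, not calibrated values):

```python
def depth_to_world(u, v, depth_m,
                   fx=594.0, fy=591.0, cx=339.5, cy=242.7):
    """Back-project one depth pixel into camera-space meters.

    (u, v) is the pixel position and depth_m the metric depth at that
    pixel. fx/fy/cx/cy are pinhole intrinsics; the defaults here are
    assumed placeholder numbers, not calibrated values.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

Run that over every pixel of a 640x480 depth image and you get the ~300k-vertex cloud the patch renders.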
All of these Kinect compositions are actually likely to make me an Xbox 360 consumer. Or can we run the Kinect entirely separately?
I had been looking around for better FireWire/USB computer cameras, liking the look of the Creative, but if the Kinect can run on a Mac by itself, sans Xbox, then it becomes a useful bit of kit in its own right.
Whatever, the kernels and patches provided make for pretty interesting constructs with even a default iSight.
Thanks for the exemplars, dust [oh, and all others getting giddy with the Kinect - keep on moving] :-).
You can run it entirely sans Xbox. It comes with an AC adapter that breaks the Kinect plug out to USB while providing power. I'm actually going to use it for my signage project instead of a touchscreen.
Does the kinect have a firewire output too?
No firewire.
If you get the sensor bundled with a new Xbox, you don't get the power/USB adaptor, so you can't use it with a computer unless you get the adaptor separately from somewhere. So it's better to buy the standalone sensor.
Looks like the colour and depth images there are a bit misregistered. Is that intentional?
a|x
useful advice - cheers
that is not intentional, tb. I have done some research, and a calibration is required. the issue is that the depth image doesn't render a checkerboard, so you have to calibrate using the RGB image. if you check the original source there is a link to how the calibration is done.
this guy teaches at Stanford and learned to program at Microsoft, so i believe him when he says that even after an RGB calibration things are still misregistered. so i decided to skip the calibration step, although I would like to see the difference; i don't have time today, as it's finals for me right now. but if you've got a Kinect, by all means follow the link, calibrate, and post a screenshot. my whole intention here is to get things mapped out as exactly as possible. with calibration the camera is accurate down to the centimeter; my results here are maybe a few centimeters off, as I did not calibrate.
You can't really start to get the color and depth images locked up unless you perspective-warp the color image. There may also be something going awry in the rasterization/lock-to-rendering-destination step.
Yeah, I noticed when I got my Kinect that the three cameras/sensors are quite far apart. Incidentally, I wonder, is there any chance of getting the IR camera working in QC?
a|x
This looks like it is trying to correct for the physical difference in lens location in the kernel, but it isn't taking lens distortion or other factors into account, and the noticeable misalignment is a result of that.
Correcting it would involve something like a CI perspective transform. The fact that the RGB camera sits in a different spot will always make this imperfect, whether this fellow learned how to program at Microsoft or not.
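To make the two corrections being discussed concrete, here is a minimal Python sketch: a Brown-Conrady-style radial distortion term, and a reprojection of a depth-camera point into the RGB image assuming a pure horizontal baseline between the sensors. The coefficients, baseline, and RGB intrinsics are made-up placeholder values, so treat this as an illustration of the math, not working registration code (a real calibration also gives you a small rotation between the cameras):

```python
def undistort(xn, yn, k1, k2):
    """Radial distortion (forward direction): ideal normalized
    coordinates in, distorted coordinates out. k1/k2 would come
    from a lens calibration; values here are placeholders."""
    r2 = xn * xn + yn * yn
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return (xn * s, yn * s)

def depth_point_to_rgb_pixel(x, y, z, baseline=0.025,
                             fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Reproject a 3D point from the depth camera's frame into the
    RGB image, assuming a pure horizontal translation between the
    sensors (the assumed baseline and intrinsics are placeholders)."""
    xr = x - baseline            # shift into the RGB camera's frame
    u = fx * (xr / z) + cx       # perspective divide + intrinsics
    v = fy * (y / z) + cy
    return (u, v)
```

The key point: because the reprojection divides by z, the shift between the two images depends on depth, so no single 2D warp of the color image can line everything up perfectly.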
that's interesting. to tell you the truth, i don't know enough about the hack to say whether it's possible or not; i have just started getting into the guts of the lib.
with that stated, i will take a guess and say it may not be possible without a hardware hack, as we would maybe have seen a third image by now, though updates do seem to be adding new features: accelerometer, tilt, motor data, etc.
The driver wasn't supplying the infrared image. My understanding is that there are possibilities for:
- Infrared image.
- Depth map (we have this).
- Color image (we have this).
- Higher-res depth map. I'm still reading up on this. The higher-res depth map functions do not work at full fps with the OpenNI driver, and it seems dubious whether they ever will, but we'll see. We're talking 5 fps here, from what people on the list seem to be getting.
- Possible FOV control.
One thing of note... people that did the firmware update via Xbox might be hosed for a little bit. A Kinect without the firmware update is working with the fewest problems at the moment (i.e., you don't have to fix/undo the update). I imagine it would be a priority to get this going.
Right now there are a fair number of bugs in what's been released, and a decent amount of "oh yeahhh, we forgot about that, we'll get it going this week". I'm liking the attitudes of those involved.
gt, you mentioned OpenNI running on Mac. Where would one get info on this subject?
That was my first thought for the Kinect: it will shoot through the glass of a storefront window (why not approach your local Apple Store ;-) ) where a touchscreen is hosed. Of course there are other ways to get gesture input through glass via a webcam and OpenCV, but this looks way more sophisticated and accurate (guessing, since I have no experience of either :-) )
From the OpenNI developer list.
Here... http://groups.google.com/group/openni-dev/browse_thread/thread/80809495f...
Oh, and here's an OpenNI video gallery... http://groups.google.com/group/openni-dev/browse_thread/thread/0e182ab0a...
They are porting Open NI to Mac ... this was written on Dec 13th. http://groups.google.com/group/openni-dev/browse_thread/thread/80809495f...
thanks for the links. i just joined the Google group; really want to try this out. does anybody know the best software package i could use to map the joints of my skeleton to a skinned model in real time? Autodesk MotionBuilder is a bit pricey, and when i bring my model into Unity without baking an animation, Unity thinks my hip bone is my knee bone.
very frustrating. any help pointing me to some source that will let me do rigid-body-type physics with a skinned model would be greatly appreciated. doesn't matter what language; i just really want to make something other than a stickman.
looking for software like this but for Mac, or open source: Processing, Java, ofx, Cinder, etc...
...ah that Miko thing! It's funny to see that one pop up.
i was just using this Miko as an example in case you didn't know what i was talking about. maybe i should look into the MSA bullet physics lib and stuff?
GLSL should actually be able to do the skin deforming and weighting you want. (not sure what your physics needs are)
This is very cool. However, I think I may have a problem: it is really, really slow, and that's on a 12-core Mac with tons of memory. On my laptop it just crashes. Any ideas what I may be missing?
this uses OpenCL, so most of the calculation is done on the GPU; having a 12-core machine will not improve this patch, but a bigger graphics card will, or you can resize the images down to something your card can handle. as far as your laptop is concerned, it would need to be an OpenCL-enabled machine in order to render this. hope that helps. there are various other methods you might be able to use CPU-side to do the same thing; you should try the 1024 Kinect plugin, which i think runs CPU-side, meaning your 12-core machine should crunch it no problem.
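As a concrete illustration of the resize idea: decimating the depth buffer before meshing cuts the vertex count quadratically. A nearest-neighbor sketch in plain Python (in QC you would resize the image upstream instead; the row-major flat-list layout here is just an assumption for illustration):

```python
def downsample_depth(depth, width, height, factor):
    """Nearest-neighbor decimation of a row-major depth buffer.

    Keeping every `factor`-th sample in each direction divides the
    vertex count by factor**2, which is often enough to get a weaker
    GPU over the hump.
    """
    rows = []
    for y in range(0, height, factor):
        row = depth[y * width:(y + 1) * width]
        rows.append(row[::factor])
    return rows
```

A factor of 2 turns the 640x480, ~300k-point cloud into roughly 75k points; a factor of 4 gets it under 20k.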
You can see if this works faster for you...
http://kineme.net/composition/gtoledo3/OpenCLDepthMapTextureMapKernel
Some graphics cards just don't support CL or GLSL well, even if you're using a tower (ATI)...
I wound up converting the mesh "correction" to GLSL.
Also, I'll note that I personally think this only applies to libfreenect, not OpenNI, which I believe already has the correction. (If anyone has any clarification on that, I'm all ears.)