OpenNI

gtoledo3

http://openni.org/

"Example: The following video, which demonstrates real-time skeleton tracking, was produced using the OpenNI-compliant NITE middleware by PrimeSense, running on their PrimeSensor depth-sensing hardware, all connected using the OpenNI framework."

...and the NITE middleware SDK is now available as well.

http://www.primesense.com/?p=515

(... and some interesting related papers.

http://jamie.shotton.org/work/publications.html )

gtoledo3
Re: OpenNI

Kind of looking through docs...

If there is a built-in FOV function in the Kinect for the depth image, that is intriguing, because it could help maximize effective resolution. I don't know if this capability applies only to the PrimeSense reference design, or whether it is something purely "software" and not a hardware capability at all.

"Depth Generator: An object that generates a depth map. Main functionalities:

- Get depth map: Provides the depth map
- Get Device Max Depth: The maximum distance available for this depth generator
- Field of View property: Configures the values of the horizontal and vertical angles of the sensor
- User Position capability"
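
As a rough illustration of why that Field of View property matters for effective resolution, here is a quick calculation of how much real-world width one depth pixel covers at a given distance. The ~57° horizontal FOV figure is an assumption on my part (a commonly cited value for the Kinect depth camera), not something from the OpenNI docs:

```python
import math

def width_per_pixel(h_fov_deg, pixels, distance_m):
    """Real-world width (m) covered by one pixel at the given distance,
    assuming a symmetric pinhole-style horizontal field of view."""
    half_width = distance_m * math.tan(math.radians(h_fov_deg) / 2)
    return 2 * half_width / pixels

# Assumed ~57 deg horizontal FOV, subject at 2 m, QVGA vs VGA width.
for px in (320, 640):
    print(px, round(width_per_pixel(57.0, px, 2.0) * 1000, 2), "mm/pixel")
```

So doubling the horizontal resolution (QVGA to VGA) halves the footprint of each pixel; narrowing the FOV, if the hardware allowed it, would do the same without changing the pixel count.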

This also seemed intriguing: resolution options. Again, I have no idea if this is applicable to the Kinect at all:

"The following options can be used:

- Depth: Sets the resolution of the depth to either QVGA or VGA. If not specified, depth is off. If no resolution is specified, QVGA is used.
- Image: Sets the resolution of the image to either QVGA or VGA. If not specified, image is off. If no resolution is specified, QVGA is used.
- Verbose: Turns on the log
- Mirror: Sets the mirror mode. If not specified otherwise, it uses whatever was configured.
- Registration: Changes the depth to match the image.
- Framesync: Synchronizes between depth and image
- Outdir: The location where the oni files should be created. The default is the execution directory.

Note: Keep in mind the amount of memory used to store the frames:

- 1 second, QVGA depth: 30*320*240*2 B = 4500 KB
- 1 second, QVGA image: 30*320*240*3 B = 6750 KB
- 1 second, VGA depth: 30*640*480*2 B = 18000 KB
- 1 second, VGA image: 30*640*480*3 B = 27000 KB"
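
The quoted sizes work out exactly if you assume 30 fps, 16-bit depth pixels, 24-bit image pixels, and 1 KB = 1024 B. A tiny sanity-check sketch:

```python
def recording_kb(fps, width, height, bytes_per_px, seconds=1):
    """Raw buffer size in KB (1 KB = 1024 B) for an uncompressed recording,
    matching the sizes quoted in the OpenNI docs above."""
    return fps * width * height * bytes_per_px * seconds / 1024

# 30 fps; depth is 2 B/px (16-bit), RGB image is 3 B/px.
print(recording_kb(30, 320, 240, 2))  # QVGA depth
print(recording_kb(30, 320, 240, 3))  # QVGA image
print(recording_kb(30, 640, 480, 2))  # VGA depth
print(recording_kb(30, 640, 480, 3))  # VGA image
```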

photonal
Re: OpenNI

This clip really makes me laugh. I wonder why it is that the real person starts behaving/moving like he's a stick man with limited articulation!! :)))

(Although of course; top marks for the technology!)

gtoledo3
Re: OpenNI

Some stuff that is apparent after more reading:

When Microsoft was developing this, at one phase you had to get into the "weird orangutan pose" and have the bones lock on; the "stick man" demo. That looks like exactly what they've actually given people here, instead of the approach Microsoft typically uses to keep track of people and assign body parts (?), which took it out of being janky.

So, while I'm eager to really check out what they have going on with the skeletal tracking demo, after really looking into it I tend to think there are more logical approaches, and that this is a kind offering of an interim step that Microsoft actually abandoned, because no one wants to stand like this, and it loses track, at least according to some presentation notes I have somewhere.

Also, it is Windows/Linux currently, and still in a state of flux. It looks like there are people interested in, and working on, integrating it for Mac. Right now people are just trying (struggling?) to get stuff to perform as well as the "hack/Kinect USB intercept" stuff that has been happening. That's to be expected; there have been statements that this was planned for January, but rushed out as an alpha.

It is really cool that PrimeSense, Willow Garage, PointCloud etc. are backing this, especially since PrimeSense is one of the major companies involved. It looks like many issues are being sorted out; interestingly, there are issues with people who had Kinect firmware updates not being able to use the source properly. It seems like there are differences between the PrimeSense reference unit and the Kinect that have given people some trouble, though the aim looks to be to support both.

The driver SDK seems interesting and exposes some cool options, but I'm not sure how much applies to the Kinect. I'm also uncertain whether the OpenNI stuff is actually as performant, speed-wise, at the moment. It sounds as though right now it's a step backwards, with a possibility of eventually working better.

SteveElbows
Re: OpenNI

I installed Linux today just so I could try the skeletal tracking demo. It's not perfect, but it's good enough, I reckon. I wasn't sure if or when proper skeletal tracking would become available, so I am extremely happy with this development. Tomorrow I will see if I can get the positions of joints sent out via OSC; then the fun can really start. I suspect I may not have the programming skills necessary to do this properly myself, but if I am very lucky I will get something going.

SteveElbows
Re: OpenNI

Yay, I have managed to get joint positions out via OSC, oh joy :) I've not looked at joint rotations yet.
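
For anyone curious what "joint positions out via OSC" involves at the byte level, a minimal OSC 1.0 message with float arguments can be packed by hand with just the standard library. The /skeleton/1/head address layout below is made up for illustration; it is not necessarily the scheme actually being used here:

```python
import struct

def osc_pad(data):
    """Pad bytes with NULs to the next multiple of 4 (OSC 1.0 alignment)."""
    return data + b"\x00" * ((4 - len(data) % 4) % 4)

def osc_message(address, *floats):
    """Build a binary OSC 1.0 message whose arguments are all float32."""
    addr = osc_pad(address.encode("ascii") + b"\x00")
    tags = osc_pad(("," + "f" * len(floats)).encode("ascii") + b"\x00")
    args = b"".join(struct.pack(">f", v) for v in floats)
    return addr + tags + args

# Hypothetical joint message: x/y/z position for one joint, in metres.
msg = osc_message("/skeleton/1/head", 0.02, 1.45, 2.10)
# It could then go out over UDP, e.g. socket.sendto(msg, (host, port)).
```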

SteveElbows
Re: OpenNI

Slowly, slowly I get towards the point where I can have fun with the data this offers.

http://vimeo.com/17772651

dust
Re: OpenNI

Nice, the OpenNI solution looks promising.

gtoledo3
Re: OpenNI

That's excellent! Some people got it compiled and running on Mac today (the actual OpenNI) as well.

There will be a bit to be learned!

I'm pretty convinced that the skeletal tracking that Microsoft is using is somewhat fundamentally different, or more fleshed out.

I suspect it's based on detecting the main "blob" of the body with optical flow and the depth channel, assigning an ID to that, and detecting the face front-on and/or in profile.

Then, by looking at the form of the body, it's easy to start breaking it down. If you have the face, you can say "this half is left, this is right." You can color-code the body area based on that definitive coordinate and the body mass, and return the max color coords for different combos, and all of a sudden you have your hands and feet. This is without any registration step.
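
The "max coords" idea can be sketched in toy form: given a binary body mask, take the extremal body pixels as crude hand candidates. This is just an illustration of the concept (pure Python, made-up mask), not anyone's actual pipeline:

```python
def hand_candidates(mask):
    """mask: 2-D list of 0/1 body pixels. Return ((y, x), (y, x)) for the
    leftmost and rightmost body pixels -- crude left/right hand candidates
    when the arms are out to the sides."""
    pts = [(y, x) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    left = min(pts, key=lambda p: p[1])
    right = max(pts, key=lambda p: p[1])
    return left, right

# Tiny synthetic mask: a vertical torso with one arm stretched to the right.
mask = [[0] * 8 for _ in range(6)]
for y in range(1, 5):
    mask[y][3] = 1        # torso column
for x in range(3, 8):
    mask[2][x] = 1        # arm row
left, right = hand_candidates(mask)
```

A real version would of course work on the segmented depth blob and use the face position to decide which extremity is the person's own left versus right.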

By comparing frames and looking at the face, you can start telling if arms cross, or if someone is backwards, or sideways.

It also becomes reasonable to assign any gesture and have that be the signal to initiate a retrack or "start game", instead of the "arms up in the air" pose. Someone posted a gesture recognizer not too long ago that can be used for this exact purpose. Optical flow is going to generate color areas we can use to enable stuff like a reset gesture (among many possibilities).

I'm interested to see how they have the kinematic/joint stuff working. I haven't figured out any really awesome way to get that one part of it going. I suppose one could test edge-detect pixels between the hand (or foot) and torso, and use that to make the point and angle. We can step it up to just use an arbitrary gesture, or optical-flow edge detection plus or minus face recognition, to initially lock the skeleton.
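
Once you have an extremity point and a joint/torso point, the "point and angle" part reduces to atan2. A tiny sketch, with hypothetical pixel coordinates:

```python
import math

def limb_angle_deg(joint, extremity):
    """Angle of the segment from a joint (e.g. shoulder) to an extremity
    (e.g. hand), in degrees from the +x axis, using (row, col) image
    coordinates so +y points down the image."""
    (jy, jx), (ey, ex) = joint, extremity
    return math.degrees(math.atan2(ey - jy, ex - jx))

# Hypothetical (row, col) coordinates for a shoulder and a hand:
angle = limb_angle_deg((100, 200), (100, 260))  # arm straight out: 0 deg
```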

This can potentially be much richer than what Microsoft is actually using OR the OpenNI release, if we take this OpenNI "middle step" and add to it. In fact, OpenNI seems to be defined as "here's some stuff, and whatever processing you do is suddenly OpenNI, because you're using this SDK to get the image info." I may be really oversimplifying, but it's viewed as modular, in that you can add in your own skeletal tracking system rather than necessarily using their examples.

They have stuff like body ID-ing (sans skeletal tracking) going OK in sample projects. I'm curious how/if we can use that stuff to riff off of and improve on.

There's this Murphy's law: if you have to do the arms-up-in-the-air pose to make it work, there is some scenario where that's not going to work for the project. So I want to be careful to make clear that I'm ultra positive about this; I'm just trying to isolate what I see as a weakness off the top so that we can get rid of it. I feel like, through some smart balancing of the tools we have available, this is going to be monstrous.

It's so nice and exciting that you have that working via OSC just like that. Awesome, man. Great stuff!!! There are so many things that will be made apparent with all of this, and we got handed a nice set of libraries (I think?).

franz
Re: OpenNI

Excellent, Steve! Doesn't OpenNI compile on OS X?

gtoledo3
Re: OpenNI

No, not yet... there have been people working on getting the Linux version running on OS X, and they just started getting versions compiling yesterday. That should be integrated soon (hopefully).

gtoledo3
Re: OpenNI

I find myself very interested in the IR feed, since it will align perfectly on top of the depth image.

mattgolsen
Re: OpenNI

I think this video might be relevant to your interests George:

http://www.youtube.com/watch?v=bQREhd9iT38

mattgolsen
Re: OpenNI

OpenNI Unstable build now available for Mac OS X

https://github.com/OpenNI/OpenNI/tree/unstable#readme

offonoll
Re: OpenNI

And this is the performance resulting from the virtual avatar singer: http://www.youtube.com/watch?v=dgjfpoIA054&feature=player_embedded

Found it here (also some very interesting projects): https://docs.google.com/present/view?id=df7rw7vz_338cz6ngnd6