Dynamically identify and track moving objects (edges and/or x/y coordinates)?

krikael

Dear Kineme addicts,

I'm so glad I found this forum. I have spent a few hours browsing and already learned a lot.

I am a tree hugger working on magnifying nature experiences through art. I am currently working on an interactive forest floor, inspired by Avatar and by Perilin, the Night Forest from The Neverending Story. Basically, I want the forest floor to react to visitors as they enter, in beautiful visual ways: through light waves, light animals, etc.

I have been able to hoist my projector and Kinect 40ft into trees, import the depth image from Kinect into QC with Synapse, and play a bit with simple effects (City Lights.qtz). Fun already. But:

Right now, my bottleneck is turning the movement of my visitors (the depth image) into beautiful visuals. I'm thinking of light waves on the forest floor that get brighter / higher in amplitude as you walk through them. Or of light bugs/fairies that dance at a safe distance from the person trampling through. Or waves that originate from the visitor, like drops falling into water. Either way, it seems I need to:

1) Find or develop patches that dynamically identify multiple (bright) objects from an image on the fly (i.e. no "reset" necessary) and simplify them to a format that other visual patches can use (x/y coordinates or edges) - see the sketch after this list.

and/or

2) Find or develop patches that turn such extracted data into waves, animals, drops, or any other visuals that pop up in your mind as you read this.
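
To make step 1 concrete, here is the kind of thing I mean, sketched in Python with OpenCV rather than in QC. It is purely illustrative, and the threshold and minimum-area values are made up:

    # Continuous bright-blob detection: runs for hours, no "reset" step.
    # Illustrative only; threshold (200) and min_area (500 px) need tuning.
    import cv2

    def find_blob_centers(gray_frame, thresh=200, min_area=500):
        """Return (x, y) centroids of bright regions in a grayscale frame."""
        _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
        # [-2] picks the contour list regardless of the OpenCV version
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        centers = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue  # ignore small noise blobs
            m = cv2.moments(c)
            if m["m00"] > 0:
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centers

    cap = cv2.VideoCapture(0)  # stand-in for the Kinect depth feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(find_blob_centers(gray))  # x/y coordinates other patches could use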

Fact is, I desperately need some guidance. I have browsed a few forums but remain in the dark about how long the learning path will be; in particular, I don't know what has already been done. I may even be lacking the vocabulary to search. I found Kineme/OpenCV motion tracking, but it looks like it needs to be reset every time ("Good points to track") - and I want to have this thing running for hours on its own. As a programmer (WAMP/R), I also feel there are hundreds of potential solutions and awesome-looking ideas out there, but I haven't learned the language to identify and find them.

So here's my question: What would you want to appear on a forest floor as you walk through it - and how would you try to go for it, given your knowledge of existing patches? I may dive into Xcode, but that will be a loooong journey.

If it's good, I'm happy to come over to your side of the woods for a night-time installation :)

Much love,

Christoph

P.S.: I also have different-colored strong laser pointers that a camera can track. How do I "burn" these patterns into the forest floor with QC? :)
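
Just to show what I mean by "burning": the idea would be to track the colored dot and accumulate its positions into a persistent canvas that the projector shows back. A hypothetical sketch, again in Python/OpenCV - the HSV range is an arbitrary example for a green pointer:

    # Track a bright colored laser dot and "burn" it into a persistent canvas.
    # The HSV bounds are made-up values for a green pointer; tune per color.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    canvas = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if canvas is None:
            canvas = np.zeros(frame.shape[:2], dtype=np.uint8)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (45, 100, 200), (75, 255, 255))  # bright green-ish
        canvas = cv2.bitwise_or(canvas, mask)  # dots stay "burned in"
        cv2.imshow("burned pattern", canvas)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break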

blackburst
Re: Dynamically identify and track moving objects (edges ...

Give Open TSPS a shot.

benoitlahoz
Re: Dynamically identify and track moving objects (edges ...

Did you have a look at my old OpenCV plugin: http://kineme.net/composition/benoitlahoz/BlobTrackingPluginCarasueloOpe... ?

The "Contour" plugin output coordinates (X, Y) for each found contour.

It's an old implementation, but it should do the trick.
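
If it helps to see the shape of that output, the plugin essentially does the equivalent of OpenCV contour finding. Here is a rough stand-alone sketch in Python (not the plugin's actual code), where each contour comes out as a list of (x, y) points:

    # Rough equivalent of the "Contour" output: one (x, y) point list per contour.
    # Not the plugin code, just an illustration of the data structure.
    import cv2

    img = cv2.imread("silhouette.png", cv2.IMREAD_GRAYSCALE)  # any mostly-binary image
    _, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]

    for i, c in enumerate(contours):
        points = [(int(p[0][0]), int(p[0][1])) for p in c]  # the contour as (x, y) pairs
        print(i, points[:5], "...")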

scalf
Re: Dynamically identify and track moving objects (edges ...

Hey Krikael,

It seems as though we are working along similar paths. I am looking for a solution to detect objects on a table and then project images onto them as they move or are interacted with.

I am looking for something that goes like this:

Kinect Infrared Feed -> Tracking -> OSC/TUIO.

Essentially you'll want to track in depth or infrared as any projections will create feedback if you are tracking in the RGB color spectrum. Although the feedback may be a desired effect sometimes, it makes accurate tracking difficult.

I found that Processing may be the answer for the "Tracking" step above. However, I think the OpenCV patches could also be used in some fashion.
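
For the "Tracking -> OSC" step, the shape of the thing would be roughly this (a Python sketch, assuming the python-osc package; the port and the /blob address are placeholders you would match on the QC side):

    # Sketch of "Tracking -> OSC": threshold the depth/IR image, find blob
    # centroids, send normalized coordinates out over OSC.
    import cv2
    from pythonosc import udp_client

    client = udp_client.SimpleUDPClient("127.0.0.1", 9000)  # placeholder port

    cap = cv2.VideoCapture(0)  # stand-in for the Kinect IR/depth feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        for i, c in enumerate(contours):
            m = cv2.moments(c)
            if m["m00"] == 0:
                continue
            x = m["m10"] / m["m00"] / gray.shape[1]  # normalize to 0..1
            y = m["m01"] / m["m00"] / gray.shape[0]
            client.send_message("/blob", [i, x, y])  # one message per blob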

krikael
Re: Dynamically identify and track moving objects (edges ...

Hi benoitlahoz,

Thanks a LOT for this suggestion! Downloaded and opened like a charm: I can see the contours of the image in QC. Will test further as soon as I'm home with my Kinect. Also found your video: http://vimeo.com/32836134

It seems a big step has been taken. Now, I guess the challenge is how to get from contours to beautiful visuals. Are you aware of anyone who has used your patch to create cool interactive visuals and made the code/idea available online?

Thank you so much

Christoph

P.S.: For instance, one thing that would look great is if contour lines extended from the object as ripples (as if a drop had fallen into water). From my knowledge of ArcGIS/Photoshop, I guess I would apply your patch every second, smooth the lines, compute buffers, and then iterate them over time. Admittedly, I do not know how to do any of these steps in QC. Yet. Any low-hanging fruit would help enormously :)
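
To make the ripple idea concrete, here is roughly what I have in mind, sketched outside QC in Python/OpenCV. It simplifies "buffers around the contour" to rings growing around each blob center, and the sampling interval, speed and fade values are invented:

    # Sample blob centroids once per second and let rings expand and fade
    # from them, like drops falling into water. Values are arbitrary examples.
    import time
    import cv2
    import numpy as np

    RING_SPEED = 60.0     # pixels per second
    RING_LIFETIME = 3.0   # seconds before a ripple fades out

    def sample_centers(gray):
        """Blob centroids used to seed new ripples."""
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        out = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                out.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return out

    cap = cv2.VideoCapture(0)
    ripples = []          # each ripple: (center_x, center_y, birth_time)
    last_sample = 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        now = time.time()
        if now - last_sample >= 1.0:  # "apply the patch every second"
            ripples += [(x, y, now) for x, y in sample_centers(gray)]
            last_sample = now
        ripples = [r for r in ripples if now - r[2] < RING_LIFETIME]

        canvas = np.zeros_like(gray)
        for x, y, born in ripples:
            age = now - born
            radius = int(age * RING_SPEED)                 # ring grows with age
            fade = int(255 * (1.0 - age / RING_LIFETIME))  # and fades out
            cv2.circle(canvas, (int(x), int(y)), radius, fade, 2)
        cv2.imshow("ripples", canvas)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break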

krikael
Re: Dynamically identify and track moving objects (edges ...

Will do, blackburst. Looks like a great lead! I downloaded OpenTSPS and am impressed at first sight. Your keyword also helped me find this thread: http://kineme.net/forum/Discussion/DevelopingCompositions/OpenTSPSOSCdat.... From what I read, it still looks like you have to use the mouse to define which blob you'd like to use for the visuals?

Will go from here. Before I do, another quick question: Have you seen anyone creating cool visual effects from this using QC? Looking at related QC patches would give creativity and development speed a real boost (call it a human genetic algorithm: take what works, mutate and cross-breed!) :)

Thanks again,

Christoph

krikael
Re: Dynamically identify and track moving objects (edges ...

Sounds great, scalf. Will proceed on my side next week with the above suggestions and see how it goes. Let me know if you find anything promising! :)

krikael
Re: Dynamically identify and track moving objects (edges ...

My goodness, Benoit, I just saw this video! http://vimeo.com/33500649. Bril-liant!

I think nothing comes closer to what I'm thinking of (the only differences are that the Kinect will be viewing people from above, and that the objects will move around at a safe distance from the identified person)!

Would you be interested in sharing some insights on how this was developed? Whether the entire QTZ file, or just a few thoughts on how you did it? That would be ENORMOUSLY helpful!

All best,

Christoph

jersmi
Re: Dynamically identify and track moving objects (edges ...

benoitlahoz -- are you selling/sharing your plugins? The text on your (awesome) Vimeo pages suggests this, but the link to benoitlahoz dot net is not working.

The Box2D stuff always looks cool for this kind of thing. MadMapper might be an interesting tool in this context, too. There is also the Delicode NI Mate app, but I think they bumped up the price a lot. Synapse is probably about as worthy.

benoitlahoz
Re: Dynamically identify and track moving objects (edges ...

Wow, I'm coming back to this post after a long while! Thanks a lot.

Actually, I'm thinking about sharing the plug-ins (maybe in the form of donationware, as there are about 60 plugins and long, long nights of work). The problem is that they are not ready. I'm the only user who can understand them for the moment :-/

The reasons are: I had a lot of trouble with the Box2D one, because I wanted the objects to be highly parametrizable (is that English? :-) ). Then I found that converting the input images into OpenCV ones, then outputting to another plugin to find contours, then playing with geometry, then feeding a Box2D plugin was very expensive in terms of performance. And, well... I still don't know how to share an always-up-to-date library between my plugins.

So I decided to make just one bundle of patches with: geometry and editing functions + computer vision + Box2D + conversions between the standard QC world and the plugins (mainly vertices, coordinates and images). It's called h[Oz] Insider.

It's not ready at all, because it's still very, very buggy, but I hope to publish it at the end of the summer.
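
To give an idea of the kind of geometry step I mean between the contour detection and Box2D, here is a generic OpenCV illustration (in Python, and nothing from the actual plugin code): each contour is reduced to a much simpler polygon before it becomes a physics body, which keeps the per-frame cost down.

    # Generic illustration, not plugin code: simplify contours to low-vertex
    # polygons before handing them to a physics engine. The 2% tolerance is
    # an arbitrary example value.
    import cv2

    def simplify_contours(mask):
        """Return low-vertex polygons approximating each contour in a binary mask."""
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        polygons = []
        for c in contours:
            epsilon = 0.02 * cv2.arcLength(c, True)    # tolerance ~2% of the perimeter
            poly = cv2.approxPolyDP(c, epsilon, True)  # far fewer vertices
            polygons.append([(int(p[0][0]), int(p[0][1])) for p in poly])
        return polygons  # each polygon could then become a Box2D body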

@krikael: I'm finally doing a performance with it on the 15th of June; I'll post (I hope) better videos. :-)

@jersmi: Thank you! I saw your comment on Vimeo. I don't really use the Kinect, apart from very difficult situations when I have to set everything up quickly. My purpose is to play with a clean silhouette, and the Kinect image is not very good... But how convenient it is!

By the way: I want to thank everybody here for the tons of advice, examples and plugins I found here. Thank you!

krikael
Re: Dynamically identify and track moving objects (edges ...

Hey benoitlahoz,

Sounds like you're on to something big. I'm excited to check out the videos after the 15th of June! :)

Is it possible that your plugin does not work with the Kinect (at least with Synapse)? If I start your plugin first (and it works), then Synapse does not start properly (doesn't even show the depth image). If I start Synapse first (and see the depth image), your plugin does not work (white screen)...

I thought I had found the solution, but it looks like I may have to start again. Have you ever used any other plugins to import the Kinect depth image into Quartz Composer that work with your plugin?

All best - and fingers crossed your performance will be a sweeping success!

Christoph

benoitlahoz
Re: Dynamically identify and track moving objects (edges ...

Here it's working for me with Synapse, but I've made so many tweaks that I don't know what could be causing the trouble.

I know I have a real problem with my image conversion method that results in various bugs (see for example http://stackoverflow.com/questions/14771847/opencv-distort-trouble-with-...). I can't find a way to resolve it. For a while I had a CVDisplayLink crash in VDMX, for example, but now it's working fine.

Did you try passing your depth image through a Render In Image (RII) and feeding the plugin's image input with that? Not elegant at all, but it could do the trick... Let me know.

Thank you for all your kind words! :-)

Ben