OpenCV Patches

Hi, I know this is a hard one... OpenCV stands for Open Source Computer Vision. It is an open source framework (check it out on SourceForge: http://sourceforge.net/projects/opencvlibrary or here: http://www.cs.cmu.edu/~cil/vision.html ) that allows for all the kewl stuff you might like:
- Face detection (using the Haar cascade algorithm)
- Color tracking
- Feature tracking
- Motion detection
and so on....

Currently, this framework has been adapted to other graphical programming environments, such as:
- Max/MSP Jitter, with the CV.jitter lib (here: http://www.iamas.ac.jp/~jovan02/cv/ )
- VVVV, with specific nodes (mostly FreeFrames).

I could use these two apps to get the features, but: VVVV runs on Windows (and even with Boot Camp, it's NOT as fast as QC in terms of video), and Jitter is damn slow, even with the new UB.

Right now, apart from ReacTV (TUIO) and its fiducial trackers ( http://mtg.upf.edu/reactable/?software ), no lib exists for QC that allows for some image interpretation, so i think porting OpenCV to QC would be really interesting for a lot of ppl.

cwright's picture
Holy Crap

This would be so incomprehensibly cool :) but also incomprehensibly difficult... hmmm... this looks like fun.

cwright's picture
implementation

After playing with the samples for a bit tonight, it doesn't seem quite as incomprehensibly difficult (though still pretty close :)

I think it'll be patterned after the Texture Patch. That is, it'll have patches for the various functions, so you're responsible for chaining together useful stuff.

Are there any hot functions that are really useful to focus on? (There are a few hundred functions, so I don't think it would be wise to make a patch out of all of them.... many also appear to overlap with CoreImage filters, so those probably aren't as immediately necessary either.)

Thoughts?

franz's picture
HaarCascade

I think you are insanely cool. The HaarCascade / aka FaceRecognition is easily the most impressive function available in OpenCV. Any pattern recognition/learning is awesome. (Automatic) feature tracking is also uber-great. StraightLine finding/highlighting is nice. OpticalFlow to RealtimeImageStabilization seems more custom. CameraMatching is overkill (do they bundle this one?)

cwright's picture
Feature Tracking

I've not played with HaarCascade yet, but when I saw feature tracking working in real time, in crappy lighting, with an iSight, I almost fell out of my chair (and the tracking points followed :)

With face recog, you need to train it I think (still scouring the docs), so that might be impractical at first (it'd need to save state to be useful on the go). Line finding is cool and pretty simple.

The camera orientation stuff looks pretty cool (give it reference points, and it calculates the camera's orientation), when combined with ... other 3d stuff ... ;)

franz's picture
Face Recog

Training files for FrontalFace and ProfileFace Recognition are already included in the download bundle. You only need to train it more if you want custom object recog.

basically, training files are just XML files.... Look for HaarCascadeFrontalFace.xml

I'm glad that you see my point about camera orientation... and the earlier discussion we had about the need for cameras...

cwright's picture
You're right

shortly after my previous post, I discovered the xml file loading aspect, and the default recognition stuff (there's even a demo that does it :)

yanomano's picture
3d tracking

Are you saying that it can do 3D tracking in real time ?...:) yanomano.

cwright's picture
sort of...

Some of the demos I've seen can take a known object (like a chessboard), and determine how the camera is oriented based on how the chessboard appears in the image. Since a chessboard has many corners, it's probably a much simpler case than normal objects, but it's a pretty cool trick nonetheless :)
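
For anyone curious what that looks like under the hood, here is a minimal sketch of the chessboard-finding step using OpenCV's 1.x C API (an illustration only, not the patch's code; the board size and refinement settings are made-up example values). Once the corners are found, cvFindExtrinsicCameraParams2 can turn them plus the board's known geometry into the camera's rotation and translation.

#include <cv.h>

/* Find the inner corners of a known chessboard in a grayscale frame.
   Returns the number of corners found, or 0 if the board wasn't detected. */
int find_board_corners(IplImage *gray, CvSize board_size, CvPoint2D32f *corners)
{
    int count = 0;
    int found = cvFindChessboardCorners(gray, board_size, corners, &count,
                                        CV_CALIB_CB_ADAPTIVE_THRESH);
    if (found)
        /* refine to sub-pixel accuracy before using the points for pose */
        cvFindCornerSubPix(gray, corners, count, cvSize(11, 11), cvSize(-1, -1),
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 30, 0.1));
    return found ? count : 0;
}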

2D tracking with dozens of points is fully real-time, which was enough to impress me :) Makes the wii-mote IR stuff look rather weak :(

franz's picture
yes.....

http://openvidia.sourceforge.net/skeltable8.jpg It is not really 3D tracking, as PFtrack-like software ($14000+) would do (though one can dream.... it will be possible one day, sooner than expected); it is called 3D registration. Have a look at OpenVIDIA, which is an OpenCV port to the GPU; this will blow your mind: http://openvidia.sourceforge.net (didn't i post these links already? It seems i'm nearly the only one being totally wet about openCV)

or more generally, there: http://www.cs.cmu.edu/~cil/v-source.html

and you'll see that OpenCV, more than being open source and FREE, is also used in amazingly specific tasks, such as image-based skin cancer detection.

Remember EyeToy on the PS2? .... OpenCV. I've lost the link - i'm a messy guy - but OpenCV has already been embedded on an ARM processor (system on a chip) to make experimental vision-enhanced mobile phones.

This is more personal, but here's what i did using OpenCV (and vvvv): http://www.exyzt.net/tiki-index.php?page=SPPEGX3T (sorry but the blah-blah is in French...)

note for cod3rs: have a look at the very bottom of the page, in case you missed it: GPUcv, which is actually ported to window$ only. Don't you think it might be a good thing to consider using GPU-only libs right from the start? (In my experience, the OpenCV routines i used on PC were wrapped in FreeFrame DLLs, so they were running exclusively on the CPU, and it was quite slow - around a 50% framerate drop.)

cwright's picture
gpu-only

Without any working OS X GPU libs, I think I'll stick with the CPU-bound ones. The cool part about libraries is that you can swap them out for new ones, and the code that uses them doesn't need to change (much, usually).

Porting Cg/HLSL to GLSL (and fitting within the QC environment and getting reasonable performance) is a lot of work, so I'll probably not get into that in the near future... :) Definitely a cool goal/dream though.

franz's picture
understood

got it.

one day...

dust's picture
head tracking patch

so i have been waiting for the new revision of OpenCV. hopefully soon. in the meantime, if you want to do Haar cascade head tracking, have a look at this patch. it's using Java, OpenCV, OSC, and Kineme 3D. nothing special, just a simple head tracker using the standard xml.

thought someone might want to mess around with it until the revision comes out. it's running thirty+ frames a second and the poly count is something ridiculous like 2 million, which is larger than it ever really needs to be unless you're making Pirates of the Caribbean or something. just open the face_detect2osc file and the headcv file and mess around. like i said, you need Kineme 3D for this patch to work, but you could just add an OSC receive and use the namespace /x and /y on port 8000 and do whatever you want with the floats. they go from 0-256, so you don't have to use them as head tracking coordinates. have a look at the pic if you want.

merry christmas. oh, sorry about the model's size, remember less is more....

Attachments: pic.png (951.79 KB), application.macosx.zip (3.12 MB)

smokris's picture
Yes.

OpenCV is actually already on my (personal) to-do list and I hadn't gotten around to adding it on kineme.net yet.

Yes. Let's do this.

franz's picture
Feature tracking on the GPU... aggressive speed

Did you know about this one:

GPU-based Implementation of the Kanade-Lucas-Tomasi Feature Tracker using OpenGL. http://cs.unc.edu/~ssinha/Research/GPU_KLT/

And this one: http://openvidia.sourceforge.net/

Seems promising...

tobyspark's picture
more on reactable's toolkit

beyond reactable itself there are simple but brilliant things like "everybody can be wireless artist". [a meta-moment: i just searched for some links to it, and found myself in the video, playing with it at sónar earlier this year]. the video doesn't quite do it justice, but such is life: it's a live performance thing.

http://www.youtube.com/watch?v=k6XkP7VggxE

to my understanding, they downloaded the binary, edited its xml file, and the rest was done in ableton live. which ain't bad for "enabling technology".

this basic approach of searching for known symbols and outputting a corresponding keyed trigger is a powerful one that i think would be a good starting point in qc's search for computer vision.

franz's picture
reacTV already available for QC

hey toby, reacTV has already been compiled as a patch for QC, see here: http://www.tudra.net/wp/wp-content/uploads/2007/01/tuioclient_qc_0_2.zip/ and the performance is not that bad...

But OpenCV is really better (i think) because its libraries are wider. The algos inside are rock solid and developed by Intel. It has all the neat features one would want to, say, control a robot-like installation with QC (= Phidget Patch + WiimotePatch + OpenCV Patch... hehe)

tobyspark's picture
ye gads. how did i miss

ye gads. how did i miss that. très bien, merci.

franz's picture
vs GPU

u welcome... btw... all of these are pretty slow compared to what can be done on the GPU these days.... too bad the GLSL scene on the Mac is that weak (and still emerging) compared to the DirectX scene (and Cg).... i'll be more than happy if someone has a few links about that subject...

franz's picture
feature tracker

it seems that sam kass is developing a feature tracker (as seen on the QC dev list): http://www.samkass.com/blog/page2/page2.html plus, have a look at the optical flow Leopard sample plugin... let's hope things will be moving fast in the next months

tobyspark's picture
reacTVision on leopard

fyi i tried using reacTVision on leopard but couldn't get it to talk to qc properly, despite in theory having the right osc paths etc. i had a look at recompiling that plug-in for leopard but that's well above me. so we'll see where i get...

toby

cwright's picture
OSC trouble

(disclaimer: I know absolutely nothing about OSC)

From the discussion on the mailing list, there may be some problems (?) with the current OSC patch when paths are complex. I don't know enough to know the details, but there have been some interesting test cases in the past week that might help you out?

tobyspark's picture
which reminds me, my mailing

which reminds me, my mailing list subscription seems to have stopped while i was away, and that is definitely something that needs sorting out!

franz's picture
QCdev vol32 issue 35

it seems that QC's OSC doesn't support hierarchical keys. (at least that's the question; I couldn't find the answer when browsing my archives)

forrest's picture
yes!

Face detection would be wonderfully fun, especially for iChat filters. Reactivision is great for what it is, but is by design limited to position and rotation; motion is generally lost.

franz's picture
one very interesting link

https://picoforge.int-evry.fr/cgi-bin/twiki/view/Gpucv/Web/WebHome

open CV on the GPU ... only for window$ right now, but still interesting

cwright's picture
Understatement

I was just checking this out, and noticed that they're using OpenGL/GLSL (not HLSL like every other windows CV project I've come across). So porting this should be way easier than crafting GLSL from HLSL/Cg and all that jazz. I'll definitely be looking more into this. Thanks for the link.

franz's picture
source

Yeah, i didn't want to be too "pushy". They also provide the source, which seems to be platform independent and very understandable.

cwright's picture
CV Status Update

Ok, after 2 days of poking around, I've got feature tracking working, along with a couple other minor features (camera input, opencv image -> qc image, opencv rgb -> opencv grayscale for optical flow, etc).

It's mirrored, to be more intuitive.

Working on a few more functions, and some performance improvements... OpenCV, being very image-based, has its own tightly integrated image types. I'm not sure if I should spend time making wrappers to turn them into QC Images, or if I should keep them CV images for better performance (the screenshot shows 12fps, which is a bit lower than the normal 25-33 for opticalflow + cv-camera to QCImage display). Right now I'm planning on keeping them all CV images for performance, but it's not as intuitive if you don't know that they're not QC Images.
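
For reference, the camera-input and RGB-to-grayscale stages mentioned above look roughly like this in raw OpenCV 1.x C (a sketch of the library calls involved, not the patch's actual code):

#include <cv.h>
#include <highgui.h>

int main(void)
{
    CvCapture *cam  = cvCreateCameraCapture(0);   /* default camera, e.g. the iSight */
    IplImage  *gray = NULL;
    IplImage  *frame;

    while (cam && (frame = cvQueryFrame(cam)) != NULL) {   /* frame is owned by the capture */
        if (!gray)
            gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvCvtColor(frame, gray, CV_BGR2GRAY);      /* optical flow wants a 1-channel image */
        /* ... hand `gray` to the tracking stage here ... */
    }

    cvReleaseImage(&gray);
    cvReleaseCapture(&cam);
    return 0;
}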

psonice's picture
Motivation :)

A quick example of what people have done with 'feature tracking' type effects. It might not look like much from a still image, but everything here is actually tracking a video.

It starts off being very hard to work out what the video is, but gets more and more obvious. Stuff like this should be possible in QC with the CV tools. Keep up the good work!

franz's picture
curiosity

just curious: why would you need to turn CVimages into QCimages ?

CV images could be used to get the data from, while the QC image coming from Video Input could be used in an independent CI chain to be graphically processed. Are you using a CV function to display the features on a CV image, or are you feeding a sprite iterator with data coming from a CV structure?

psonice: this demo is awesome, so stylish !!! Please, keep these links coming !

cwright's picture
For Display/processing

Right now, in the screenshot above, the camera data comes from OpenCV's camera capture functionality (not the Video Input patch). So the whole processing chain uses OpenCVImages rather than QCImages, except where I needed to convert to a QCImage for the billboard in the background.

Converting between the two (QCImage, CVImage) appears to be unnecessarily expensive (I probably don't understand QC's ImageProvider class, which may provide a cheaper solution to this in the future), so I only perform that one conversion.

If they get to be transparently compatible, CI Filters could be used in place of CV filters for some powerful image filtering paths.

franz's picture
CVdriven GLsplines

first tests.... (testing the performance actually)

Attachment: EXYZT_cvTAG.png (186.3 KB)

franz's picture
and some more...

here:

Attachments: Picture 3.jpg (111.8 KB), Picture 6.jpg (173.2 KB), Picture 13.png (572.03 KB)

gtoledo3's picture
I like the look of this...

I like the look of this... it's not around as an example comp, correct?

tobyspark's picture
chris (+franz), heroes amongst men

sooooooo excited to see this coming to fruition.

forrest's picture
ditto

i concur... can't wait to mess with it

forrest's picture
Beta yesterday

Just released the first beta yesterday: http://kineme.net/OpenCVPatch20080207beta

andreahaku's picture
Access Denied

Hi, why do I have an "access denied" message to that URL even if I'm logged in? Isn't it possible to access the patch anymore? Thanks, Andrea

cwright's picture
beta patch

The OpenCV patch is still in beta, so you need to check the "I want to be a Beta Tester" box in your user profile.

andreahaku's picture
Thanks a lot. Sorry I'm new

Thanks a lot. Sorry I'm new to Kineme. Andrea

cwright's picture
no prob :)

Not a problem :) The bounce page ("Access Denied") isn't very helpful, I know. it confuses lots of people... stupid drupal...

smokris's picture
betas

I just added a comment about betas to the Access Denied page.

dust's picture
access denied

i have tried to access beta openCV ??? i have checked beta test etc... ????

cwright's picture
beta tester

You'll have to check the "I want to test new and upcoming patches" checkbox in your account profile to opt-in for Beta test access.

dust's picture
openCV beta

i would really like to have some openCV patch plugins for quartz composer. willing to run some haar training examples for you ?

dust's picture
openCV quartz...

would like to test this...i know it is out there i have seen it...have haar training experience will trade ?

psonice's picture
Looking good

This is starting to look good (which I guess means it's not far off being ready :)

What is the performance like, and how does it look realtime? Things like this usually look pretty random if you turn off the background and just keep the lines, until you see it running and the pattern becomes clear.

franz's picture
video

it is around 8/10 fps in realtime / 200 features being tracked at max. (MBP 2GHz / X1600)

http://www.exyzt.net/x_video/lab/DataSpace_InProgress.mov

psonice's picture
Looks a bit different from

Looks a bit different from how I imagined - the feature tracking is much more accurate than I expected. Is it possible to pick points along the edge of a shape (so that the splines would follow the pattern of the video)? Something like the picture I attached would be amazing - obviously that's way beyond what's possible with this kind of software, but I think it should be possible to do a similar effect.

And how about performance? Is it possible to boost it by using fewer features, or by optimising the code more? Although I think 8-10fps is probably acceptable at 2GHz - it should be fluid enough, and you can give the impression that it's running faster by overlaying effects running at full frame rate.

I'm very interested in putting this to some use btw, any idea of when a release will be available? :)

Attachment: 31746.jpg (21.42 KB)

cwright's picture
optimize

it took a performance hit when I switched from the 1.0 release to the CVS version of OpenCV (not what I was expecting). It's a chore to get it to build for both ppc and intel, so I'll probably not switch back for a bit. I've also spotted a few tweaks that might help a bit (redundant colorspace conversions in the Mac code), but they'll make it non-standard compared to normal CV. It also won't make an earth-shattering performance change either :( cv's remarkably slow.

I think the most promising route for 30+ fps would be to change it to GPUCV, though that's still a bit tricky (and having an intel graphics card will make it difficult to see actual performance increases :)

Franz, could you post the filter chain you're using, along with the parameters? As psonice noticed, it's really accurate, so you may have some of the tracking settings set to "Slow But Good" mode, if you know what I mean :) (there are lots of input params to control how it works)

franz's picture
settings

CV good points: 200, quality: 0.01, minimum distance: 20

window size: 20, iterations: 18, epsilon: 0.03

At 100 features and 15 iterations, i'm almost @12fps, which is quite nice. I'll also try resizing the CV image to 240 or 160 pix ...
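
For the record, here is roughly how those knobs map onto OpenCV's 1.x C API, assuming the patches wrap cvGoodFeaturesToTrack and cvCalcOpticalFlowPyrLK (a sketch with the numbers above plugged in, not the actual patch source):

#include <cv.h>

#define MAX_POINTS 200                              /* "good points: 200" */

/* prev_gray / curr_gray: two consecutive grayscale frames.
   prev_pts gets filled by the feature finder, curr_pts by the LK tracker. */
void track_features(IplImage *prev_gray, IplImage *curr_gray,
                    CvPoint2D32f *prev_pts, CvPoint2D32f *curr_pts, char *status)
{
    CvSize sz = cvGetSize(prev_gray);
    IplImage *eig  = cvCreateImage(sz, IPL_DEPTH_32F, 1);
    IplImage *tmp  = cvCreateImage(sz, IPL_DEPTH_32F, 1);
    IplImage *pyrA = cvCreateImage(sz, IPL_DEPTH_8U, 1);  /* pyramid buffers */
    IplImage *pyrB = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    int count = MAX_POINTS;

    cvGoodFeaturesToTrack(prev_gray, eig, tmp, prev_pts, &count,
                          0.01,   /* quality */
                          20,     /* minimum distance */
                          NULL, 3, 0, 0.04);

    cvCalcOpticalFlowPyrLK(prev_gray, curr_gray, pyrA, pyrB,
                           prev_pts, curr_pts, count,
                           cvSize(20, 20),            /* window size */
                           3,                         /* pyramid levels */
                           status, NULL,
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS,
                                          18, 0.03),  /* iterations, epsilon */
                           0);

    cvReleaseImage(&eig);  cvReleaseImage(&tmp);
    cvReleaseImage(&pyrA); cvReleaseImage(&pyrB);
}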

Having the CVsilhouette node would help locating good features only on outlines.

www.exyzt.net/x_video/lab/DataSpace_InProgress-01.mov www.exyzt.net/x_video/lab/DataSpace_InProgress-03.mov www.exyzt.net/x_video/lab/DataSpace_InProgress-04.mov

yanomano's picture
amazing

The tracking capacity is really impressive... and the precision is awesome... Your composition too, franz ...:) yanomano.

cwright's picture
careful with resizing

careful with resizing (as you've probably noticed, it doesn't like image size changes yet .. heh. unplug, change size, plug back in). Did you notice any loss in tracking capacity with a lower resolution?

I've found that removing/disabling the CV->QC image patch will raise me from 9 fps to 12ish fps.

Oh yeah: HoughLines! (generally a bit faster than feature tracking, but this screenshot says otherwise)

HoughLines demo

franz's picture
resize = crash

really... i just don't manage to get resize working...

Can HoughLines be used to help define good tracking points ? (should be)

Attachment: Picture 22.png (1.06 MB)

cwright's picture
Probably

I would think so.. If you feed it an unfiltered image (no canny filter with really high thresholds) it goes horribly slow (1.7fps... meh), so it's difficult to get several good points out of it, but the ones it gives are probably very good quality ones (image corners that are likely to be backed by real corners in 3-space)
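
In case it helps anyone reading along, the Canny-then-Hough chain described here looks roughly like this in the 1.x C API (a sketch; the thresholds and Hough parameters are example values, not the patch defaults):

#include <cv.h>

/* Edge-filter first (high thresholds keep only strong edges), then run the
   probabilistic Hough transform on the edge map. Returns a CvSeq of line
   segments (each element is a CvPoint[2]) allocated from `storage`. */
CvSeq *find_lines(IplImage *gray, CvMemStorage *storage)
{
    IplImage *edges = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);
    CvSeq *lines;

    cvCanny(gray, edges, 100, 300, 3);     /* example "really high" thresholds */
    lines = cvHoughLines2(edges, storage, CV_HOUGH_PROBABILISTIC,
                          1,               /* rho resolution (pixels) */
                          CV_PI / 180,     /* theta resolution (1 degree) */
                          50,              /* accumulator threshold */
                          30, 10);         /* min segment length, max gap */

    cvReleaseImage(&edges);
    return lines;
}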

[EDIT: totally sorry about the resize stuff. unplug whatever the resizer will plug into before connecting it. a disconnected port forces all the patches to flush their temporary memory, but they're not smart enough to do this when the input size changes (resize was a last-minute addition, and before that resizing was Not Possible... starting to sound like Pierre now :) I'll address that before the next alpha]

cwright's picture
video input

Thanks to some brief profiling by psonice, we noticed that simply taking CV Camera input and putting it on a billboard was unspeakably expensive and slow (thus limiting all other patches). I've implemented a QCImage -> CVImage patch that lets us use the built-in Video Input patch (which uses a much faster, multithreaded video capture method).

This has improved performance for me by maybe 20% (your mileage may vary), and allows us to use CIFilters instead of porting every OpenCV filter (a half-step towards using GPUCV). Alpha testers can expect another version sometime tomorrow (cleaning up a few other pieces). Also good to note: while the framerate may not be radically higher, the latency is dramatically reduced -- much less lag, which makes it feel way more fluid.
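
For what it's worth, one cheap way to get pixel data into OpenCV is to wrap the existing buffer in an IplImage header rather than copying it. This is only a guess at the sort of technique involved, not the patch's actual code:

#include <cv.h>

/* Wrap an existing BGRA pixel buffer (e.g. one obtained from a QC image's
   bitmap representation) in an IplImage header without copying the data.
   The caller keeps ownership of `pixels` and must keep it alive while the
   header is in use; release with cvReleaseImageHeader(), not cvReleaseImage(). */
IplImage *wrap_bgra_buffer(void *pixels, int width, int height, int bytes_per_row)
{
    IplImage *header = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 4);
    cvSetData(header, pixels, bytes_per_row);
    return header;
}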

psonice's picture
Excellent...

...for a couple of reasons (apart from the obvious performance increase :)

It'll be possible now to tune the input image with the standard QC filters so that the desired features are more easily tracked. Especially good if you want to track say an outline but not features within it, or features of a certain colour.

Also, this means that it's not necessary to use video as the source - you could use a QC composition instead.

Big thumb up! =)

On a side note, how do you capture a composition to save it out? Is there a tool on the mac, or do you just dump frames from within QC somehow?

franz's picture
capture

use Kineme GL read pixels / or render to texture, then use the Movie Exporter plugin.

Juz's picture
Match shape outline

Awesome topic.. btw I'm currently working on matching by shape outline (regardless of size) to images using OpenCV.. sadly, the guide for haar training, which comes with the OpenCV package, ain't specific enough... i'll share the project when it's done :)

Cheers Juz

franz's picture
open CV's latest

FilipeQ's picture
points coordinates

Anyone knows if/how i can get the point coordinates from the OpenCV Calc Optical Flow Pyr LK patch? i'm trying to make something move on the screen with my head :).

cwright's picture
use the structure

the output is a structure of X/Y points, so just use Structure Index Member to extract points, and Structure Key Member to extract X and Y data from those points.

FilipeQ's picture
Thanks

ahh... I had tried the first part but not the second :).

Thanks.

franz's picture
camera correction ... ?

as far as i remember, camera lens correction is part of the OpenCV suite. (?) I recently ran into such problems, aka lens-correcting a camera input so that horizontals and verticals are straight lines. I currently use a pinch filter (built-in) to undistort the camera image, but it is far from precise. Are these options still part of the OpenCV framework, and if so, would you ever consider porting them to QC?

cwright's picture
still there

It's still in there -- I've considered adding camera correction patches, but the patches take a matrix input, and I'm not sure how to obtain a correction matrix, or how the QC interface should be structured to accept one (do people build them by hand?)
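
For reference, the correction call in the 1.x C API takes a 3x3 intrinsic matrix plus a vector of distortion coefficients (the first one, k1, is roughly a "barrel amount"). Here is a minimal sketch with made-up placeholder numbers; real values would normally come from cvCalibrateCamera2 run on chessboard shots:

#include <cv.h>

/* Undistort one frame given camera intrinsics and 4 distortion coefficients
   (k1, k2, p1, p2). All of the numbers below are placeholders. */
void undistort_frame(const IplImage *src, IplImage *dst)
{
    float K[9] = { 700.0f,   0.0f, 320.0f,     /* fx,  0, cx */
                     0.0f, 700.0f, 240.0f,     /*  0, fy, cy */
                     0.0f,   0.0f,   1.0f };
    float D[4] = { -0.20f, 0.0f, 0.0f, 0.0f }; /* k1 alone ~ barrel amount */

    CvMat intrinsics = cvMat(3, 3, CV_32FC1, K);
    CvMat distortion = cvMat(4, 1, CV_32FC1, D);

    cvUndistort2(src, dst, &intrinsics, &distortion);
}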

franz's picture
matrix ?

strange... commercial packages (like PFtrack for instance) usually have a "barrel" / "pin cushion" param that you tweak until verticals appear straight. Or you know your camera and just enter the lens type / filmback values.

marcotaffi's picture
Haar detect objects

Hi! Does this "OpenCV haar detect objects" work? I could not find any example. I tried it... using a haarcascade_eye.xml as a cascade. But I couldn't get anything back!

Can you tell me about that? Is the output "rectangle" a simple structure?

thanks Marco (Italy)

cwright's picture
sorta

In this beta, it's broken, and you're lucky to get any [useful] output. In the next version, it works. There'll be an example composition to demonstrate how to use it as well. Its output is a structure that contains all the matching rectangles.
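
To give a feel for what's going on underneath, here is the usual cascade-loading and detection sequence from OpenCV's 1.x C API (a plain-C sketch, not the patch itself; the parameter values are just examples, and the cascade path is whatever XML you load, such as the eye or frontal-face files shipped with OpenCV). Individual rectangles come back via cvGetSeqElem(seq, i) as CvRect, conceptually the same data the patch exposes as its rectangle structure.

#include <cv.h>

/* Load a Haar cascade XML once, then run detection per frame. */
CvHaarClassifierCascade *load_cascade(const char *xml_path)
{
    return (CvHaarClassifierCascade *)cvLoad(xml_path, NULL, NULL, NULL);
}

CvSeq *detect_objects(IplImage *gray, CvHaarClassifierCascade *cascade,
                      CvMemStorage *storage)
{
    cvClearMemStorage(storage);               /* reuse the storage each frame */
    return cvHaarDetectObjects(gray, cascade, storage,
                               1.2,                       /* scale step */
                               2,                         /* min neighbors */
                               CV_HAAR_DO_CANNY_PRUNING,  /* skip flat regions */
                               cvSize(40, 40));           /* min object size */
}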

branchat's picture
nice

... I just wanted to ask the same. I get an output, and I've somehow managed to correlate the apparently random values to more or less the actual coordinates. However, without great success. Excited about this next version ;)

Regards,

Rob

mfreakz's picture
About OpenCV

Hi, the OpenCV patches seem to be one of the best projects for Kineme's members. The 3D Object Loader will be released near Christmas if i understand correctly? Do you plan something for OpenCV? I understand you've got a lot of work, i just want to know if there will be an update or a public release in the near future. I'm working on two projects with your OpenCV patches (Beta), and they're really the most impressive patches that we could add to an "interactive" software like QC... I'm waiting for this update...

Thanx for all.

gtoledo3's picture
If you checked off the beta

If you checked off the beta tester box in your preferences, when you log in, you should see the beta testing tab in your upper left corner. So, there you will find the OpenCV patch... soooo it is definitely available if you are on the beta testing program. Just fyi... :o) ....

EDIT: Whoops ....you are asking about final release, in which case, excuse me for butting in !!!

mfreakz's picture
Exactly !

Hi gtoledo, Yes i'm working with the beta, i'm on projects with those promising patches. I just asked whether cwright plans a new release coming soon because i'm deathly addicted to CV stuff... Maybe a new beta release with more patches, or increased performance... Maybe the first public release (stable and documented)... I expect a lot from this project (like the 3D importer and speech recognition...) I just need some news...

cwright's picture
notes

The next beta of CV (probably early January?) will feature a lot of performance improvements, and some bug fixes (haar detection, primarily). There are a lot of potential patches, but no one seems to want to tell me what they'd like (I've got a request for camera correction, but that's very complex and requires the user constructing transformation matrices, which is complex for the user as well). If you drop some idea, I can see what I can do. If you say "blob tracking!" you had 1) better know what you're talking about and 2) know how to explain it, so that you can teach me enough to get it working (I've never seen any working blob tracking demos with CV, so I don't have any reference points to know what I'm trying to accomplish)

Alternatively, there are a lot of other CV-like libraries out there as well. Some are mentioned on this site. It would be nice to integrate those as well (eye tracking, occluded face detection, etc). Please post URLs, licenses, and what features you'd like from those libraries -- makes my job easier :)

Please Please Please don't say "openFrameworks!" ;) (I Hate oF, after poking at it some... what a nightmare) [you can request features from it, but make sure it's implemented somewhere else too, so there's a coherent code base to work with]

franz's picture
blob tracking !

Blob tracking please ! jk

For an overview of OpenCV's functions, i would suggest having a look at: http://www.iamas.ac.jp/~jovan02/cv/

it is the implementation of oCV for max/jitter, coded by fellow Frenchman Jean-Marc Pelletier. Then, i would also suggest installing the Max 5 demo, then installing the openCV external and having a look at the examples. They are very well documented, plus you have sample patches showing what the functions actually DO. However, this port is also very unstable.

More generally, OpenCV functions are of 2 types (as far as i understand):
- image manipulation (thresholding and such): these can be reproduced with QC native patches.
- image analysis: returning an image (like optical flow) or returning values (like blob tracking: position, size and number of blobs, center of mass, pixel area...)

Image analysis functions are the only ones i'm personally interested in. So far, the OpenCV KnM port is practically unusable (you'll say "of course, since it's a beta", and you'll be right) due to poor speed and hiccups in specific plugins, like:

- Tracking: inability to toss out points / inability to sort points (never managed to, at least)
- Face Recog: buggish and very jittery
- Hough lines: inability to correct the camera, so finally, even if the plugin is working correctly, there aren't any straight lines in a cam feed.

However, i'm pretty sure a 1.0 release is close and doable, since the plugin you developed is already working well in some specific situations. OpenCV is a great addition to QC, and i'll never be thankful enough.

cwright's picture
thx

Thanks for the directed feedback (some of which you've already mentioned elsewhere) :) And for the link -- opensource, very handy, and it lists a bunch of features (so I know what's useful/what isn't)

are you basing this off the public beta, or the unreleased version (not sure if I sent that one to you or not...) -- should be faster, and have working face detection. (and I keep saying I'll release it "soon", and never get around to it... :/) point-tossing might be in there too...

How do you envision point sorting working? (That's not a feature of cv, but it's trivial to do ourselves -- I just have no idea what's needed)

As you noted, camera correction is a pretty big problem, and I have no idea how to realistically solve it. The built-in demo correction stuff produces garbage for me (wildly disfigured images, or nothing at all), and the matrices used to perform it aren't "user friendly" -- Not sure how to gather specific matrices for every camera out there, to make it a simple drop-down.

Good point about dividing things between manipulation and analysis, I think that's very accurate, and can probably help me focus on the analysis side, rather than the manipulation side. [that said, camera correction is technically a manipulation that can be done with CoreImage filters...]

mfreakz's picture
What is a "Good point to track" ?

I understand that undocumented wishes aren't very useful for you, but i'm not really proficient enough in CV stuff to talk technical... I'm very sorry about that, because adapting OpenCV to QC is my favorite thread among Kineme's projects! I've been working with the OpenCV beta for a while, and i think it's the most valuable addition to QC (along with the growing 3D, Audio and Video patches). I put some OpenCV in my MediaCenter project, and, for sure, like many of Kineme's users i'm secretly planning to make my own multitouch project ;)

So here is my poor contribution:

When i'm working with the current beta:

- It could be very useful to be able to reset points, and to identify them (using boxes, squares...) by their size, velocity, or other values. When a point is losing its target, it would be great to be able to kill or reset it, or to ask it to re-acquire the same characteristics.
- When i'm working on a finger-tracking screen/surface, i usually downscale my video input to a 40x30 pixel image to improve the tracking response and velocity. The Quartz2CV / CV2Quartz video conversion seems to use a lot of processing resources, and since we have to chain many native QC filters to improve tracking, it's very difficult to build a sophisticated composition on top of that... If the video filters (to improve tracking) have to be included in the OpenCV video "language", it could become very complicated in the future. I usually reduce the image size, increase luminance, use a kind of "difference key" technique, change contrast/gamma/exposure...
- It could be very useful to have a kind of "good point to track" threshold, based on the quantity or velocity of moving pixels.
- It could also be useful to have an image correction filter, including threshold, gamma, contrast, and color-to-black-&-white options (like a colors mixer to define the resulting B&W image); a good median blur could be very useful to kill video noise too.
- It would be great to be able to define what is pertinent to track, and to save this "definition" with the composition, in the patch settings.
- As the OpenCV patches don't seem to work with colors, I'm figuring that luminance is the only variation in pixel appearance that can be analysed. So we need tools to manage the luminance of our video input: a minimum & maximum quantity of moving luminance to be considered a "good point to track", and the luminance consistency should pass through a kind of tolerance option.

I'm very interested in face/eyes/hands/objects recognition too! It doesn't look easy to input a simple picture (photos/scans) and get basic recognition! It would be great to add a Collision patch in the next feature package! I know that a kind of zone collision is easy to create with QC, but as we have to manage a lot of these patches, a complete Collision patch could be very useful (also in GL Tools, if it understands Z positions..). Could you tell us what kinds of analysis tools are included in the OpenCV project? I'm sure you gave us a link but i couldn't find it...

-Franz: if you know about a French-language site documenting OpenCV, let me know !

cwright's picture
More Notes :)

Thanks for your input on this.

Some dev notes: Yes, the QC2CV and CV2QC image transfers are very slow -- this is a QC limitation, not something we can really work around. However, there's not much need to convert CVImages back to QC (just use the original QCImage), so that can save some time.

"Good Track Points" are points with high contrast from their surroundings, so they're easy to track as they move. points that match the background are harder to track accurately, because it can't find their edges as well as high-contrast points.

CV doesn't give size information about track points, so it's impossible to report how large a point is.

Most of your filter ideas are perfect for CoreImage (and would be hardware accelerated) -- don't use CV's resize/image processing patches unless absolutely necessary, since they're likely much slower than CI. (thresholding, luma, gamma, contrast, B&W, median blur, etc are all possible with CI without a lot of effort, and very fast.)

40x30 seems a bit small for tracking... 240x180 shouldn't be much slower even with a larger number of points.

Handling lost-point behaviour would be nice -- I plan on handling that better in the next released beta.

As for listing the CV analysis tools, there are a bunch, and the documentation is terrible for almost all of them. Shape detection (circles, squares, others?) is possible, some kind of blob stuff (not really documented at all), optical flow (point tracking), haar detection (face detection), some kalman filters (not possible in CI, as far as I know?), and maybe some other things I'm forgetting...

Collision is an interesting idea -- ties in with Kineme3D, ParticleTools, GLTools, and physics stuff. Not sure exactly what aspect of collisions you're looking for (possibly just simple hit tests?)

Training haar cascades from images is tricky, and requires lots of input to get it working. A single image probably isn't enough to train it to recognize it later. I don't know much about image recognition though, so maybe it is possible?

franz's picture
I have v0.1

I have v0.1

About point sorting, i just went through this document : http://www.opencv.org.cn/images/d/d1/Opencv_introduction_2007June9.pdf

and realized i should use canny pruning before auto-track in order to obtain contour-sorted tracked points ! (btw, the pdf explains really well the whole blob-tracking stuff... with a spy-cam example ;)

Camera correction is typically doable via Core Image filters. It is basically a pinch/barrel distortion filter. Since CI is boring to me, I tried to apply the video onto 2 pre-distorted 3D meshes, then blended them via the KnM 3D morpher and placed that in a Render in Image patch, but i didn't achieve good results yet.

oh, and here's a french forum about openCV: http://www.developpez.net/forums/f739/c-cpp/bibliotheques/opencv/

mfreakz's picture
Thanx

Merci ! I will check this French Forum...

gtoledo3's picture
I am really laughing out

I am really laughing out loud here... After Memo's suggestion, I have been looking at openFrameworks and it just pisses me off. I mean, for some reason, it isn't too stable on my system, and I've had stuff crash just by clicking on the render window with my mouse. (oh, I am editing this statement to make clear that this is totally on the front end of checking it out, and I've seen awesome things done with it or I wouldn't be checking it out in the first place).

I'll say EXACTLY what I want :o) I want to be able to setup something similar to the ar toolkit markers... but using 4 corners and color recognition of those corners, or of some swatch of color that is on the piece of paper or whatever.

Or something along the lines of being able to hold up an item that is red, and hook coordinates to a 3d transform that tracks x/y/z ...

Setting up stuff like face/ hand/ eye recognition would just be icing on the cake.

I would also like a "reset points" option.

Would it be possible to do something like your hough line example, but instead of generating lines, generating 3d positioned polygons that could actually be used with GLSL shading?... I'm thinking something along the lines of hough line, gl height field, rutt etra... mixed with openCV and kineme3D. It seems like some kind of depth map could be generated and transferred to a plane? I am clueless :o)

mitchellcraig's picture
A perhaps naive question...

Hi everyone.

Sorry to jump in at such a late stage in this thread and ask a potentially stupid question but i am relatively new to QC and its capabilities and i am looking for a quick solution to my problem.

Thus far i have created a patch that applies various filters to a webcam input (essentially creating a new photobooth effect) but i am hoping to add motion detection/tracking to it i.e. when a body moves in front of the camera it is recognized and the effect is applied only to it as it moves around the frame.

I have been able to add motion tracking using an existing patch but this involves using the mouse to plot points to track, which is not what i want.

Any feedback that could help either solve this or let me know it isn't possible so i can move on would be fantastic, thanks

gtoledo3's picture
That's pretty easy... if I

That's pretty easy... if I think I understand you correctly... I'll try to mock something up later, because I'm heading out right now (unless someone so kindly provides a sample first). Are you using Kineme openCV or the Apple Optical Flow Downloader?

mitchellcraig's picture
that would be great, thanks.

that would be great, thanks. i have the kineme open cv patch installed. i have included screen shots of what the patch looks like.... 'picture 2' is the content of the 'psychotic' macro.

Attachments: Picture 2.png (162.94 KB), Picture 5.png (49.31 KB)

gtoledo3's picture
So are you trying to have it

So are you trying to have it be completely normal, until someone moves in frame, or are you just trying to implement the NI Flip?

(also, this may end up being better suited to the OpticalFlow Downloader plugin of apple's, not sure)

mitchellcraig's picture
it shouldnt matter whether

it shouldn't matter whether it is normal until someone moves in frame or not, as static objects already in frame should not trigger the effect. The NI flip is merely to make the image appear mirror-like when viewed on screen. The ideal scenario would be that the image in frame appears normal and remains that way until somebody moves in frame and the effect is applied to their body - if they remain still the effect will stop.

gtoledo3's picture
Here you go... this is a

Here you go... this is a standard plugin, so put it in the regular plug-ins folder, not patches (if you don't already have it).

It's not that I have anything against the KinemeCV, it's just that I think it's more suited to OpticalFlow, or at least I "grokked" how to set it up that way easier.

I'll leave it to you to do the NI flip dealio...

BTW, if I were you I wouldn't really consider this a new filter... or at least I wouldn't try to sell it given that it is still Apple's thing (I don't know what their deal is with repackaging things like this untouched, or just adding a little bit like this). It also seems to my memory that the NI stuff needs its own plugin to work (maybe not all of them?), so I don't know how that would affect cross-compatibility (needing two plug-ins to make this work and all....).

Attachment: gt psychoticflow.zip (19.77 KB)

mitchellcraig's picture
Thank you so much gtoledo3

Thank you so much gtoledo3 for your efforts, but I'm afraid that what you have created isn't really what i am after. Perhaps if i put it in context you might understand what i am trying to achieve better: this is for an interactive installation. The intended use of the patch is to generate an image that will be rear-projected onto a wall within a picture frame. The frame will contain a hidden camera capturing a live video feed from in front of the picture frame, which is input into QC and output via the projector. It will therefore appear to be a mirror on the wall until someone walks in front of the camera and the effect within the patch is applied to the image of their body in the 'mirror'. This should cause some sort of shock and surprise to the person in front of the 'mirror', and it is intended that they 'play' with the effect and can potentially control it, i.e. by standing still it should stop. Again, perhaps QC is not the correct tool for this but i cannot think of what else to use. If you or anyone has any new suggestions or can offer me some help i would be very grateful. Thanks

gtoledo3's picture
All you should have to do is

All you should have to do is put your NI Flip thing at the beginning of the chain. Right after the camera, link the NI Flip, and then trace that noodle to each of the places that I have connected right now.

On my end, if I am still, this thing returns regular image, and if you move left or right, the "psychotic.qtz" effect starts.

mitchellcraig's picture
yeah, i have been able to

yeah, i have been able to implement the NI flip, but when i go to change the flow step or iterations (to presumably increase the frame rate to reduce delay? or is there another way to do this?) QC seems to crash. The effect that occurs uses the gamma, sepia, hue, bloom etc filters... i don't want them, i want the image to remain what is captured by the camera, with the addition of the distorted movement and such; however, when i remove them from the patch the effect remains the same. Do you know why this is or what i have to do to get rid of it?

gtoledo3's picture
You are probably still

You are probably still getting the original effects, because of the nature of sample and hold... you've probably sampled some of the effected image, and it's still retained. In those situations, if you stop the render window, and restart, it usually should "clear".

This should work for you. I think the throw/responsiveness of the image flow effect is probably done better in this example, and I also put a range on it before it goes to the multiplexer... which is probably also a better idea.

When you "run" this.... for about the first 15~30 seconds it really sucks on frame rate. I think that is because of all of the "sample and hold" going on (which can't really be changed since that is inherent to your effect). After that, it clears up and it gets a decent 15~20. You might get better results putting your image to a sprite instead of a billboard, and flipping the sprite and adjusting the size/aspect ratio of the sprite as desired (instead of using the NI Flip). I don't know, didn't spend time to check.

I put the NI filter in as well for you :o)

Feel free to use this one obviously, but give me shout if you use it for anything cool, just out of curiosity.

After looking at the finished result, I swear there is a FreeFrame filter that is very similar... but I don't think it responds to motion. I think it does take certain frame ranges and play them backwards though...

Actually... I feel bad that this has eaten up kineme bandwidth, when it didn't even end up having anything to do with kineme plug-ins (I can't figure out how you would do this with the openCV patch, without clicking a point first)! If you aren't signed up to the Quartz Composer Developer mailing list, you might want to look into that, and also look at the archives, because they are a great resource as well.

mitchellcraig's picture
hey! that sounds great

hey! that sounds great but... when you say 'feel free to use this one' are you referring to a new patch that you've made but maybe forgot to attach to the post? thanks for the additional tips as well! : )

gtoledo3's picture
D'oh. LMFBO. I'm including

D'oh. LMFBO.

I'm including another similar idea that was inspired by this discussion that works with v002 zoom blur.

leegrosbauer's picture
Face recognition

ManyCam has today released a Mac version of their virtual webcam that includes a moderately effective facial recognition feature.

I've tried it and the facial recognition seems to function ... kinda-sorta (although not a whole lot better) than the solutions that we are discussing here. It's worth looking at just to see it implemented in a commercial freeware application, I think.

That said, I'm still yearning for a solution via Quartz Composer. Aren't we all, I guess. :-)

gtoledo3's picture
Re: Face recognition

To be excessively nitpicky.... motion detection and facial recognition overlap, but for my motion detection comps, I have no DESIRE for them to work in a "facial recognition" manner.

To be fair, working facial recognition wouldn't be bad as well, and is also something I desire. I just see each method as distinct and with their own advantages and disadvantages.

I see AR as a little stronger platform... I mean, there are only so many uses for tracking an object to your face. But, if you can track it to a preselected marker, you are golden, and it is an easy "lock" for the camera. Tape the marker on your forehead at that point :o) That said, there are unfortunately some issues with that as well.... a BIG bummer.

leegrosbauer's picture
Re: Face recognition

gtoledo3 wrote:
To be excessively nitpicky.... motion detection and facial recognition overlap, but for my motion detection comps, I have no DESIRE for them to work in a "facial recognition" manner.
Understood. I'm sympathetic with it, even. This particular product reference would probably only have utilitarian significance to users who video conference with some degree of regularity (and who like tacky video effects to go along with their facial recognition).

But we're in a developer forum here with somewhat uniquely expanded perspective on CV. I would speculate that any of us who use a QC video input patch certainly do insert our faces into quartz comps at one point or another, and we do indeed also then apply our skills towards modifying that facial input. So, that interest area actually seems reasonably valid to me. And, if a face can be recognized .. so can other things. I'm happy to see development attempts in this area. I just wish it was in QC.*

*so I could make my own tacky facial effects in Quartz Composer. ahaha. Just kidding.

gtoledo3's picture
Re: Face recognition

leegrosbauer wrote:

But we're in a developer forum here with somewhat uniquely expanded perspective on CV. I would speculate that any of us who use a QC video input patch certainly do insert our faces into quartz comps at one point or another, and we do indeed also then apply our skills towards modifying that facial input. So, that interest area actually seems reasonably valid to me. And, if a face can be recognized .. so can other things. I'm happy to see development attempts in this area. I just wish it was in QC.*

*so I could make my own tacky facial effects in Quartz Composer. ahaha. Just kidding.

Oh yeah Lee, of course... facial recognition would definitely be cool...

Do you have to stand out of frame for the facial recognition to work, or does it automatically take with you in frame?

leegrosbauer's picture
Re: Face recognition

In my brief look at the app it had both approaches: step away for background replacement (like in iChat), and auto-facial recognition upon effect selection as a separate option.

I hope you understand that I'm not endorsing this app. Just pointing out that somebody out there has begun commercializing, if not monetizing, something for OS X that we recently had under consideration in this forum. I myself use a different virtual cam app that's QC compatible.

gtoledo3's picture
Re: Face recognition

Yeah, I also had someone mention to me that they had a facial recognition program for Windows, and that kind of piqued my curiosity as well... I was thinking that if it has to grab the background/do a chromakey effect, that maybe it's adding some kind of "oval" object detection to home in on the face.

I bet that it could work in QC in a method similar to the way an AR tracker is tracked. The caveat, is that the face itself would have to be able to be loaded AS a tracker. I wonder if a tracker was made with an ovoid shape, and kind of "population average" spacing for some blobs where the eyes, nose, mouth would go, if an AR type system could use that, or respond to it like it was a tracker?

It's definitely an interesting subject.... I would like something that could track hands or fingers in the air, automatically, without setting point data like in OpenCV... in addition to facial rec stuff! As long as I'm gonna wish, why not :o)

leegrosbauer's picture
Re: Face recognition

Well, you've called it precisely. Upon examination of the app's effects creation how-to pages, it can be seen that the application references a few different individual template parameters to achieve shape recognition. Interesting approach.

Of additional interest perhaps, I posted to their blog requesting Quartz Composer compatibility. They responded that they are working on it for the next release.

gtoledo3's picture
Re: Face recognition

I'm not surprised.... in some ways, tracking for AR and TUIO have some overlap in their use of setting up reference points.

For giggles...

... this is the company that has apparently implemented this for Apple... Omron's OKAO Vision- they not only do facial recognition, but attempt to do feature recognition.

http://www.omron.com/r_d/technavi/vision/okao/detection.html

Now, there is another company that has made an app called iLovePhotos (haven't tested it), that does facial recognition.

Looking back at some really basic/old info, it seems like the "simple" way to approach it is to try to track moving points, and from that, try to derive a "bounding box" that tracks width/height (talking about 2D methods, which need your face to be pointing directly at the camera). Once a bounding box is derived, the code is typically written to try to detect "blinking" eyes. The average position of the eyes/mouth is apparently derived from brightness intensity info, which tends towards certain averages.

This makes perfect sense... the eyes usually always fall at a particular y value between the top and bottom of a head, as does the mouth - thinking of art class stuff here :o) So, once blinking eyes are detected, it is "certain" that there is a human head, so then your "mask" can be imposed as an overlay, on top of your face.

One thing that I've noted is that, in about an hour of poking around on this last night... there are some pretty horrible frame rates quoted, and the authors of the code "beam" with pride. Stuff like 6fps doesn't exactly make me warm and fuzzy :o)

This is a page I found with some info about doing this in processing... have not tried it at all yet.

http://www.awgh.org/?p=21

leegrosbauer's picture
Re: Face recognition

Thanks for the links. Very informative.

And yes, it does appear to be looking for the eyes .. see my earlier posted remarks below in response to Franz.

In my own trials, the app was most capable at facial shape recognition when the image of my eyes was not obscured by eyeglasses. I also fed it facial imagery which had been modified by assorted combinations of QC image filters. Results varied and I'm not competent to quantify them, but I can report that the shape recognition still worked over a clearly observable but narrowish range of distortions.

As for framerates, I don't know what to think about that. I do realize that one of the primary user groups addressed here has a strong interest in inexpensive processor taxation. It's conceivable however that a broader general user group could emerge in the future which might not have quite the same constraints or needs. I mean, if somebody had asserted ten years ago that Apple would be focusing on portable telephones, I would have not only laughed but been disappointed. But ... as things turned out, that's where the most potential users were so that's where Apple went. No?

franz's picture
Re: Face recognition

Face Recog is actually able to tell a FACE from a COW (for instance) but it can't tell G.W.Bush from R.Nixon. Unless you train it with 10000+ photos of Nixon, maybe it will be able to distinguish him from the crowd. Maybe.

leegrosbauer's picture
Re: Face recognition

There is a discernible difference to be noticed between G.W.Bush and R.Nixon? Must be different guys than the ones I've heard about.

Regardless, that's a great observation Franz, and combined with George's comments it made me realize that I should have actually called the initial posting 'face tracking' instead of face recognition. Because that's what is accurate as well as being what is significant about the subject.

So anyway .. I looked a little closer. I fed the tracker a resizable and rotatable webcam image source and had it track my face in that source through various image positioning locations. I can report that the tracker seems to be strongly focused on eyes because it works way better when I take my glasses off. Additionally, I would now characterize the face tracking performance as actually being rather good, better in fact than I first thought. It can track a face and apply image modifications to that tracked face through a fairly large range of X, Y and Z positions as long as the movement is not rapid.

We should have this level of tracking capability in QC. It would be useful.

dust's picture
Re: Face recognition

here is a face recognition patch i use to play video games hands-free. you lean forward to go forward, you sit up and the guy jumps, etc...

here is a small clip of face recog. if you really are into it, as far as making your own cascade files goes, there is the .gov data set called FERET. it's funny, now facial recognition is used for social network image tagging and i use it to play video games; that's a far cry from using it to catch terrorists. actually, making a more biometric type of facial recognition is a lot easier, meaning if you are trying to recognize generic or all faces the data set has to be a lot larger, 5000-plus pos and neg training sets. if you just want to recognize your head for, let's say, a login, it takes a really small data set but with more tags. if anyone wants, i have compiled a haar xml builder app.

this is a processing script....

things like head tracking can actually be done with QC CV: just use the mouse to select your eyes as tracking points, then track the distance between the eyes if you want to spoof z depth. rotation is trickier.

import hypermedia.video.*;   // hypermedia OpenCV wrapper for Processing
import oscP5.*;
import netP5.*;
import java.awt.Rectangle;   // opencv.detect() returns java.awt.Rectangle[]
 
OscP5 oscP5;
NetAddress myRemoteLocation;
 
 
OpenCV opencv;
 
int contrast_value    = 0;
int brightness_value  = 0;
 
boolean left;
boolean right;
boolean up;
boolean down;
boolean space;
boolean shift;
 
 
double zfov=0;
double h=0;
double w=0;
void setup() {
  myRemoteLocation = new NetAddress("127.0.0.1",8000);
  oscP5 = new OscP5(this,12000);
 
 
  size( 320, 240 );
  frameRate(25);
 
 
    opencv = new OpenCV( this );
    opencv.capture( width, height );                  
    opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); }
 
 
public void stop() {
    opencv.stop();
    super.stop();
}
 
 
 
void draw() {
 
opencv.read();
opencv.convert( GRAY );
opencv.contrast( contrast_value );
opencv.brightness( brightness_value );
 
Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );
 
image( opencv.image(), 0, 0 );
     OscMessage leftMessage = new OscMessage("/left");
     OscMessage rightMessage = new OscMessage("/right");
     OscMessage upMessage = new OscMessage("/up");
     OscMessage downMessage = new OscMessage("/down");
     OscMessage spaceMessage = new OscMessage("/space");
     OscMessage shiftMessage = new OscMessage("/shift");
     for( int i=0; i<faces.length; i++ ) {
     float xpos = faces[i].x;
     float ypos = faces[i].y;
     println("y" + ypos); 
     println("x" + xpos);
     zfov = (faces[i].width)+(faces[i].height);
     w=faces[i].width;                          
     h=faces[i].height;
     h=faces[i].width;
     zfov=w+h;
     println("z" + zfov);
 
 
 
    if(xpos > 125)left=true; else left=false;
    if(xpos < 75)right=true; else right=false;
    if(ypos < 25)space=true; else space=false;
    if(zfov < 150)down=true; else down=false;
    if(zfov > 200)up=true; else up=false;
    if(zfov > 250)shift=true; else shift=false;
 
 
    leftMessage.add(left);
    rightMessage.add(right);
    upMessage.add(up);
    downMessage.add(down);
    shiftMessage.add(shift);
    spaceMessage.add(space);
 
oscP5.send(leftMessage, myRemoteLocation); 
    oscP5.send(rightMessage, myRemoteLocation);
    oscP5.send(upMessage, myRemoteLocation); 
    oscP5.send(downMessage, myRemoteLocation); 
    oscP5.send(shiftMessage, myRemoteLocation); 
    oscP5.send(spaceMessage, myRemoteLocation); 
 
   noFill();
   stroke(255,0,0);
   rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); 
 
     }
}
  void mouseDragged() {
    contrast_value   = (int) map( mouseX, 0, width, -128, 128 );
    brightness_value = (int) map( mouseY, 0, width, -128, 128 );
}
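
And for the "distance between the eyes to spoof z depth" idea mentioned just before the script, here is a minimal sketch of that mapping. The tracker supplying the two eye points and the 60-pixel reference distance are assumptions, not part of the script above:

// crude "fake z from eye spacing": the eyes appear closer together in the
// image as the head moves away from the camera, so an inverse of the pixel
// distance between them works as a rough depth value.
// leftX/leftY/rightX/rightY would come from a tracker (e.g. two QCCV points);
// the 60-pixel reference spacing is an assumption.
float refEyeDistPx = 60;

float fakeZ(float leftX, float leftY, float rightX, float rightY) {
  float d = dist(leftX, leftY, rightX, rightY);
  return refEyeDistPx / max(d, 1);   // larger value = further away
}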
 

gtoledo3's picture
Re: Face recognition

Yeah, the idea is to avoid having to manually set points... to be frank, I have some pretty well refined/non-standard OpenCV stuff in the bag (using the Kineme plugin), interfacing it with a variety of other techniques. Yet... having to manually set tracker points does make it less than ideal, and even totally unviable for a number of scenarios.

You just can't have anything where someone walks up to a screen and their face or hands are automatically recognized, because you have to set the point first... and then there is "point drift" to contend with as well :o(

Just to throw it out there.... it's pretty easy to set up the Kineme 3D Bunny Warp with some OpenCV.... Make the top of the bunny correspond to your index finger, the bottom to the thumb, and then control the gravity warp in the middle with your middle finger. I would post it, but I can't find the qtz, and I don't feel like writing it from scratch again. I never posted a clip either, boohoo.

In general, it is cool to track a point to your finger tip, and then use the values generated by your finger tips to control different kineme3D deformer patches, or to control 3D transforms, CI filters or whatever.... and record that with the value historian, for replay. Makes ya feel kinda like a puppeteer! It's a hit or miss method, and sometimes it's more trouble than it's worth.

gtoledo3's picture
Re: Face recognition

Similarly, you definitely can track a point, like Dusty is saying, to anything (your forehead, your hand) and then use a folder-of-images setup to auto-scroll through "masks".

This works particularly well with AR (which is a plus, because you don't have to set points at all) if you don't mind sticking a tracker to your forehead. Looks kinda silly if you don't have one of your "masks" on and people see you with a friggin' tracker on your forehead.

echolab's picture
Re: Face recognition

here is a very nice example of this technique, made with Animata and EyesWeb, from Kitchen Budapest:


Reverse Shadow Theatre from gabor papp on Vimeo.

dust's picture
Re: Face recognition

yeah, the image tagging thing i'm talking about is like an augmented reality pattern that marks areas of the face.

bernardo's picture
Re: Face recognition . Paging DUST

hi there Dust, i want your app for building the haar XML files.

thanks hugs bern

dust's picture
Re: Face recognition . Paging DUST

cool yeah i will see what i can do about finding the image tagger program. right now i'm in the process of cleaning out my junk room and turning it into an office so all my drives and things are not plugged in at the moment.

to get started you need to install OpenCV on your computer. i suggest using CMake to build the programs; there is a haar trainer included with OpenCV that you run in the terminal.

all my program really did was use QC to preview images from a folder and write out a file with a tagged spot indicating which part of each photo is to be trained on (something like the sketch below).
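
For reference, the positive-sample "info" file that OpenCV's sample-creation tool reads is just plain text, one line per image: the image path, the number of tagged objects, then x y width height for each one. Below is a rough sketch of such a tagger, written in Processing (the same style as dust's script above) rather than dust's actual QC composition; the folder name, output file name, window size, and one-rectangle-per-image behaviour are all assumptions for illustration. The training itself still happens in the terminal with the sample-creation and haar-training programs that come with OpenCV, as dust describes.

// Rough sketch of a positive-sample tagger (assumed file/folder names).
// Step through images in data/positives, drag a rectangle over the feature,
// press space to append a line to positives.txt in the info-file format:
//   <image path> <object count> <x> <y> <width> <height>
import java.io.File;

String[] files;          // image file names in the positives folder
int current = 0;
PImage img;
int startX, startY, endX, endY;
PrintWriter info;

void setup() {
  size(640, 480);
  files = new File(dataPath("positives")).list();
  info = createWriter("positives.txt");
  loadCurrent();
}

void loadCurrent() {
  // images are drawn scaled to the window, so the tagged rectangle
  // assumes the source images are already 640x480
  img = loadImage("positives/" + files[current]);
}

void draw() {
  image(img, 0, 0, width, height);
  noFill();
  stroke(255, 0, 0);
  rect(startX, startY, endX - startX, endY - startY);
}

void mousePressed() { startX = mouseX; startY = mouseY; }
void mouseDragged() { endX = mouseX;   endY = mouseY;   }

void keyPressed() {
  if (key == ' ') {
    // one tagged object per image: path, count, then x y w h
    info.println("positives/" + files[current] + " 1 "
                 + startX + " " + startY + " "
                 + (endX - startX) + " " + (endY - startY));
    info.flush();
    current = (current + 1) % files.length;
    loadCurrent();
  }
}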

it takes a long time if you want to be able to track features generically, like tracking all people's heads and not just your own. the example included with OpenCV uses a logo, which is fairly quick to train.

like i said, i will have to find it. i am able to find and track most body parts, from hands to eyes, with the included haar cascades or ones from the net, so it's been a year or so since i have put any real thought into OpenCV.

actually, since Kineme came out with its OpenCV plugin i haven't really done anything with it, except one day i tried building a blob tracker with vade's ofx-to-QC plugin, but i was unsuccessful at getting the tracked image back into QC from ofx. i should take another look at ofx for QC and see if it's possible now.

you will also want to assemble a library of background photos that do not contain the thing you want to track. i made a photoshop script that converts the images to the required format as well.

since i have upgraded my photoshop i will have to hunt for that again, as the OpenCV trainer uses a really old image format that is basically just an array of black and white pixel data. i think it's .ppm or .pcx or something like that.

just let me know how far you have got and if you can run just the haar logo example. i will see if i can resurrect the files... maybe we could work together on something.

i'm thinking that OpenCL might be able to do some nice image recognition things very fast. haar training can take a really, really long time, so i have had to come up with some other solutions i learned using cv.jit with Max years ago, like reducing histogram and threshold data to a mean you can use for matching, etc.

what are you trying to track ?

bernardo's picture
Re: Face recognition . Paging DUST

hey dust, thanks for the reply... i popped open openFrameworks on my computer and whipped up an application really fast... goddamn, it's not haar but it tracks the darkest part of an image and puts a rectangle around it so i can crop the original ones...

but the thing is kinda strange... i have a lot of drawings (hand made) that are similar to each other but not actually equal... i want to split them up into several pieces and try to see what the computer's choices in similarity are... i can't really discuss it more than this because it's not a really grown and mature project yet.... but it's definitely not a logo... i had to go the hard way... please see the images attached:

now imagine hundreds of these drawings to be cropped up and chopped into separate files.... what i want is to take advantage of haar's split-up-and-rectangle properties and then take the computer's selection into account... ahahaha man, if only i could talk about it more....

Attachments: hand.jpg (30.43 KB), testeHuman.jpg (34.88 KB)

dust's picture
Re: Face recognition . Paging DUST

i'm uploading a basic image recognition patch i made today with OpenCL. it only takes one training image but may work for your needs. it's not outputting position data yet, but it will be able to recognize different images, shapes, colors, etc... there is a hand haar cascade available at nuigroup.com

bernardo's picture
Re: Face recognition

does anyone know how to make a rectangle around the tracked areas, like in the picture below:

i want to track the several dark spots in an image and make a rectangle around each of their areas

can anyone help?

thanks bern

Attachment: hand.jpg (30.43 KB)

benoitlahoz's picture
Re: Face recognition

Hi Bernardo,

I'm on the same thing. What I'm trying to do is get the minimum and maximum tracked X & Y to build the bounding box.

But I'm using Image PixelS, not OpenCV, with the composition posted by usefuldesign in this post : http://kineme.net/forum/Discussion/DevelopingCompositions/Renderingtextm...

I guess there's a better and simpler way... but...
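
For what it's worth, that min/max idea is only a few lines. Here is a minimal sketch of it, written in Processing for consistency with dust's script further up; the filename and the brightness threshold are assumptions, and the same loop ports almost directly to a QC JavaScript patch fed by an Image PixelS structure.

// Minimal sketch of the min/max bounding-box idea: scan every pixel,
// keep the extremes of the "dark" ones, and draw one rectangle around them.
// The filename and the brightness threshold (80) are assumptions.
PImage img;

void setup() {
  size(320, 240);
  img = loadImage("hand.jpg");   // bernardo's attached drawing, assumed in the data folder
  img.resize(width, height);
  img.loadPixels();
}

void draw() {
  image(img, 0, 0);
  int minX = width, minY = height, maxX = 0, maxY = 0;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      if (brightness(img.pixels[y * width + x]) < 80) {   // "dark" pixel
        minX = min(minX, x);  maxX = max(maxX, x);
        minY = min(minY, y);  maxY = max(maxY, y);
      }
    }
  }
  noFill();
  stroke(255, 0, 0);
  rect(minX, minY, maxX - minX, maxY - minY);
}

This gives one box around everything dark. Getting a separate box for each dark spot needs a blob-labelling pass (see the sketch a few posts down), or a blob-tracking library / OpenCV's contour finding.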

bernardo's picture
Re: Face recognition

yeah yeah, me too, inside a javascript patch, to split the tracked blob into a 2d array.... and then find the min X, max X, min Y, max Y... and build the several rectangles around them, but! and i mean buT!!!! i am not a programmer... throwing clay at the wall to see what sticks...


benoitlahoz's picture
Re: Face recognition

I guess in your photo they're using a haar cascade on a hand, and the Apply Haar Cascade patch outputs... a rectangle structure! I didn't try it though...

bernardo's picture
Re: Face recognition

nope, they were detecting motion... i am about to start creating some haar cascades for my image tracking....

in javascript i don't know how to parse the different bunches of dark spots into a 2d array

see the image attached... say i can make a 2d array in JS that finds the dark spots and processes them into a 2d array, with each one separated onto a new level.

<2d array> pixel... pixel... </2d array>

Attachment: testeHuman.jpg (34.88 KB)
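
Splitting the dark spots so that each one ends up on its own "level" is essentially connected-component labelling (blob detection). Here is a hedged sketch of one way to do it, again in Processing for consistency with the earlier script; the filename, threshold, and minimum blob size are assumptions, and a proper blob-tracking library or OpenCV's contour finding would be more robust.

// Rough sketch: group dark pixels into separate blobs and draw one rectangle
// per blob, using a stack-based flood fill (no recursion).
// Filename, threshold, and minimum blob size are assumptions.
import java.util.ArrayList;

PImage img;
int[] label;          // 0 = unvisited/bright, >0 = blob id
int threshold = 80;   // brightness below this counts as "dark"

void setup() {
  size(320, 240);
  img = loadImage("testeHuman.jpg");   // bernardo's attachment, assumed in the data folder
  img.resize(width, height);
  img.loadPixels();
  label = new int[width * height];

  image(img, 0, 0);
  noFill();
  stroke(255, 0, 0);

  int nextId = 1;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      if (isDark(x, y) && label[y * width + x] == 0) {
        // flood-fill this blob, tracking its bounding box as we go
        int minX = x, maxX = x, minY = y, maxY = y, count = 0;
        ArrayList<int[]> stack = new ArrayList<int[]>();
        stack.add(new int[] { x, y });
        label[y * width + x] = nextId;
        while (stack.size() > 0) {
          int[] p = stack.remove(stack.size() - 1);
          int px = p[0], py = p[1];
          count++;
          minX = min(minX, px);  maxX = max(maxX, px);
          minY = min(minY, py);  maxY = max(maxY, py);
          // visit 4-connected neighbours
          int[][] nbs = { {px+1, py}, {px-1, py}, {px, py+1}, {px, py-1} };
          for (int i = 0; i < nbs.length; i++) {
            int nx = nbs[i][0], ny = nbs[i][1];
            if (nx >= 0 && nx < width && ny >= 0 && ny < height
                && isDark(nx, ny) && label[ny * width + nx] == 0) {
              label[ny * width + nx] = nextId;
              stack.add(new int[] { nx, ny });
            }
          }
        }
        if (count > 20) {                      // ignore tiny specks
          rect(minX, minY, maxX - minX, maxY - minY);
        }
        nextId++;
      }
    }
  }
}

boolean isDark(int x, int y) {
  return brightness(img.pixels[y * width + x]) < threshold;
}

Each blob's min/max x/y gives the rectangle to crop by, and the label array is effectively the 2d array you describe, with a different id for every dark spot.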

benoitlahoz's picture
Re: Face recognition

Ouch... What kind of animal is it ? :-)

dust's picture
Re: Face recognition

here is a hand classifier.

Attachment: handTarget.zip (91.33 KB)