IO Gesture (Composition by gtoledo3)

Author: gtoledo3
License: (unknown)
Date: 2009.08.27
Compatibility: 10.5
Categories:
Required plugins:
(none)

This is a really simple gesture recognition trick. I was inspired to post this by Dust's post on gesture recognition. (Thanks for the inspiration, Dust!)

This qtz is an extremely simple idea, and not an all-encompassing system for gesture recognition. The setup of the qtz is actually a bit leading, in that it nudges the participant to "do" what they need to do to make it work.

The guiding thought was: "What is the simplest possible setup to make an idea like this work?"

Attachment: IO Gesture.qtz (5.89 KB)

cybero
Re: IO Gesture (Composition by gtoledo3)

This is a very interesting start, GT. I find that on the first click, if it's placed centrally in the viewer it will always draw a 1, and if it's placed at or towards the top right, top left, bottom left or bottom right it will always make a 0, so it's definitely consistent.
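
For what it's worth, that behavior would be consistent with a rule that only looks at how far the first click lands from the center of the viewer. A minimal sketch of that idea in Python (the function name, the normalized coordinates and the 0.35 threshold are my own guesses, not something read out of the .qtz):

    import math

    def classify_first_click(x, y):
        # Classify a click in normalized viewer coordinates (0..1 on each axis).
        # Clicks near the center read as a "1" gesture; clicks out toward the
        # corners read as "0". The 0.35 threshold is a placeholder guess.
        distance_from_center = math.hypot(x - 0.5, y - 0.5)
        return 1 if distance_from_center < 0.35 else 0

    # classify_first_click(0.5, 0.5) -> 1; classify_first_click(0.9, 0.9) -> 0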

Must get into some of this gestural stuff myself.

dust
Re: IO Gesture (Composition by gtoledo3)

glad to be of inspiration to you. i think simple approaches to complex problems are sometimes the best way to go. now combining your phil patch with gestures would be cool.

gtoledo3
Re: IO Gesture (Composition by gtoledo3)

The overriding reason that I've never been that interested in gesture recognition is that I don't perceive it to be solving any real problem.

For instance, if I make the shape of a letter M, and something pops up to represent that, then I may as well be writing the letter M!

To that end, I can imagine a scenario where one would gesture in the air, and that would get recognized. That is a bit novel, because one could outline a pattern without actually touching anything. Again, I don't find it that intriguing, because the only reason I can see for the gesture is for a user to select something, or to communicate. In either instance, there are much more direct routes available! This is why we have things like buttons on ATMs instead of gestures in the air, why we use keyboards, etc.

The guy who thought of the bar code was very wise in finding a practical application for image analysis, and in extrapolating it to some kind of useful outcome. Facial detection technology is an area I've seen you experiment in (though I still haven't looked at the Haar patterns you set up and all of that yet, but I'm really interested in it!), and it has some really practical uses. Along those lines, using image analysis to deduce what people are saying from very far away is probably an area that would prove interesting to many.

This setup had me thinking that it's fairly cool that I can reliably do two different movements, and if I do them in a consistent manner, I can effectively set up an interactive A/B bus. The fact that it's "solid" in that way is a plus to me. Then, the way that not clicking anything gives you the first layer basically gives you a really simple system for triggering three different image chains from a simple gesture.
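
In rough Python terms, the switching logic amounts to something like the sketch below. The actual composition does this with patches rather than code, and the chain names here are placeholders, not anything from the qtz:

    def pick_image_chain(gesture):
        # gesture is None when nothing has been clicked yet (the default layer),
        # otherwise 0 or 1 from the recognizer -- an interactive A/B bus on top
        # of the default. Chain names are placeholders, not patch names.
        if gesture is None:
            return "default_layer"          # no click yet -> 1st image chain
        return "chain_b" if gesture == 1 else "chain_a"

    # pick_image_chain(None) -> "default_layer"
    # pick_image_chain(0)    -> "chain_a"
    # pick_image_chain(1)    -> "chain_b"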

.... Yet.... I could always just have my image sources set up, and select one by typing in a number, using a drop-down menu, etc., and it's even more "solid". My mind stretches to find the killer app where a 2D gesture is useful... I guess video games, maybe? (Yeah, that's the ticket...) Is that how the Wii stuff works? I've only played that a couple of times.