what is the best way to do a pixel x pixel search

dust

So I am in a class that is trying to propose a grant, because new media is becoming an emerging cluster in my state (it's obviously an established cluster in many other geographical locations). This is all new to me, because it's a cluster economics class, not the cluster computing class I thought I'd signed up for. That's beside the point: the technology initiative has already given grants for the "iPointer" (http://www.i-spatialtech.com/), which we are working with.

When it was conceived it was an amazing idea, but then the iPhone came along and blew the market away for GPS/spatial programs. The system points at a building and 3D information comes up on your phone; so far all of Denver, Colorado is mapped and modeled. It's a great idea, but you can't point it at a chair and have it say "chair", the way you can with rapid object detection in OpenCV using the Haar cascades. I think OpenCV has now been ported to the ARM chip, though I'm not sure; I haven't played with computer vision since I made that spatter-paint program this summer.
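For reference, a minimal sketch of that kind of rapid object detection, assuming OpenCV's Python bindings (the opencv-python package) and the frontal-face Haar cascade that ships with the library; the input filename is hypothetical:

# Minimal Viola-Jones style detection sketch with OpenCV's Python bindings.
# Assumes opencv-python is installed; uses the frontal-face Haar cascade
# bundled with the library. Any other cascade XML can be swapped in the same way.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("snapshot.png")              # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # cascades run on grayscale

# Scan the image at multiple scales; each hit is an (x, y, w, h) rectangle.
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in hits:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.png", image)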

So the iPointer got me thinking: is there a way to detect whole landmarks from a picture or video source and then pull up a 3D representation of that picture in Quartz Composer?

Is there a way to do a pixel-by-pixel search with a CI filter? I have seen the Image Pixel patch. There used to be an RGB average-mean filter in Leopard, but it's gone in Snow Leopard. That's OK, because I can make my own average-mean calculation later.

What would be the best patch for recording all the pixels in an image, or is this a task for a CI kernel? I am about to try iterating through the Image Pixel patch and recording the RGB info; would that be the best approach?
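For reference, the average-mean calculation I'd be rebuilding is just a per-channel mean over every pixel. A rough sketch outside QC, assuming numpy and Pillow and a hypothetical input file, mirroring what iterating the Image Pixel patch would record:

# Sketch of recording per-pixel RGB values and taking a per-channel mean,
# the same calculation the old average-mean CI filter performed.
# Assumes Pillow and numpy; "snapshot.png" is a hypothetical input.
import numpy as np
from PIL import Image

pixels = np.asarray(Image.open("snapshot.png").convert("RGB"), dtype=np.float64)

# pixels has shape (height, width, 3); averaging over the first two axes
# collapses the image to one mean value per channel.
mean_rgb = pixels.mean(axis=(0, 1))
print("mean R/G/B:", mean_rgb)

# An explicit pixel-by-pixel loop gives the same numbers, just far slower;
# this mirrors querying the Image Pixel patch one coordinate at a time.
total = np.zeros(3)
height, width, _ = pixels.shape
for y in range(height):
    for x in range(width):
        total += pixels[y, x]
print("mean R/G/B (loop):", total / (height * width))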

Here is an example that actually works fairly well, using the histogram to compare. The tolerance, or disparity between the image thresholds, is 0.0001 in this implementation, and it baffles me how accurate it is. Anyway, I thought I would try doing something like the iPointer in QC, dropping the spatial data and just using a camera.
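Roughly, an outside-QC analogue of that histogram comparison would look like this sketch (assuming numpy and Pillow; the filenames are hypothetical, and the 0.0001 tolerance is the value from my implementation):

# Sketch of histogram-based matching: build a normalized luminance histogram
# for each image and accept a match when the disparity falls under a small
# tolerance. Assumes Pillow and numpy; filenames are hypothetical.
import numpy as np
from PIL import Image

def normalized_histogram(path, bins=64):
    gray = np.asarray(Image.open(path).convert("L"))
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    return hist / hist.sum()   # normalize so images of any size compare

reference = normalized_histogram("reference.png")
candidate = normalized_histogram("camera_frame.png")

disparity = np.abs(reference - candidate).mean()
print("match" if disparity < 0.0001 else "no match", disparity)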

So yeah, to rephrase: what's the best way to search pixel by pixel?

Here is the example with the histogram: just take a picture of yourself, put it into the data image splitter, then hit play to see if it recognizes you.

Attachment: detection.qtz (36.61 KB)

usefuldesign.au
Re: what is the best way to do a pixel x pixel search

Not sure what you need to do, Dust, but the Image PixelS plugin may help: http://www.magdatt.nl/software.html

I gather you weren't talking about this patch when you wrote of the Image Pixel patch, i.e. the one with RGB and colourspace outs.

gtoledo3
Re: what is the best way to do a pixel x pixel search

Searching pixel by pixel for some kind of "does image A = image B" test is a very different proposition from the problem you suggest solving. That scenario entails taking the exact same (or very similar) image twice, running an analysis on each, making a structure of pixel values, and then seeing if the structures are identical, perhaps within tolerances.
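A minimal sketch of that first scenario, assuming the pixel values of both images have already been pulled into numpy arrays of the same dimensions:

# Sketch of comparing two pixel-value structures element by element within
# a tolerance. Assumes numpy; the arrays and tolerance are hypothetical.
import numpy as np

def images_match(pixels_a, pixels_b, tolerance=0.01):
    # True only if every corresponding pixel differs by less than the tolerance.
    return np.allclose(pixels_a, pixels_b, atol=tolerance)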

The idea of taking a picture with your iPhone (or a still pic), running an image analysis, and extrapolating where you are... isn't the most direct way to solve the problem of wanting a 3D representation of an image in QC (not sure exactly what problem that solves either).

The steps to do something like that would look a lot more like this:

- Current coordinates are accessed from the in-phone GPS or other global-positioning hardware. You may point the phone at something and aim with the viewer, but that's only to determine what you are pointing at, so that whatever building/terrain is in some kind of surveying library can get pulled.

- Once coordinates are pulled, a database gets queried to see what establishment the camera is actually pointed at... or a menu of a few likely options is sent back to the camera and you choose the correct one. This database makes more sense being external, queried through the net (a rough sketch of such a lookup follows this list).

- If a 3D model of the building you're pointing the camera at exists in the database, it gets loaded.
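A rough sketch of that coordinate-driven lookup, with a hypothetical in-memory table standing in for the external database (a real system would query over the net and also use compass heading to resolve what the camera is aimed at):

# Sketch of the coordinate-driven lookup: given the phone's coordinates,
# find the nearest building in a hypothetical local table and return its
# model file. Entries, filenames, and coordinates are all made up.
import math

BUILDINGS = [
    {"name": "City Hall",   "lat": 39.7392, "lon": -104.9903, "model": "city_hall.dae"},
    {"name": "Union Depot", "lat": 39.7530, "lon": -105.0000, "model": "union_depot.dae"},
]

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in meters.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_building(lat, lon):
    return min(BUILDINGS, key=lambda b: distance_m(lat, lon, b["lat"], b["lon"]))

hit = nearest_building(39.7400, -104.9910)
print("load model:", hit["model"])   # the file the viewer would pull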

I'm just saying that I would make the logic dependent on coordinate data, not image analysis. Image analysis of the phone feed (or camera feed, whatever) would be better used to determine a visually appealing place to put the info pulled from the GPS coordinates.

It seems like a great deal of processing, and an indirect and probably pretty unreliable way of doing something that could be pretty simple with already established methods. Even if you did something like this to analyze aerial maps or something... well, the coordinates are already known. There isn't a need to do that kind of analysis.

Also, note that the iPointer system you're talking about probably works pretty much this way (I'm just now looking at the site to make sure I'm not talking out of my booty)...

"The iPointer doesn't need to use Image Recognition or RFID tags in the buildings to work. Instead iST uses 3D CityMaps in a server side augmented reality system."

For the kind of process you're talking about... it might be more suited to facial detection. Your original idea of using image analysis for something simple like a chair was probably closer to reasonable, but still pretty complex. At least with a face, front on, you have the eye, nose, and mouth blobs, the blinking eyes, the pretty unique spatial relationships of those "blobs" between people, etc...
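A sketch of how those blob relationships might be measured with the same Haar machinery, assuming opencv-python and the face and eye cascades bundled with OpenCV (the input filename is hypothetical):

# Find a face, find the eyes inside it, then measure eye spacing relative
# to face width; a crude stand-in for the blob-relationship idea above.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.cvtColor(cv2.imread("portrait.png"), cv2.COLOR_BGR2GRAY)

for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
    roi = gray[fy:fy + fh, fx:fx + fw]          # search only inside the face
    eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
    if len(eyes) >= 2:
        # Eye centers, sorted left to right.
        centers = sorted((ex + ew / 2, ey + eh / 2) for (ex, ey, ew, eh) in eyes)
        spacing = (centers[1][0] - centers[0][0]) / fw
        print("eye spacing as a fraction of face width:", spacing)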

Don't let me dissuade you! I'm just throwing out the hurdles that come to mind off the top of my head. One thing about semester projects is choosing something of a scale that can reasonably succeed, in that you're already only a step or two away from it. Please take this as good-natured advice... again, I would love to see you be successful in your endeavors with these concepts.

dust
Re: what is the best way to do a pixel x pixel search

@usefuldesign.au: no, I was not thinking of that plugin, but it seems it will do what I want to do in QC.

@gt: you're right, George, the iPointer doesn't use a camera; it's pulling from GIS coordinates. I'm just trying to better understand the device before we meet with Chris Frank; I guess he is on board for the clusters symposium.

I don't know much about the project other than that my mate has to make it work on an iPhone, which I really want to see. But seeing as there is this database of 3D models, I thought it might be cool to try to do something with QC on the multi-touch table: some kind of virtual walkthrough.

I suppose I could just send the coordinates via OSC from the phone to QC. I'm pretty sure iST uses an Oracle DB, so more than likely some sort of Oracle plugin would have to be made. I'm sure my aunt would like it if I made an Oracle plugin (she's Oracle's politician).
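A minimal sketch of the OSC side, assuming the python-osc package on the sending end and a QC composition listening with an OSC receiver; the host, port, and /coords address are all hypothetical and would have to match the receiver:

# Minimal sketch of pushing coordinates from a device to QC over OSC.
# Assumes the python-osc package; host, port (1234), and /coords address
# are hypothetical and must match whatever the QC OSC receiver expects.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.20", 1234)   # hypothetical QC host/port
client.send_message("/coords", [43.6615, -70.2553, 12.0])  # lat, lon, heading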

Whichever way you perceive it, there is lots of work to be done before the device is fully effective globally.