Image to DataStructure Patch

Hi all, I'm currently using the built-in Image Pixels patch to read pixel data from an image. However, this patch is EXTREMELY slow, as it has to be placed inside an iterator (it returns the value of only one pixel at a time). A patch that takes an image input and outputs a float structure in a friendly format (as discussed in the structure thread) would not be overkill. It would actually mimic the Max/MSP Jitter matrix behaviour, which, apart from still being slow in some situations, is an uber-efficient paradigm: interpret everything as an image, then use the image data to drive nodes.

I know this duplicates some already existing functionality, just packaged in a more convenient manner, so this is not high on my request list... Anyway, if you ever get the time...

Please note that I'm filing this request with the recent discussion we had about iterators (and their lack of speed) and spreads in mind. Yes, this data structure would be spread-wise...

See M. Oostrik's Image Pixels patch.

cwright
let me see

So, it would take an image and output a huge 2D structure (the parent is a structure of structures, and the sub-structures are scanlines with color data). Is that what you had in mind? How would you want to handle channel data (RGB, grayscale, YUV, etc.)?
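To make the proposed layout concrete, here is a minimal sketch in C of "structure of structures, sub-structures are scanlines with color data". All type and function names are illustrative assumptions, not a real QC API, and it assumes the RGBA answer to the channel question:

```c
/* Sketch of the layout described above: the parent structure holds one
 * sub-structure per scanline, and each scanline holds per-pixel color
 * data.  Names (ImageStruct, Scanline, Pixel) are assumptions. */
#include <assert.h>

typedef struct { float r, g, b, a; } Pixel;                 /* RGBA entry */
typedef struct { int width;  Pixel *pixels; } Scanline;     /* one row    */
typedef struct { int height; Scanline *rows; } ImageStruct; /* parent     */

/* Look up pixel (x, y) through the nested layout. */
static Pixel image_pixel(const ImageStruct *img, int x, int y) {
    return img->rows[y].pixels[x];
}

/* Build a tiny 2x2 demo image whose red channel encodes x and whose
 * green channel encodes y, so the nesting is easy to verify. */
static void fill_demo(ImageStruct *img, Scanline rows[2], Pixel px[4]) {
    img->height = 2;
    img->rows = rows;
    for (int y = 0; y < 2; y++) {
        rows[y].width = 2;
        rows[y].pixels = &px[y * 2];
        for (int x = 0; x < 2; x++) {
            Pixel p = { (float)x, (float)y, 0.0f, 1.0f };
            rows[y].pixels[x] = p;
        }
    }
}
```

Other channel formats (grayscale, YUV) would just change what the innermost entry holds.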

toneburst

If this structure were also writable, it would allow you to actually draw pixels at particular locations (the opposite of the GLSL/CIKernel paradigm). The only problem is that Quartz Composer JavaScript patches are probably too slow to write stuff into an image structure in any usable way...
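The "draw a pixel at a location" idea boils down to a single write into a flat buffer. A minimal sketch, assuming a row-major RGBA float buffer; `set_pixel` and its offset math are hypothetical, not a QC API:

```c
/* Sketch of the writable-structure idea: drawing a pixel at (x, y) is
 * just a write into a flat row-major RGBA float buffer at offset
 * (y * width + x) * 4.  set_pixel is a hypothetical helper. */
#include <assert.h>
#include <stddef.h>

static void set_pixel(float *buf, int width, int x, int y,
                      float r, float g, float b, float a) {
    float *p = buf + (size_t)(y * width + x) * 4;  /* 4 floats per pixel */
    p[0] = r; p[1] = g; p[2] = b; p[3] = a;
}
```

The write itself is cheap; the cost toneburst worries about is the per-call overhead of driving it from JavaScript, one pixel at a time.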



cwright
that's what I'm thinking

With it as described in my post above, it'll soak up RAM like there's no tomorrow (32x or more RAM usage per image), and it'll cause all kinds of lag (transferring pixels to/from the video card). Nothing is designed to operate on data in this format either, so you'll need JS or a twisty maze of iterators, and you're quite correct in that speed will be... lacking :)

This might be a place for the theorized "spread" port (an actual C array, with no object overhead), but even then it's not quite immune to GPU transactions. Even better would be some way to expose the underlying pixel data directly (that way, no copying/duplication/processing needs to take place), but I'm not sure how this would work in QC-space (plugins already have access to this data pretty easily, so it's not complicated, just... different).
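The "expose the underlying pixel data" option can be sketched as a view type: a pointer plus a row stride, so callers index the existing buffer in place with zero copying. `PixelView` and `pixel_at` are hypothetical names for illustration:

```c
/* Sketch of exposing pixel data directly: no per-pixel objects, no
 * duplication; just a borrowed pointer and enough metadata to index it.
 * The names are illustrative, not a real QC plugin API. */
#include <assert.h>
#include <stddef.h>

typedef struct {
    const float *data;    /* RGBA float pixels, not owned by the view  */
    size_t width, height;
    size_t rowStride;     /* floats per row; may exceed width * 4      */
} PixelView;

/* Return a pointer into the original buffer; no allocation, no copy. */
static const float *pixel_at(const PixelView *v, size_t x, size_t y) {
    return v->data + y * v->rowStride + x * 4;
}
```

The explicit `rowStride` matters because GPU readbacks often pad rows for alignment, so stride can be larger than `width * 4`.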

franz
exposing the underlying pixel data

I was thinking about the future (if ever) spread port. Maybe a regular QC structure would do with downsampled pictures; 100x20 pixels already makes 2000 instances (if interpreted as 3D cubes). However, "à la Max/MSP" was referring to the Jitter matrix (for which you do need special operators). Old-school apps like Pd (+ PDP/GEM) and Jitter deal with 5000+ instances easily, so I'm kind of surprised it is such a hard task under QC (which is built from the, as advertised, latest tech).

see attached patch

Pixels to Squares 000.qtz (10.57 KB)

cwright

It's only difficult because of how QC was designed: I don't think Pierre and his crew thought much about people wanting to move lots of data around that they could access quickly. You've got images, which are a lot of data, but you can't really do much with them other than putting them on billboards, passing them through CI filters, or feeding them into a GLSL shader (sometimes ;).

You've also got structures, which can hold a lot of data, but there's some overhead in accessing it (since they're designed as object-oriented associative arrays, they're not nearly as snappy as plain C arrays, though they're much more flexible in other areas). Spreads are more like C arrays, from what I can gather, and more importantly, patches were designed around them, which doesn't appear to be the case in QC (structures are pretty rare, and there aren't a lot of useful tools that work with them yet).
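The overhead contrast here can be shown with a toy example. A linear key scan stands in for the real structure's hashing and value boxing; the point is only that associative access does work per lookup that an array access doesn't:

```c
/* Toy contrast between the two access styles: an associative lookup
 * has to search for its key, while a plain C array access is a single
 * address computation.  The scan is a stand-in, not QC's actual
 * structure implementation. */
#include <assert.h>
#include <string.h>

typedef struct { const char *key; double value; } Entry;

/* Associative access: compare keys until we find a match. */
static double assoc_get(const Entry *e, int n, const char *key) {
    for (int i = 0; i < n; i++)
        if (strcmp(e[i].key, key) == 0)
            return e[i].value;
    return 0.0;
}

/* C-array access: one indexed load, no search at all. */
static double array_get(const double *a, int i) {
    return a[i];
}
```

Multiply that per-lookup difference by every pixel of every frame and the speed gap cwright describes follows directly.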

Since most patches weren't designed for structures (cubes, for example, as demonstrated by your composition), you have to put them in iterators. Iterators have their own (probably small?) overhead, but the cube rendering patch does a lot of work behind the scenes. OpenGL's a pretty complex state machine, and if you forget to preserve any part of it, things get misrendered later on (GLTools has had a few bugs like this, because I'm an idiot when it comes to OpenGL). Preserving (and changing) state in OpenGL is a surefire way to obliterate performance, even for seemingly simple tasks (like drawing a few hundred or even few thousand cubes).

To illustrate the costs of this a little, we've got Kineme 3D vs. GL Tools. Yanomano, a long time ago, gave me some complex models (an Audi Q7). The total face count for some of the geometry (it comes in like a million different files, so I only loaded a few of the bigger ones) is over 55,000 polygons. Even on my lame video card, I can render this at 60fps without any problems (3.3M polys/sec, not too amazing by today's standards). However, if you were to make an iterator that rendered 1 GL Triangle and set the iteration count to 55,000, you'd probably end up with a useless Mac (I just tried it on my machine, and spent the past 20 minutes waiting for it to stop grinding :). That's the overhead of iterators and GL state swizzling. 3D game engines go out of their way to avoid this sort of thing, for exactly this reason. QC, however, isn't designed to handle this intelligently for you; it was designed more around intelligent texture filtering/handling than intelligent GL state stuff for 3D effects.

yanomano
About Image pixel to data...

Franz, your composition is really interesting, but I think it can be optimized... (by a factor of 4, I think :)

Look at the Image Pixel patch: it analyses a pixel and returns 4 values (so there are 4 operations), one for each RGB component and one for the alpha. But all we need is one value: we don't need color, and we don't need alpha (let's assume it is always 1.0); we just need a luminance value for a given pixel. We can't tell the Image Pixel patch to calculate just one component, but we can preprocess the image that is passed to it.

For example: we have an RGBA image with 4 pixels (2x2). We pass it through a CI filter to get a monochrome image, so we have r=g=b and a=1.0; forget the alpha and take one of the 3 components. For our 4-pixel image, we feed the 4 different zones into 4 sampler inputs of a CI filter (upperLeft, upperRight, bottomLeft, bottomRight). In this CI filter we map uL to r, uR to g, bL to b and bR to alpha, and output an image. This image is then passed through the Image Pixel patch. On the output of the Image Pixel patch we now have 4 pixel luminance values in one cycle (4 times the speed...).
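The channel bookkeeping of that packing trick can be sketched on the CPU. This models only the quadrant-to-channel mapping (the real version does it in a CI filter on the GPU); `pack_image` and its layout are assumptions for illustration:

```c
/* Sketch of the quadrant-packing trick: take a w*h monochrome image
 * and produce a (w/2)*(h/2) RGBA image where each pixel carries four
 * luminance values, one per quadrant.  One Image Pixel read of the
 * packed image then yields four values instead of one. */
#include <assert.h>

typedef struct { float r, g, b, a; } RGBA;

/* lum: w*h luminance values, row-major; out: (w/2)*(h/2) packed pixels.
 * R = upper-left, G = upper-right, B = bottom-left, A = bottom-right. */
static void pack_image(const float *lum, int w, int h, RGBA *out) {
    int hw = w / 2, hh = h / 2;
    for (int y = 0; y < hh; y++)
        for (int x = 0; x < hw; x++) {
            RGBA p = {
                lum[y * w + x],              /* upper-left quadrant   */
                lum[y * w + x + hw],         /* upper-right quadrant  */
                lum[(y + hh) * w + x],       /* bottom-left quadrant  */
                lum[(y + hh) * w + x + hw],  /* bottom-right quadrant */
            };
            out[y * hw + x] = p;
        }
}
```

For yanomano's 2x2 example, the packed image is a single pixel whose four channels are the four luminances, which is exactly where the 4x speedup comes from.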

Let's try it with the same composition !