Smells like Memory leak inside nested loops + Blending mode oddness

In this patch there are two nested loops that reconstitute a strip of pixels into a 5x7 bitmap-style grid image.

When I use a Crop patch to take a 1 px square, using the iterator indices for the x-offset calculation, I find it slows the patch down by about 2 fps per second until it hits zero.

I replaced it with the Read Pixel patch inside the same iterators and it runs fine. Am I getting an image leak with the crops, or is maybe an errant input value the cause? Read Pixel uses the exact same calculation.

Also, when I change blending modes on the sprite that makes up the grid, I get full squares joining as I set it to, but in all other modes there is a perimeter line on the sprite knocked out. (Look and see what I mean.) I was thinking of having some subtle edge masking, but not this much, and I can't think why it's there. I'm posting here on account of using Kineme GL, but I don't imagine that .plugin has anything to do with it.

The patch is saved with a note marking where the leak(?) can be switched on and off.

Attachment: CK Rainbow Box 7.qtz (274.45 KB)


cwright
Re: Smells like Memory leak inside nested loops + Blending ...

the "pixel halo" is because you have antialiasing enabled on the sprite -- that generates a bogus 1-pixel border to fake antialiasing.

If there's a leak, you /always/ end up with the exact same result eventually: crash. If you're not crashing, you're not leaking, simple as that.

Read Pixels does not do the exact same calculation -- in one case, you're setting a color (glColor4f(red, green, blue, alpha)); in the other, you're cropping an image, creating a texture out of it, and uploading it to the graphics card for use on a per-sprite basis. The code for that is ridiculously complicated (including Core Image cropping/filtering, CoreImage -> GLTexture creation, setting the image on the context, rendering the sprite, removing the texture from the context, and reclaiming resources). Visually the output may be the same, but the paths to get there are fantastically different.

franz
Re: Smells like Memory leak inside nested loops + Blending ...

maybe you should use Matthias Oostrick's Image Pixels patch outside the iterator, instead of Read Pixels inside the iterator. This will speed things up anyway. check here:

Re: Smells like Memory leak inside nested loops + Blending ...

Thanks for explaining, cwright. To clarify, I didn't mean Read Pixel and Crop do the same calculation; I meant they are being fed the exact same coordinate calculation, in case it might be some kind of improper input value. It didn't occur to me just how differently they end up processing (except for the texture bit, which even I got 8) ), so thanks for sketching that out for me.

Can I conclude that the memory being freed up at the end of each iteration is not as much as is getting taken? It concerned me because I have used nested iterations in the past to do rapid 3D transforming on multiple sprites and got pretty decent frame rates, with certainly no deterioration.

This image was static, so in terms of lazy evaluation perhaps that's out the window inside the iterator, as the index is actually changing and recalculating even though the sprites aren't moving.

Re: Smells like Memory leak inside nested loops + Blending ...

Thanks franz -- I actually sent this link to Alex (who this is for) as I was following your LED image thread. He said he's not a programmer, so I didn't examine it. No programming required!! Well, a bit of JS if desired. Cool.

Re: Smells like Memory leak inside nested loops + Blending ...

Oh thanks for anti-aliasing catch too, nicely taken.

cwright
Re: Smells like Memory leak inside nested loops + Blending ...

The memory being freed is the same amount that's getting taken (no leaks) -- otherwise, you would crash. A leak is when something takes memory, and then never gives it back -- eventually the program runs out of memory and crashes (unless it's a one-time leak, like at program startup or something).

manipulating texture data is very expensive -- CoreImage to GL textures can be costly, and setting a unique texture for every face takes time. Other attributes (vertex data, color data, etc.) aren't as expensive, partly because they're easy to generate and upload (single-function-call type stuff), but also because they're smaller. A single color is 16 bytes of RAM. A single-pixel texture, on the other hand, is 16 bytes for the color, 4 more bytes for the dimensions, a few bytes for the format, a few bytes for the texture identifier, a few bytes for its location in VRAM... and all of that is getting generated on the fly (because CoreImage does texture stuff dynamically), and then all discarded again -- it's the generation/destruction churn that gets really expensive.

Lazy evaluation doesn't work so well in iterators because you're changing values more than once per frame, so it can't simply retain the previous value. If it could retain the previous value, it wouldn't be such a big deal.

I think Apple put some effort into addressing this in Snow Leopard (caching inside iterators), but for now, don't do anything in iterators that you can do elsewhere, esp if it's dealing with textures.

Edit: There's a correlation between leaks and performance, in that ongoing memory leaks will gradually harm performance until the app crashes. However, it's possible (and even common) for something to have terrible performance without actually leaking. In this case, the bad performance is simply due to the sheer amount of work getting done behind the scenes with texture handling, versus the significantly easier amount of work to do color handling.
Re: Smells like Memory leak inside nested loops + Blending ...

Thanks cwright, that's a really good breakdown for me of what happens getting CI patches into GL textures. I guess the only outstanding confusion I have is why the fps deteriorates at about 2 frames per second per second until it hits 0.0 (weighted average), and then climbs again when the patch is unlinked. Is it just clogging up the VRAM to the point of 'gridlock' (excuse the technical jargon)?

If the processing is just slow and expensive, wouldn't it start slow, at 10 fps at the most?