GLSL or Core Image filter for Image Processing

Adrianh's picture

Hi, what would produce the fastest processing throughput for image processing: a GLSL shader or a Core Image filter?

Thanks, Adrian

toneburst's picture
Re: GLSL or Core Image filter for Image Processing

Probably depends on the intended output of the filter. If you're planning to further process the output with other filters etc., a CIFilter patch might be the best way to go, because if you use a GLSL patch, you'll have to use Render In Image to get its output back as an image.

On the other hand, if you're not planning to apply further processing, a GLSL patch might be better, since in that case you'll be able to render directly to the output context.

I'm sure there are other factors that could be taken into account, too, but I defer to someone more knowledgeable on what those might be ;)

a|x

Adrianh's picture
Re: GLSL or Core Image filter for Image Processing

Thanks Toneburst.

I'm not planning to do any more processing after this patch.

Is there a way to set a timebase for such rendering techniques?

I'm passing live video through QC and need the throughput to be as stable as possible.

Ade

psonice's picture
Re: GLSL or Core Image filter for Image Processing

More stuff to consider:

  • CI's kernel language is basically a subset of GLSL (see the sketch after this list). That's generally fine, but there are times when you'll want a feature that's not in that subset. If that's the case, CI forces you into a hacky workaround, if what you want is possible at all.

  • GLSL is mainly polygon-based rather than image-based, which makes basic 2D image processing harder than with CI. Then again, you might want to take advantage of the polygon side.

  • CI has options for JavaScript scripting. That makes it much easier to build a range of filters, mix and match them, iterate them, etc., all in one patch.

  • CI performance is generally pretty good, but I suspect GLSL is faster in many cases, especially on Nvidia hardware.

  • GLSL is the obvious choice if there's any chance you'll want to port it to other platforms in future.

  • GLSL is better if you'll be grabbing shaders off the web - most are in HLSL or GLSL format.
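
For what it's worth, here's a rough idea of what the CI side looks like when you write a kernel by hand outside QC (a sketch only - the class and kernel names are made up, and it uses old-school retain/release). The kernel string is CI kernel language, the GLSL-like subset mentioned above:

// HypotheticalBrightenFilter.m -- example names only, not an Apple-supplied filter.
#import <QuartzCore/QuartzCore.h>

@interface HypotheticalBrightenFilter : CIFilter
{
    CIImage  *inputImage;
    NSNumber *inputAmount;
}
@end

@implementation HypotheticalBrightenFilter

static CIKernel *_brightenKernel = nil;

- (id)init
{
    if (_brightenKernel == nil) {
        // CI kernel language: an image-oriented, restricted subset of GLSL.
        NSString *source =
            @"kernel vec4 brighten(sampler src, float amount)"
            @"{"
            @"    vec4 p = sample(src, samplerCoord(src));"
            @"    p.rgb += amount;"
            @"    return p;"
            @"}";
        _brightenKernel = [[[CIKernel kernelsWithString:source] objectAtIndex:0] retain];
    }
    return [super init];
}

- (CIImage *)outputImage
{
    CISampler *src = [CISampler samplerWithImage:inputImage];
    // -apply: takes the kernel, its arguments, then nil-terminated options.
    return [self apply:_brightenKernel, src, inputAmount,
            kCIApplyOptionDefinition, [src definition], nil];
}

@end

Inside QC, the Core Image Filter patch wraps this kind of kernel for you, which is where the JavaScript scripting mentioned above comes in.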

toneburst's picture
Re: GLSL or Core Image filter for Image Processing

That's what I would have written, had I thought about it... ;)

Excellent summary, psonice.

a|x

Adrianh's picture
Re: GLSL or Core Image filter for Image Processing + hidden prefs

Fantastic description, many thanks.

Is there a way to control, or be more precise about, the rendering rate of the composition? I've noted one can only set a rough max frame rate in the preferences. Is it possible to be more precise programmatically?

And where is the documentation for the hidden preferences? I'm scared to turn stuff on and off without knowing what it does. Are these settings saved per composition?

Ade

cwright's picture
Re: GLSL or Core Image filter for Image Processing + hidden ...

There's no finer-grained control in the QC editor -- it's not intended for super-accurate frame-rate playback; it's an editor for creating compositions.

Outside of the QC Editor, you can set arbitrary frame rates to whatever you'd like programmatically (12.34 fps, for example).
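
For example, with a QCView in a Cocoa app (a minimal sketch; qcView and the composition path are placeholders):

// Hypothetical setup: load a composition into a QCView and cap its rendering rate.
[qcView loadCompositionFromFile:@"/path/to/MyComposition.qtz"];
[qcView setMaxRenderingFrameRate:12.34f];   // any float, not just the preset values
[qcView startRendering];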

There is no documentation for the hidden settings -- Apple generally would prefer that you not tinker with them at all, and as such, they don't document them externally. They are not saved per-composition.

Adrianh's picture
Re: GLSL or Core Image filter for Image Processing + hidden ...

Thanks cwright,

Is it as simple as

[MyQCRenderer renderAtTime:12.34 arguments:arguments];

Is this setting the max frame rate or locking the frame rate?

Ade

cwright's picture
Re: GLSL or Core Image filter for Image Processing + hidden ...

No -- renderAtTime: will, as the name implies, render the composition at the specified time (12.34 seconds, in your example).

QCView has a method to set its desired frame rate. If you're using a QCRenderer, you're in charge, and you have to figure out a way to drive the renderer at the desired frame rate yourself (HINT: NSTimer ;)
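
Something like this, for instance (a rough sketch only -- the renderer setup, run loop and error handling are glossed over, and all the names are placeholders):

// Drive a QCRenderer at roughly 12.34 fps with an NSTimer.
#import <Quartz/Quartz.h>

@interface CompositionDriver : NSObject
{
    QCRenderer     *renderer;
    NSTimer        *timer;
    NSTimeInterval  startTime;
}
- (void)startWithRenderer:(QCRenderer *)aRenderer frameRate:(double)fps;
@end

@implementation CompositionDriver

- (void)startWithRenderer:(QCRenderer *)aRenderer frameRate:(double)fps
{
    renderer  = [aRenderer retain];
    startTime = [NSDate timeIntervalSinceReferenceDate];

    // One timer fire per frame. NSTimer isn't sample-accurate, so we feed
    // renderAtTime: the real elapsed time rather than a fixed increment.
    timer = [[NSTimer scheduledTimerWithTimeInterval:(1.0 / fps)
                                              target:self
                                            selector:@selector(renderFrame:)
                                            userInfo:nil
                                             repeats:YES] retain];
}

- (void)renderFrame:(NSTimer *)aTimer
{
    NSTimeInterval elapsed = [NSDate timeIntervalSinceReferenceDate] - startTime;
    [renderer renderAtTime:elapsed arguments:nil];
}

@end

Passing the actual elapsed time to renderAtTime:, rather than incrementing by a fixed step, keeps the composition's time base honest even when a timer fire comes in late.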

jpld's picture
Re: GLSL or Core Image filter for Image Processing

Assuming I were only targeting current hardware revisions on Snow Leopard, how would OpenCL be characterized in this breakdown? Is it not worth exploring for filtering algorithms that may lend themselves to parallelization?