Core Image Questions

idlefon

So I've been messing with CI filters for some time and bumped into these questions/problems:

1- Isn't __color (defined both in the JS editor and in the kernel itself) a vec4? I'm asking because I tested a simple dot function and the results were different. I've attached the composition.

2- What is the best way to smooth the output, especially in filters that compare a specific pixel with a bunch of others, such as its neighbors? I tried pre-blurring the image and it worked (the output doesn't give the impression of being "noisy"), but I was wondering if something could be done inside the kernel. "smoothstep", maybe?

3- What are the first two arguments of the .apply call in the filter's JS? I think the first one is the DOD (thanks to http://machinesdontcare.wordpress.com/2007/12/15/core-image-filters/ ), but I don't know about the second one, which is usually "null". Is there a reference anywhere for the JS in CI?

4- Is there any resource for kernel examples? I would love to see the kernels of Apple's own CI filters (or any other "private" CI filter).

I know, I know; Long and possibly silly questions :D
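(For anyone reading along: the smoothstep mentioned in question 2 is GLSL's clamped Hermite ease curve. A minimal Python model of what it computes, just as an illustration:)

```python
def smoothstep(edge0, edge1, x):
    # GLSL-style smoothstep: clamp t to [0, 1], then apply the
    # Hermite polynomial 3t^2 - 2t^3.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

# Eases smoothly from 0 to 1 with zero slope at both ends,
# e.g. smoothstep(0.0, 1.0, 0.5) -> 0.5
```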

Attachment: __color versus vec4.qtz (9.47 KB)


gtoledo3
Re: Core Image Questions

1- A color can be interpreted as a vec4 because it has four lanes: RGBA. The only way I know of getting those values is the Color to RGBA patch. (Haven't looked at the comp yet.) If you feed a single number into a color port, the value gets applied to every lane, so you see the color go from alpha to white.

2- http://homepages.inf.ed.ac.uk/rbf/HIPR2/index.htm ... is a pretty good page for explaining the pluses and minuses of different smoothing techniques. It may not go into trilateral filtering, which is probably worth at least being aware of. A Gaussian pays no mind to edges; it gives an effect even when pixel values are very disparate, so it's not really ideal for dithering.
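To make the edge point concrete, here's a toy 1-D comparison of a plain Gaussian against a bilateral-style weighting around one pixel (my own illustration, assuming numpy; not code from that page):

```python
import numpy as np

def gaussian_weights(offsets, sigma_s):
    # Purely spatial: weights depend only on distance from the centre tap,
    # never on the pixel values themselves.
    w = np.exp(-(offsets ** 2) / (2.0 * sigma_s ** 2))
    return w / w.sum()

def bilateral_weights(offsets, values, center, sigma_s, sigma_r):
    # Spatial weight times a range weight that collapses when a neighbour's
    # value is far from the centre pixel -- this is what preserves edges.
    spatial = np.exp(-(offsets ** 2) / (2.0 * sigma_s ** 2))
    rng = np.exp(-((values - center) ** 2) / (2.0 * sigma_r ** 2))
    w = spatial * rng
    return w / w.sum()

# A hard edge: dark pixels on the left, bright ones on the right.
offsets = np.arange(-2, 3)                     # 5-tap neighbourhood
values = np.array([0.1, 0.1, 0.1, 0.9, 0.9])
center = values[2]

blur_g = float((gaussian_weights(offsets, 1.0) * values).sum())
blur_b = float((bilateral_weights(offsets, values, center, 1.0, 0.1) * values).sum())
# blur_g is pulled well above 0.1 (the edge is smeared across);
# blur_b stays close to 0.1 (the edge survives).
```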

3- I'm not really sure what you mean here. Region of Interest, Domain of Definition... but something about the semantics of the question makes me unsure whether I'm answering you correctly. I'm not sure what you mean by "the second one which is 'null' usually". (Maybe it's a statement that says "if null, do this".)

4- There are a ton of pages on image processing in general. Much of that can be applied directly to GLSL or CI, though maybe not without some work on getting the syntax correct.

There's a decent amount of CI examples floating around, and there are some OK examples in the 10.5 developer folder (if you have that... it's probably available on Apple's site?). I forget where the CI source stuff is in the OS. If you find out, post back... I don't think it's necessarily written in CI, though; likely a bunch of it is GLSL and CL.

idlefon
Re: Core Image Questions

Thanks George for the reply! I somehow knew YOU were the one to reply first :D

1- If you look at my example, I used the Color to RGBA patch to convert the color to a vec4, but the thing is that this conversion happens outside CI. What I meant was that the CI filter cannot interpret a __color input as a proper vec4 via color.r/color.g/color.b/color.a. My question was whether there is a workaround for this.

2- That's a great site; I'll have to study it carefully. Cheers for sharing the link.

3- Sorry, I explained it unclearly. Here it goes. The simplest JS editor program for a CI filter is this:

function __image main(__image image) {
    return multiplyEffect.apply(image.definition, null, image);
}

The first argument, as far as I know, is the Domain of Definition, which gives the dimensions of our output. The second argument is the one I'd like to know about.

I'd also like to know: when one applies a filter with a limited DOD (say (0, 0, 100, 100) for an input image with dimensions (0, 0, 800, 900)), is the kernel executed on only part of the image, or is it applied completely and then cropped to the DOD? (I'm asking for efficiency's sake.)

4- The only place in the dev folder I found kernel examples was /Developer/Examples/Quartz Composer/Compositions/Core Image Filter (I'm on SL). I'll post any good links I find on the net.

thanks again George :))

idlefon
Re: Core Image Questions

Even with the multiply command, the result for the __color and the vec4 color is slightly different.

Is this a bug or what?

Attachment: Color vs. Vec4.qtz (4.52 KB)

usefuldesign.au
Re: Core Image Questions

Quote:
3- I'd also like to know: when one applies a filter with a limited DOD (say (0, 0, 100, 100) for an input image with dimensions (0, 0, 800, 900)), is the kernel executed on only part of the image, or is it applied completely and then cropped to the DOD? (I'm asking for efficiency's sake.)

The most an image-unit chain will calculate is the DOD; that's the whole point of it, as far as I understand it ;-). Yet even less than the DOD may end up being calculated, because the whole image-unit chain (outside QC) and QC itself operate under lazy evaluation. So if filters further down the chain only require one pixel of your CI Filter, that's all that gets executed. In theory; that's how I read it. I'm not very experienced with CI Filters yet and haven't used the DOD function so far.

Read the docs: Region of Interest, and on the Creating Custom Filters page, click the Supplying an ROI Function link in the contents table at the top of the page.
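The lazy-evaluation idea can be sketched like this (a toy Python model of the concept, not Core Image's actual machinery):

```python
# Model an "image" as a function from pixel coordinates to a value,
# so no pixels are computed until a region is actually rendered.

def source(x, y):
    # Stand-in for an 800x900 input image.
    return (x + y) % 256

def invert(image):
    # Building the filter does no pixel work; it just returns
    # a new lazy image that will invert on demand.
    return lambda x, y: 255 - image(x, y)

def render(image, rect):
    # Only here do the "kernels" actually execute, and only
    # inside rect -- the requested DOD.
    x0, y0, w, h = rect
    return [[image(x, y) for x in range(x0, x0 + w)]
            for y in range(y0, y0 + h)]

chain = invert(invert(source))          # chain built; zero pixels touched
tile = render(chain, (0, 0, 100, 100))  # kernels run for 100x100 pixels only
```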

usefuldesign.au
Re: Core Image Questions

idlefon wrote:
1- If you look at my example, I used the Color to RGBA patch to convert the color to a vec4, but the thing is that this conversion happens outside CI. What I meant was that the CI filter cannot interpret a __color input as a proper vec4 via color.r/color.g/color.b/color.a. My question was whether there is a workaround for this.
I'm probably not the best person to explain premultiplied and unpremultiplied alpha. I get what they do; it's pretty simple: multiplying the alpha component into the colour channels. From the docs:

Quote:
Premultiplied alpha is a term used to describe a source color, the components of which have already been multiplied by an alpha value. Premultiplying speeds up the rendering of an image by eliminating the need to perform a multiplication operation for each color component. For example, in an RGB color space, rendering an image with premultiplied alpha eliminates three multiplication operations (red times alpha, green times alpha, and blue times alpha) for each pixel in the image.

Filter creators must supply Core Image with color components that are premultiplied by the alpha value. Otherwise, the filter behaves as if the alpha value for a color component is 1.0. Making sure color components are premultiplied is important for filters that manipulate color.

By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See Quartz 2D Programming Guide for information on creating and using CGColorspace objects.)

With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video source such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.

I've premultiplied your vec4 color component by component in this comp, and the result is very similar to using __color, because CI Filters assume premultiplied values. The slight difference is probably a colour-space issue. While I understand colour-space conversions quite well in theory, and in practice in print work, I've never tried to understand how QC implements them. Judging from some of cwright's comments, I'm guessing in quite annoying ways.

Attachment: __color versus vec4 _ versus premultiplied.qtz (208.99 KB)
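The arithmetic the quoted docs describe is simple; here's a sketch (with made-up sample values of my own) of why handing CI an unpremultiplied vec4 where it expects premultiplied components gives a visibly different colour:

```python
# Premultiplied alpha: each colour channel is scaled by alpha up front.
def premultiply(r, g, b, a):
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    # Inverse operation; undefined for a == 0, so guard it.
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

straight = (0.8, 0.4, 0.2, 0.5)   # what raw vec4 ports would carry
premul = premultiply(*straight)   # what CI assumes it is being handed
# premul == (0.4, 0.2, 0.1, 0.5): every colour channel scaled by alpha,
# so the straight values read as a brighter colour than intended.

roundtrip = unpremultiply(*premul)  # recovers the straight colour (a != 0)
```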

usefuldesign.au
Re: Core Image Questions

In this case premultiplying has no effect. Hmm. Same slight difference as when I premultiplied your previous comp. I can only guess this is colour-space related. I suggest you ask on the list; cwright is very good at fielding these kinds of enquiries.

usefuldesign.au
Re: Core Image Questions

Quote:
4- Is there any resource for kernel examples? I would love to see the kernels of Apple's own CI filters (or any other "private" CI filter).

Yes, explore the Apple docs I already linked to; there's a lot of material, including sample code for a few simple-ish CI Filters. There may also be some private sample code; they certainly explain how to package CI Filters as a Core Image Unit for use in Cocoa development.

usefuldesign.au
Re: Core Image Questions

More on premultiplied pixels: Tom Forsyth's tech blog, which was referred to in this recent OpenGL Apple list discussion.

idlefon
Re: Core Image Questions

Thanks for clearing this up. I checked the link, and lazy evaluation was what I was looking for :D

So I guess the method Toneburst explained here is "the most efficient" way to apply a filter to a limited area:

http://machinesdontcare.wordpress.com/2010/03/09/apply-core-image-kernel...

idlefon
Re: Core Image Questions

Thanks again, usefuldesign!

Actually, unpremultiplying the __color was what needed to be done in this case, I guess. Nevertheless, it's basically the same thing.

Cheers again for clearing these things for me!

idlefon
Re: Core Image Questions

It may be a very stupid question, but since I have absolutely ZERO experience with the Xcode environment, I'm gonna ask it:

Is it possible to unpack the QC plugins (since they are packed into a plugin in Xcode) and then look for the kernels?

You're a lifesaver, my friend.

gtoledo3
Re: Core Image Questions

Hmm, I was doing some testing... the problem actually seems to be in the process of converting rgb->color and color->rgb.

I wrote some kernels from scratch and was getting slightly darker results out of one. Then I noticed that the red channel value was actually .6188; after passing through the RGB Color patch, it came out as .62. So it appears the RGB Color patch is truncating bit depth.
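One plausible mechanism for that truncation (an assumption about the patch's internals, not something verified against QC): squeezing the channel through an 8-bit intermediate loses roughly this much precision:

```python
# Hypothetical round trip of a float channel through an 8-bit byte.
value = 0.6188
byte = round(value * 255)   # quantise: 158
back = byte / 255           # ~0.6196, which displays rounded as 0.62
```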

idlefon
Re: Core Image Questions

I re-checked, George. Even if they are completely identical (no difference caused by rounding the channel's value), the difference in brightness is still obvious!

usefuldesign.au
Re: Core Image Questions

You can't unpack binary files. Sometimes a HUGE amount of reverse engineering can recreate source from binaries; that's all I know about it, and I could be wrong in some special cases. It's a bit like (but not really) asking whether you can take a heavily compressed 10 KB JPEG and 'unpack' the original 2 MB TIFF it was created from. No, in case anybody is wondering, no matter how good you are at fractals ;-)

As for the CI Filters that ship with QC, I don't think Apple has made the source code available. At least, nobody on this site seems to think it exists, going on past discussions.

usefuldesign.au
Re: Core Image Questions

Yeah, exactly the same result. Six of one, half a dozen of the other :-)