Core Image Kernels

gtoledo3's picture

I've been reading through all of the Core Image kernel documentation I can find, writing kernels, and have come up with some okay ideas.

Does anyone know of a better resource of example code than the online Apple documentation? Are there any undocumented tricks I am missing here as well?

The other thing I am wondering...

It seems that if I have a chain of image filters, it is sometimes better, performance-wise, to actually write them all together in one kernel. Has anyone who has experimented with this found that to be the case as well?
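For example, here's a toy sketch of what I mean by combining passes (the kernel name and math are made up, just to illustrate fusing two steps into a single render):

```
/* Two trivial passes (gain, then bias) fused into one kernel,
   so Core Image renders one pass instead of two.
   The math here is purely illustrative. */
kernel vec4 gainThenBias(sampler image, float gain, float bias)
{
    vec4 px = sample(image, samplerCoord(image));
    px.rgb = px.rgb * gain;   /* what would be filter #1 */
    px.rgb = px.rgb + bias;   /* what would be filter #2 */
    return px;
}
```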

I would write this to the developer forum, but I am at a pretty low level of competence on this, to be frank, and I am sure that it has been covered in the 4 or 5 years of forum archives. I have definitely done web searches and archive searches on this, and am just wondering if anyone around here has cool resources that may be lurking in less obvious places on the web.

Besides Sam Kass's site, Quartzcompositions, and the developer archive, there isn't a lot out there.

What would be the approach to writing a master kernel with 4 or 5 settings, i.e., a glass distortion, a bump distortion, a box distortion, etc., and attaching the setting options to a node? The idea is that I could have one master kernel and flip between filter styles easily by controlling just one node... is this even possible?
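To make the question concrete, here's a rough sketch of the kind of thing I'm imagining (the distortion math is invented, and I've used mix() rather than if/else since branching is restricted in CIKernel slang):

```
/* One "master" kernel: a mode input blends between two toy
   distortion styles (0.0 = bump-ish, 1.0 = pinch-ish).
   Neither matches Apple's built-in filters; purely illustrative. */
kernel vec4 multiDistort(sampler image, vec2 center, float radius, float mode)
{
    vec2 d = destCoord() - center;
    float r = length(d) / radius;
    float bump  = 1.0 + 0.3 * exp(-r * r);  /* pushes pixels outward */
    float pinch = 1.0 - 0.3 * exp(-r * r);  /* pulls pixels inward  */
    float scale = mix(bump, pinch, clamp(mode, 0.0, 1.0));
    return sample(image, samplerTransform(image, center + d / scale));
}
```

The mode input would then be the one node/port that switches styles.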


toneburst's picture


You should investigate the JavaScript-based 'edit filter function' option of the CIKernel patch. It basically allows you to write JavaScript code that passes an input image through a series of kernel routines, and even applies Core Image filters, all within the one CIFilter patch. You can also write JS code to process controls in various ways before you pass their values into the kernel functions. It's quite useful for eliminating cable spaghetti.
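From memory, the two halves of the patch look something like this — treat it as a sketch rather than gospel, since the kernel and port names are just examples:

```
/* Kernel tab: a trivial gain kernel */
kernel vec4 gainKernel(sampler image, float gain)
{
    return sample(image, samplerCoord(image)) * vec4(vec3(gain), 1.0);
}

/* Filter-function (JavaScript) tab: chain the kernel twice,
   all inside the one patch */
function __image main(__image image, __number gain)
{
    var pass1 = gainKernel.apply(image.definition, null, image, gain);
    return gainKernel.apply(pass1.definition, null, pass1, gain);
}
```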

I'm not sure about your other point, about chains of filters being slower. I try to keep things as simple as possible generally, so I'll go for several simple filters over one fiendishly complicated one. I don't know if this is the best approach performance-wise though.

There isn't much in the way of documentation on CIKernel code, unfortunately, other than the stuff you already know about. It's worth bearing in mind that CIKernel code is basically a subset of GLSL, so a lot of what can be done in a GLSL fragment shader (or an HLSL/Cg pixel shader, for that matter) can also be done in a CIKernel program, with a little tweaking. Having said that, there are some nasty limitations in the CIKernel slang subset of GLSL, so you're probably better off initially working out your own stuff and getting a feel for the language and syntax before trying to tackle a GLSL/HLSL conversion job.

Hope this helps.


gtoledo3's picture
I am actually going to

I am actually going to finally start on the JavaScript-in-Core-Image suggestion today, so I'll let you know how that goes. Since I have been getting a handle on the JavaScript, I feel like I shouldn't have waited this long to start using it, because it has been simple.

It seems like you posted a working example/implementation of this that I can't find, and I would appreciate it if you could point me to it if you get a chance.

FilipeQ's picture
pixel coordinates??

I'm trying to write a patch (in kernel language) to merge two pictures. As a first step, I will try to merge them this way:

  • Each image has two black dots, and based on their coordinates I will apply a samplerTransform function. But I'm having problems getting the coordinates of the black dots; I've tried to scan the whole image using the sample(image, xy) function (and then return the coordinates), but the system crashes...

Does anyone know any trick/approach to get this done?


cwright's picture

CoreImage isn't designed to return sample positions. The kernel is executed for every pixel of the image. The return value is thus the intended pixel color at that pixel's location.
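To make that concrete, the simplest possible kernel is just the identity — it runs once per output pixel, and the vec4 it returns is that pixel:

```
/* Runs once for every output pixel; the returned vec4 IS the
   color at the current location. There's no way to write to
   some other location, which is why "find the black dots and
   report their coordinates" doesn't fit the model. */
kernel vec4 passthrough(sampler image)
{
    return sample(image, samplerCoord(image));
}
```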

There's not really a fast way to do this kind of image processing (This kind of processing is very different, internally, from per-pixel processing, which is what CoreImage does rather well)...

FilipeQ's picture
I was not trying to return

I was not trying to return the sample positions outside the patch; I was trying to write a function inside the patch to be used only by the patch... Anyway, you are right, the kernel language is not the best approach to solve this problem.

Thanks for the attention.

toneburst's picture
The way CIKernel code works

The way CIKernel code works takes a bit of getting used to, actually. It's more or less the opposite of the way you would intuitively think about drawing something. It's quite a powerful technique, though. It's also a good route into writing more complex shaders in other languages like GLSL (of which CIKernel slang is a subset), HLSL, and Cg.


FilipeQ's picture
Anyone know any good

Does anyone know a good approach/technique to solve the problem I mentioned above? Thanks.

psonice's picture
No, but I have a similar problem...

I'm not sure that there's a good way to do it, really, and I have a similar problem to solve. In my case, I need to 'trim' an image, i.e. remove the empty top, bottom, and sides around an image and return just the centre. Any suggestions?

It's not critical.. I can estimate the size roughly enough to do the job, but the processing I do to the cropped image is pretty heavy and the app depends on both high speed and high quality, so missing pixels = bad and extra pixels = also bad :/

toneburst's picture

You should be able to do it with a little bit of maths and the Image Dimensions and Crop patches. If the image is, say, 200px wide and you want it to be 100px wide, you'd set the Crop Width to 100px and the Crop X to 50px (the difference between actual and intended width, divided by 2).
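As a general formula (variable names are just illustrative):

```
cropX = (imageWidth  - cropWidth)  / 2
cropY = (imageHeight - cropHeight) / 2
```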