10.8

Pacman Core Image (Composition by gtoledo3)

Author: gtoledo3
License: Creative Commons Attribution-NonCommercial
Date: 2012.11.11
Compatibility: 10.5, 10.6, 10.7, 10.8
Categories:
Required plugins:
(none)

This is a Pac-Man animation, running in Core Image.
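
The composition file itself isn't reproduced here, but a minimal Core Image kernel along these lines might look like the sketch below. The kernel name, the center/radius/mouthAngle parameters, and the colors are my own assumptions, not the composition's actual code.

    /* Hypothetical Pac-Man kernel (Core Image Kernel Language).
       center and radius are in pixels; mouthAngle is the half-angle
       of the mouth wedge, in radians. */
    kernel vec4 pacman(vec2 center, float radius, float mouthAngle)
    {
        vec2 p = destCoord() - center;       // pixel relative to the center
        float d = length(p);                 // distance from the center
        float a = abs(atan(p.y, p.x));       // angle off the +x axis, mirrored
        // 1.0 inside the disc and outside the mouth wedge, else 0.0
        float inside = (1.0 - step(radius, d)) * step(mouthAngle, a);
        return vec4(1.0, 0.85, 0.0, 1.0) * inside;   // yellow on clear
    }

Driving mouthAngle from an LFO or Patch Time input in Quartz Composer would be one way to get the chomping animation.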

core image points (Composition by gtoledo3)

Author: gtoledo3
License: Creative Commons Attribution-NonCommercial
Date: 2012.11.11
Compatibility: 10.5, 10.6, 10.7, 10.8
Categories:
Required plugins:
(none)

This is a translation of Paulo Falcao's "blobs" GLSL shader to Core Image.
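
Falcao's shader isn't reproduced here, but the mechanical part of such a port is fairly regular: gl_FragCoord.xy becomes destCoord(), uniforms become kernel parameters, and gl_FragColor becomes the kernel's return value. The kernel below is a generic metaballs-style stand-in written under those assumptions, not Falcao's actual code.

    /* Generic metaballs-style kernel sketch (Core Image Kernel
       Language), NOT Falcao's code -- just the port pattern.
       b0, b1, b2 are blob centers in pixels; scale sets blob size. */
    kernel vec4 blobs(vec2 b0, vec2 b1, vec2 b2, float scale)
    {
        vec2 p = destCoord();
        // sum of inverse-square falloffs from three centers
        float f = scale / (dot(p - b0, p - b0) + 1.0)
                + scale / (dot(p - b1, p - b1) + 1.0)
                + scale / (dot(p - b2, p - b2) + 1.0);
        // threshold the field into a soft-edged blob silhouette
        float v = smoothstep(0.9, 1.1, f);
        return vec4(v, v, v, 1.0);
    }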

Raytrace Core Image (Composition by gtoledo3)

Author: gtoledo3
License: Creative Commons Attribution-NonCommercial
Date: 2012.11.11
Compatibility: 10.5, 10.6, 10.7, 10.8
Categories:
Required plugins:
(none)

Nothing super fancy, but a raytracer working in Core Image.

I'd been thinking about doing this for a while, out of interest in the way Core Image always has a dedicated image output, without having to render to a screen or texture to do something with the result.

It's a bit of a pain, because some commonly used functions and working styles aren't supported, but it works if you stay within those boundaries. It seems a little slower than GLSL; simpler, more 2D-oriented fragment shaders seem to perform better when ported.
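
As an illustration of staying within those boundaries: the old Core Image kernel language rejects data-dependent loops and branches, so a loop-based raymarcher doesn't translate directly, but a single sphere can be raytraced analytically with no control flow at all. Everything below, including the parameter names, is a hypothetical sketch rather than this composition's code.

    /* Hypothetical single-sphere raytrace (Core Image Kernel Language).
       size = output dimensions in pixels; the sphere has radius 1 and
       sits at (0, 0, sphereZ) in front of a camera at the origin. */
    kernel vec4 raytraceSphere(vec2 size, float sphereZ)
    {
        // camera ray through this pixel, image plane at z = 1
        vec2 uv = (destCoord() / size) * 2.0 - 1.0;
        vec3 ro = vec3(0.0, 0.0, 0.0);              // ray origin
        vec3 rd = normalize(vec3(uv, 1.0));         // ray direction
        vec3 sc = vec3(0.0, 0.0, sphereZ);          // sphere center

        // analytic ray/sphere intersection: |ro + t*rd - sc|^2 = 1
        float b = dot(rd, sc - ro);
        float disc = b * b - dot(sc - ro, sc - ro) + 1.0;
        float hit = step(0.0, disc);                // 1.0 when the ray hits
        float t = b - sqrt(abs(disc));              // nearer intersection

        // Lambert shading against a fixed light, no branches anywhere
        vec3 n = normalize(ro + t * rd - sc);
        float diff = max(dot(n, normalize(vec3(1.0, 1.0, -1.0))), 0.0);
        vec3 col = mix(vec3(0.1, 0.1, 0.1), vec3(diff, diff, diff), hit);
        return vec4(col, 1.0);
    }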

Glass teapot (Composition by voxdeserti)

Author: voxdeserti
License: Public Domain
Date: 2012.11.06
Compatibility: 10.6, 10.7, 10.8
Categories:
Required plugins:
(none)

A glass GLSL shader.
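
voxdeserti's shader itself isn't included in this listing. A typical glass look in GLSL combines refraction with a Fresnel-weighted reflection, roughly as in the sketch below; the envMap cube map, the 1.0/1.5 index-of-refraction ratio, and the Schlick Fresnel term are assumptions, not necessarily what this composition does.

    /* Vertex stage: pass the eye-space normal and view vector along. */
    varying vec3 vNormal;
    varying vec3 vView;

    void main()
    {
        vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
        vNormal = normalize(gl_NormalMatrix * gl_Normal);
        vView = normalize(eyePos.xyz);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    /* Fragment stage: refraction plus Fresnel-weighted reflection. */
    uniform samplerCube envMap;
    varying vec3 vNormal;
    varying vec3 vView;

    void main()
    {
        vec3 n = normalize(vNormal);
        vec3 refracted = refract(vView, n, 1.0 / 1.5);
        vec3 reflected = reflect(vView, n);
        // Schlick's approximation for the Fresnel term
        float fresnel = pow(1.0 - max(dot(-vView, n), 0.0), 5.0);
        vec3 col = mix(textureCube(envMap, refracted).rgb,
                       textureCube(envMap, reflected).rgb,
                       fresnel);
        gl_FragColor = vec4(col, 0.9);   // leave it slightly transparent
    }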

fragDepth (Composition by gtoledo3)

Author: gtoledo3
License: Public Domain
Date: 2012.10.20
Compatibility: 10.5, 10.6, 10.7, 10.8
Categories:
Required plugins:
KinectTools

I was reading an OpenGL forum last night, and it became obvious that many people think you can't mix the results of a scene created with a fragment shader with objects created from vertices (like a Sphere or Cube, in the QC world).

This composition shows how to write to gl_FragDepth, so that your scene can depth-test against other geometry rendered in the scene. Specifically, it uses a Kinect input in the most basic way possible, to make it very obvious how to tweak it to your liking. Note that in the sample image my hand is in front of the teapot, while the rest of me is behind it. This is depth testing with the most basic Kinect output image.

The same principle can be applied to scenes built programmatically in the fragment shader, by taking whatever value represents the depth of the objects and writing it to gl_FragDepth, usually with some number massaging.
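
A minimal fragment shader along these lines might look like the sketch below. The depthMap sampler and the near/far remapping constants are assumptions (the composition's actual inputs may differ), and it assumes the vertex stage forwards gl_TexCoord[0] as usual.

    /* Hypothetical gl_FragDepth sketch: use a depth image (e.g. the
       Kinect depth texture) so a textured quad can depth-test against
       real geometry in the scene. */
    uniform sampler2D depthMap;
    uniform float near;   // window-space depth at sensor value 0.0
    uniform float far;    // window-space depth at sensor value 1.0

    void main()
    {
        // the Kinect depth arrives as a grayscale image; read one channel
        float d = texture2D(depthMap, gl_TexCoord[0].st).r;

        // the "number massaging": remap the 0..1 sensor value into the
        // 0..1 window-space range the depth buffer compares against
        gl_FragDepth = clamp(near + d * (far - near), 0.0, 1.0);

        gl_FragColor = vec4(d, d, d, 1.0);   // also visualize the depth
    }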

(Thanks to cwright for showing me gl_FragDepth some years back.)