10.6

Mac OS X 10.6 (Snow Leopard)

step sequencer (Composition by dust)

Author: dust
License: MIT
Date: 2010.11.14
Compatibility: 10.5, 10.6
Categories:
Required plugins:
(none)

Here is a simple step sequencer / trigger that takes a variable BPM input.

It produces the same results as coges' step trigger/seq patch, although it's simplified and derived from different math.

Basically, 15/BPM gives you your 16th-note duration in seconds, which is used to delay a signal counter.
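
For example, at 120 BPM a quarter note lasts 60/120 = 0.5 seconds, so each 16th-note step is 0.5/4 = 0.125 seconds, which is exactly 15/120.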

This works beautifully in unity, as I am able to sync up lots of tracks.

QC hiccups a bit if you're messing with the editor while it's running.

It's probably best to use something like this for visuals rather than music.

Release: KinectTools, v0.1

Release Type: Beta
Version: 0.1
Release Notes

Initial release, based on libfreenect.

Known Issues

  • Only works in 64-bit mode.
  • Not thoroughly tested.

GLSL-Vignette (Composition by gtoledo3)

Author: gtoledo3
License: (unknown)
Date: 2010.11.14
Compatibility: 10.4, 10.5, 10.6
Categories:
Required plugins:
(none)

This is a GLSL shader that creates a vignette effect.

It takes advantage of the GLSL smoothstep function. It's adapted from the example here: http://www.geeks3d.com/20091020/shader-library-lens-circle-post-processi...
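
As a rough illustration of the technique (not the composition's actual shader), a vignette along these lines takes smoothstep of the distance from the image center and uses it to darken the edges. The input names and radius values below are placeholder assumptions:

    // Vignette sketch: darken toward the edges using smoothstep on the
    // distance from the image center. Radius values are illustrative.
    uniform sampler2D image;
    uniform float innerRadius;   // e.g. 0.3, where darkening begins
    uniform float outerRadius;   // e.g. 0.75, where the frame is fully dark

    void main()
    {
        vec2 uv = gl_TexCoord[0].xy;
        vec4 color = texture2D(image, uv);

        float dist = distance(uv, vec2(0.5, 0.5));

        // smoothstep gives a soft 1 -> 0 falloff between the two radii
        float vignette = 1.0 - smoothstep(innerRadius, outerRadius, dist);

        gl_FragColor = vec4(color.rgb * vignette, color.a);
    }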

Combining frames / long exposure / blending frames (Composition by psonice)

Author: psonice
License: (unknown)
Date: 2010.10.25
Compatibility: 10.6
Categories:
Required plugins:
(none)

Not too sure of the correct name for this. All it does is blend all frames from a source over a certain amount of time - like a long exposure photograph basically.

Potential uses:

  • Making a long-exposure photo
  • Noise reduction in photography (part of what I made this for originally)
  • Timelapse photography (what I use it for now - it makes much better timelapse, no jerkiness)
  • Video effects, with a shorter frame length
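
For what it's worth, one common way to build this kind of accumulation in QC is a feedback loop (e.g. a Render in Image whose output feeds back into itself) with a blend shader. Here is a minimal GLSL sketch of the blend step, assuming the accumulated image, the current frame and a running frame count are supplied as inputs - these names and the setup are assumptions, not necessarily how this composition is actually built:

    // Running average: each new frame contributes 1/frameCount, so after
    // N frames the output is the plain average of all N frames,
    // i.e. a long-exposure look.
    uniform sampler2D accum;     // frames blended so far
    uniform sampler2D frame;     // newest frame
    uniform float frameCount;    // frames accumulated, including this one

    void main()
    {
        vec2 uv = gl_TexCoord[0].xy;
        vec4 accumColor = texture2D(accum, uv);
        vec4 frameColor = texture2D(frame, uv);

        gl_FragColor = mix(accumColor, frameColor, 1.0 / max(frameCount, 1.0));
    }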

Image Recognition (Composition by dust)

Author: dust
License: MIT
Date: 2010.10.24
Compatibility: 10.6
Categories:
Required plugins:
(none)

Here is a basic image recognition kernel made in OpenCL. It is an unfiltered type of recognition, meaning all RGBA values are checked pixel by pixel and the distance between the training data and the live input data is calculated. The mean of the overall distance is calculated and normalized to a 0-1 range, zero being a perfect match.

You can record a short sample of yourself giving the peace sign or the middle finger (it's arbitrary), then try to match it. Think sign-language-to-speech or something like that as a possible application.

You can try various threshold settings for looser image recognition. Below 0.05 seems to be a decent all-around setting; that means pixels are counted as matching when they are within about 0.05 of each other in distance.

What's going on is that I'm going pixel by pixel and calculating the distance between the training image and the live image. If the distance is within the threshold, I keep the distance; if not, I set the distance to 10, ultimately normalizing the average down to 0-1.
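
The OpenCL kernel itself isn't reproduced here, but the per-pixel step described above can be sketched roughly like this, written as a GLSL-style fragment shader rather than OpenCL purely for illustration. The input names are assumptions, and averaging the result over all pixels to get the final 0-1 score would be a separate step:

    // Per-pixel distance between the training image and the live image.
    // Distances within the threshold are kept; everything else gets the
    // penalty value 10, matching the description above.
    // (Assumes a float render target, so values above 1.0 are not clamped.)
    uniform sampler2D trainImage;
    uniform sampler2D liveImage;
    uniform float threshold;   // e.g. 0.05

    void main()
    {
        vec2 uv = gl_TexCoord[0].xy;
        vec4 trainPix = texture2D(trainImage, uv);
        vec4 livePix  = texture2D(liveImage, uv);

        // Euclidean distance across all four RGBA channels
        float d = distance(trainPix, livePix);

        float result = (d < threshold) ? d : 10.0;

        // Averaging this value over the whole image (to get the match
        // score, 0 = perfect) happens outside this shader.
        gl_FragColor = vec4(vec3(result), 1.0);
    }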

You need to record a training image; a 3-second sample seems to work well for me. Also, optimizations like inputting a vectorized image or something may help speed things up. As it is, with the settings here, I'm mostly getting an FPS reading of n/a.

Feel free to record some samples or try different video feeds. This will work as gesture recognition as well. I will add the ability to save down various training sets, so this could be used for handwriting recognition or whatever.