Frame blender for better slow-mo video

http://www.deutvalgte.no/ requested this in May 2007 --- http://fdiv.net/2006/09/22/more-new-quartz-composer-patches/ .

"GridPro had it working beautifully. Any Ideas?"

tobyspark's picture
A de-luxe "image with movie" node

i would see this as a de-luxe image-with-movie node, with controls such as you'd find in a vj app to allow full temporal control. a perk of which would be that if the patch's framerate exceeds the playing framerate, it starts cross-fading between the frames.
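A minimal sketch of the frame-index arithmetic such a node would need (a hypothetical helper, just illustrating the idea): given a playback time and the movie's native frame rate, pick the two neighbouring frames and the cross-fade weight between them.

```python
def blend_weights(play_time, movie_fps):
    """Map a playback time (seconds) to the two neighbouring movie
    frames and the cross-fade weight between them. When playback is
    slower than the movie's native rate, t lands between frames."""
    exact = play_time * movie_fps          # fractional frame index
    lo = int(exact)                        # earlier frame
    hi = lo + 1                            # later frame
    t = exact - lo                         # 0.0 -> all lo, 1.0 -> all hi
    return lo, hi, t
```

At half speed, for instance, every other rendered frame falls exactly between two movie frames (t = 0.5), which is where the cross-fade kicks in.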

toby

cwright's picture
motion vector cross fades

Just curious, I've been wondering about this idea for a few years.

Normally, crossfading is just alpha blending. That works for basically any data type out there (colors, images, numbers...). I'm wondering if there's a cooler way of doing this.
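Plain alpha-blend crossfading, as described, is just a per-element linear interpolation; a minimal numpy sketch:

```python
import numpy as np

def crossfade(frame_a, frame_b, t):
    """Alpha-blend two frames: t = 0 gives frame_a, t = 1 gives frame_b.
    Works elementwise on any array-like data (colors, images, numbers)."""
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)
    return (1.0 - t) * a + t * b
```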

I'm curious if you've heard of or seen anyone using motion vectors to blend between video frames. For example, when you encode a video with MPEG-4, H.264, or (I'd guess) any modern codec, it compresses using motion compensation, where it detects how blocks move between frames. If we could get at those motion vectors, we could apply them, scaled, to (hopefully) produce a more accurate interpolation of in-between frames.
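As a toy illustration of that idea (not any codec's actual data or algorithm), per-block motion vectors scaled by the in-between time could shift blocks toward their destinations:

```python
import numpy as np

def interpolate_blocks(frame, vectors, t, block=8):
    """Toy motion-compensated interpolation: shift each block of
    `frame` along its motion vector, scaled by t in [0, 1].
    `vectors` holds one (dy, dx) per block. Purely illustrative --
    real interpolators handle occlusion, overlap, and sub-pixel
    motion, which this ignores."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = vectors[by // block, bx // block]
            y = by + int(round(dy * t))
            x = bx + int(round(dx * t))
            y = max(0, min(h - block, y))      # clamp to the image
            x = max(0, min(w - block, x))
            out[y:y + block, x:x + block] = frame[by:by + block, bx:bx + block]
    return out
```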

The closest thing I've seen is After Effects' "Pixel Motion" blending in Timewarp, documented here: http://livedocs.adobe.com/en_US/AfterEffects/8.0/help.html?content=WS641...

Is that basically the same thing? Am I completely insane?

tobyspark's picture
optical flow

retiming using optical flow has been around for a while (the twixtor plug-in was the first i knew of it), but it is very computationally heavy. there isn't a shortcut to be gained by cheating the info out of a temporally compressed movie's codec, as for vj use that's exactly the kind of encoding to avoid: you want every frame a key frame, and the codec with the least cpu hit to decompress a frame.
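The computational weight comes from the search: even the naivest block-matching flow does an exhaustive match per block. A sketch of that brute-force approach, assuming grayscale numpy frames (illustrative only, not how twixtor or any shipping product works):

```python
import numpy as np

def block_flow(prev, curr, block=8, search=4):
    """Naive block-matching optical flow: for each block of `prev`,
    exhaustively scan a +/-`search` pixel window in `curr` for the
    best sum-of-absolute-differences match. The (2*search + 1)^2
    comparisons per block are why brute-force flow is so expensive."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cost = np.abs(curr[y:y + block, x:x + block] - ref).sum()
                        if cost < best:
                            best, best_v = cost, (dy, dx)
            flow[by // block, bx // block] = best_v
    return flow
```

The GPU approaches mentioned below work because exactly this kind of per-block, data-parallel search maps well onto fragment shaders.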

so yes possible, not practical from what i know.

toby

franz's picture
times have changed

pete warden wrote this in 2k5: http://petewarden.com/notes/archives/2005/05/gpu_optical_flo.html

and a thesis (though it's in german): http://www.fhbb.ch/informatik/bvmm/Projekte/0506%20Image%20Processing%20...

and of course, jeff han (the multitouch guy) did optical flow on the GPU as well.

I truly believe that real-time optical flow can be achieved through GPU computation, as can feature tracking and ego-motion. Any GLSL experts around here?

cwright's picture
Great References

That's some wonderful reading, thanks! :) I figured it'd be possible on the GPU, so I'm glad someone has already blazed that trail.

Looks pretty intense, not sure when I'll get around to whipping one up :/