Vusic - Experiments in live 3D

buano's picture

Since Steve and Chris released Particle Tools 0.3 yesterday, we thought we'd post about the first stage of a new project we've been working on, showcasing our 'proof of concept' of real-time 3D for live musical performance.

We used a bunch of Kineme plugs (Particle tools, Kineme 3D, GL tools, Audio tools, Structure tools, Spookies, Value historian and Quartz Crystal). Big thanks to Kineme for adding the boids to particle tools for us :)

Click the link to check out the video on our site http://www.yuva.tv/index.php

Attachment: Clairdelune.jpg (59.7 KB)

gtoledo3's picture
That is really nice work!

That is really nice work!

Scratchpole's picture
Bueno

Splendid, excellent, quick work. Were you up all night?

SteveElbows's picture
Lovely stuff, looks great

Lovely stuff, looks great. I am not very good at creating stuff that looks beautiful, or at finishing stuff, but I've been experimenting with different sorts of visual sync to music, and I think there is a lot of potential in that area.

Have you looked at VJ tools like VDMX to use in conjunction with QC for realtime 3D at all? I don't know how the VDMX audio analysis compares to the stuff in QC; I've been meaning to look into that. At the moment I am using MIDI to drive Kineme3D stuff rather than audio analysis.

SteveElbows's picture
oops

Oops, I double-posted.

Do you have any particular plans for taking your concept further?

noonanon's picture
Up a few nights :)

Hey guys, thanks for your comments. There are lots of things we wish to do with this at the mo, but really the main point was to see how far we can push Quartz beyond self-contained experiments, and to see how close we actually are to putting as many of these proven concepts together into one piece.

I think we got close to achieving what I set out to do, but it has also made very clear the very real limitations of the platform in its current form, limitations that everybody seems to be bumping up against on a daily basis. Sure, there are lots of workarounds for all sorts of problems, but that just makes it even more difficult to combine different effects. That's why I can't wait for Snow Leopard, in the vain hope that Apple will give us some sort of GLSL multithreading.

At the moment this Quartz file runs at about 5-7 fps on my MacBook Pro, but I am sure I can optimise the various 3D elements more to help it along. In VDMX I get a very similar fps result now that it's fully Intel-based, but I will probably break it apart into three or four scenes to allow me to mix the elements together and keep it running at a respectable pace.

The one thing that I was looking to achieve and haven't with this version is getting more audio responsiveness out of the 3D animations; some of these at the moment just utilise Kineme3D deformers, and others are hard-animated in C4D.

What I was initially thinking of was triggering different movement sequences, almost like in a video game, but as of yet I haven't really found a suitable solution for this. The closest thing so far has been the 8-way 3D mesh blender, which might have come quite close to allowing different combinations of poses to be triggered, but unfortunately it doesn't seem to respect an object's centred position; it just tends to find the physical centre of the mesh. So when trying to animate something like a fish, instead of its tail moving left and right, the whole body offsets in that direction.

cwright's picture
no sense

GLSL multithreading doesn't mean anything. It runs on the GPU, which is a wildly parallel processor, but unless you have 2 GPUs rendering to different contexts, multithreading isn't possible at a fundamental level on the hardware. Snow Leopard can't fix that.

To improve GL performance, there are several things to test against to find the bottleneck:

  • Resize the window to make it small. Does that help? If so, it's fill-rate limited; try doing less in the fragment shader (see the sketch after this list), or do less filtering.
  • Reduce the number/complexity of objects. Does that help? If so, you're geometry limited, and need to use fewer vertices or have someone write a more efficient vertex submitter (Kineme3D is somewhat efficient, but there are lots of things we can't do because people insist on goofy stuff... newer versions should slowly address these, and performance will then improve).
  • Turn off lighting. Does that help? If so, it's a variation on geometry limited. Same rules apply.
  • Use smaller textures. Does that help? If so, you're VRAM memory limited (or possibly RAM->VRAM limited, if you're updating textures every frame). Use smaller textures.
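For the fill-rate case, here is a minimal fragment shader sketch (hypothetical, not from the composition; the uniform names are made up). Fill-rate cost scales with texture reads per pixel, so a 3-tap blur does a third of the work of a 9-tap one:

    // Minimal sketch, assuming the vertex shader passes texture
    // coordinates through gl_TexCoord[0]. Fewer texture2D() calls
    // per fragment = less fill-rate pressure.
    uniform sampler2D tex;     // hypothetical input image
    uniform float texelWidth;  // 1.0 / texture width, set by the host

    void main()
    {
        vec2 uv  = gl_TexCoord[0].st;
        vec2 off = vec2(texelWidth, 0.0);
        gl_FragColor = texture2D(tex, uv - off) * 0.25
                     + texture2D(tex, uv)       * 0.50
                     + texture2D(tex, uv + off) * 0.25;
    }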

With the mesh blender, it shouldn't be centering your objects -- make sure the models are correctly positioned (with a kineme3d object loader). Make sure the Center input isn't enabled on the loader, and if you continue to have problems, please submit the set of files to us so we can perform some testing to find out what the problem is.

Remember that Quartz Composer isn't a magical limitless processing engine. There are very real physical hardware limits of the machines it's running on, and QC allows you to hit those limits extremely easily without warning. There isn't much that can be done to smooth that out, other than upgrading to newer, faster hardware. Contrary to what some would like to believe, the Really Cool effects you see in non-QC demos have been hand-tuned by people who know intimately what the hardware is and isn't capable of, and they employ lots of tricks to bend the limits a bit. There isn't a generic "Make all my effects Cool, Easy, and Fast" route to pull things off. Usually, (in my experience), you have to choose 2 of the 3...

gtoledo3's picture
A few thoughts...

A few thoughts...

I think at one point I was in the same mindset. Then, as I was learning Blender (around the time of the Kineme3D alpha 9 version), I had a big revelation, which is: 3 fps in real time, for something you would usually have to offline-render just to see any kind of moving visual reference, is actually amazing when everything is said and done. It is an unreal expectation for something to look like a rendered movie done in Maya, RenderMan, or Blender, and also be real time (I believe).

What you have done looks really nice.

Bend, twist and gravity warp can be used to insinuate/achieve motion sometimes, very efficiently. When you talk about wriggling fish... try the bend box with a little twister (see the sketch below).
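To sketch that idea in GLSL (a hand-rolled analogue of a bend deformer, not Kineme's code; 'time' and 'amount' are hypothetical uniforms you would wire to an LFO or audio level):

    // A traveling sine wave in a vertex shader reads as swimming.
    // Assumes the model's z axis runs nose-to-tail; weighting the
    // offset by z keeps the head still and lets the tail swing.
    uniform float time;
    uniform float amount;

    void main()
    {
        vec4 v = gl_Vertex;
        v.x += amount * v.z * sin(4.0 * v.z + time);
        gl_Position = gl_ModelViewProjectionMatrix * v;
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }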

Some more ideas-

-Consider prerendering extremely detailed backgrounds as movie files, and put them on sprites. Somewhere I have a clip on Vimeo that illustrates flying UFOs over the Apple rollercoaster film. The thing flies (60+ fps). Some may think that is "cheating", but it is a pretty stock trick in many other situations, so I don't view it that way.

-You can also pre-render, send backgrounds to sprites, and blur the sprites at various amounts to achieve more apparent depth (think old school animation.... Max Fleischer, Walt Disney, Tex Avery, Chuck Jones).

-Avoid expensive CI filters in realtime. Instead of running image -> CI Filter -> 3D Render, pre-render your input image through the CI filter, and use THAT image as your input.

-You can use folder image sequences/loading to effectively animate. If you have a body and a separate head object, you can load one image to the body. Then you can have half a dozen or so facial expression images, and load those from a given folder at a given speed or order. If you aren't familiar with loading images from a folder on the fly, let me know; I have a decent macro for that. I did a render on Vimeo with some dinosaur-looking MD2 I found on Polycount, where I load a bunch of different pop-art type images... I think I programmed the images to load with an exponential curve on that one, looking more for a "surreal" effect. I've done a bunch with facial expressions that I don't have any great online examples of.

-The theory of relativity, and artistic "scale". This may seem like a cop-out, but many times something that is sucking up fps like a mofo doesn't REALLY need to be running, as far as the artistic/emotional impact on the viewer is concerned. Embrace the limitations... it is a gruesome saying, but "there are many ways to skin a cat".

-Panoramic backgrounds. Unfortunately, unless you want to buy some equipment, you will end up having to use publicly available sources. I've checked into equipment for shooting panoramic movies to use as backgrounds; I am extremely intrigued by this, yet it isn't necessarily cheap. However, mapping a static panoramic image onto a front-culled sphere is another way of getting detailed backgrounds without killing your fps. You can achieve motion by shifting with a 3D transform, or by shifting coordinates on the sphere renderer.

-For getting more movement out of qtzs with audio, you can use Math / Math Expression patches to "amp up" the values that come out of the audio patch (see the sketch below). For audio in general, I actually tend more towards figuring out what second a given event occurs, and then just triggering it with LFO/interpolation/timelines/whatever. Also, try achieving your main movement with interpolation/LFO, and then connect your value-sorted amplitude or frequency to the "tension" control of the interpolation, or the phase or PWM ratio of the LFO. That can yield movements very similar to some audio-reactive stuff that I have seen in Processing... but can also look shaky/jerky. It depends on the audio, and the "pre-scaling" of the values through multiplication, addition, or whatever.
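As a concrete (hypothetical) version of that "amp up" step, the shaping boils down to a pre-scale, a curve, and a clamp; in GLSL it would look like this, and a Math Expression patch takes essentially the same one-liner:

    // Hypothetical shaping helper: pre-scale a raw audio level, bend it
    // with a power curve (expo > 1 tames noise, expo < 1 exaggerates
    // small peaks), and clamp to 0..1 before it drives tension/phase.
    float ampUp(float level, float gain, float expo)
    {
        return clamp(pow(level * gain, expo), 0.0, 1.0);
    }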

-Try working more with efficient image inputs, with cheaper images, instead of as many shaders.

-I also want to suggest looking up "Rinboku" on Vimeo. He has ways of using almost all stock QC stuff, very efficiently. Seeing what he has done with cutout 2D images, building bodies out of stuff like spheres and cubes, and building environments out of front-culled cubes... it is a big reminder of "cheap/old" styles of video-game-type animation (the stuff really reminds me of early arcade games sometimes). He has an extremely perceptive grasp of "rudiments".

-Last thought.... everything Chris said, ditto x 2.

psonice's picture
another perspective...

I can appreciate <10 fps being impressive when you come from a complex 3D package, but coming from the demoscene, where everything is expected to be both cool and fast, I have to say that 5-7 fps for that clip on a MacBook Pro is way too slow for the quality of the visuals. Ignoring the QC side of things, on that hardware, with that scene complexity and those effects, 60 fps should be pretty easily achievable.

It could be a number of things slowing it down: excessive poly counts (it doesn't look like it, but check that your models aren't in the hundreds of thousands or millions of polygons), or excessive shader use (again, it doesn't look like it, but check that you're using the fastest type of shaders, e.g. the cheap blur instead of Gaussian (or preferably the v002 plugin), and use GLSL where possible instead of CI). It could also be some limitation in QC, such as the iterator, in which case it's a bit harder to fix.

Don't take this as criticism btw, that's one of the most interesting and good looking QC compositions I've seen :)

cwright's picture
analysis

From when I briefly poked at the composition, a ridiculous amount of time was spent rendering the aquatic background sky sphere (it was a Render in Image of lots of polys, passed through several CI filters or something; it's been a month or so). Just disabling that made it ~6-8x faster in my testing (I was using it as a test case for some Kineme3D bugs in 1.0). (PerformanceTool says this is taking about 55-65% of the time.)

Using ParticleTools is another speed hit. It's snappier than the previous release, but it's still designed from too much of a comp-sci point of view (too much object abstraction) to keep things snappy (~30-40% of the CPU time spent in ParticleTools goes to iterating over particle structures, doing branches and cache misses). This isn't a huge hit, but it adds up.

Some of the models are embarrassingly complex (several thousand polys), and deforming complex models with Kineme3D is a death sentence (all deformation takes place in system RAM, and then has to get re-submitted to the GPU every frame = sucks).

Relatedly, Kineme3D Object Blend Render is asking to get your face blown off -- it has to march over both sets of vertices, do some stupid math, and then submit the results. Every frame. (PerformanceTool says this is burning another 27-33%.) Another reason to discourage blending. It can be done in GLSL, but it currently isn't, because QC's shader model is quite limited, and hacking around it is a lot of code that I don't want to write/debug just yet.
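For reference, the GLSL version of that blend is tiny; the hard part is exactly the limitation mentioned above, since QC's shader model gives no clean way to feed the second pose in as a custom vertex attribute. A sketch of the general technique ('poseB' and 'blend' are made-up names, not Kineme3D's):

    // Per-vertex pose blending on the GPU: the second mesh's vertices
    // arrive as an attribute, so the CPU never marches over them.
    attribute vec3 poseB;   // matching vertex from the second pose
    uniform float blend;    // 0.0 = first pose, 1.0 = second pose

    void main()
    {
        vec3 p = mix(gl_Vertex.xyz, poseB, blend);
        gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.0);
    }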

Otherwise, there were a zillion Spooky patches (Spookies require an NSDictionary lookup per value, per frame. NSDictionary can only handle ~10k-30k lookups per second, which is like 500-1500 per frame. Doing 15 Spookies? That could be a 3% hit right there.)

Then there are a bunch of little things (blur filters, billboards; not much, but they add up to 5-15%). These are normal, but they have to be closely watched and kept under control.

Yes, this composition should be possible in real time, on a macbook pro (almost on a macbook). However, implementation (in terms of both composition design and our plugin design) precludes that from happening.

[P.S. I second the respect -- it's the most complex one I've seen, and it's hard to appreciate how much went into it without seeing it in person]

gtoledo3's picture
I just looked at this

I just looked at this again... was this the original render? Anyway, the particle cluster looks extremely similar to audio-reactive clusters that I have worked with and posted in some forms (it ain't rocket science, so don't think I'm coming at you like "it's my particles", lol... similar implementations are all over the place). Point being, if you achieved it in similar ways, then that alone is going to sap a lot of your speed potential, especially with sharp volume peaks, since it seems you have two of them, as well as Kineme fish particles, and perhaps even the bubbles are particles as well? If your particle size is programmed to get much bigger with volume peaks... QC doesn't respond well to that with the regular particle system.

If there are as many particles happening as I suspect, try minimizing the emitter amount (with Kineme particles); with the standard Apple particle system, minimize the particle count and modify the math that feeds the maximum particle size so it doesn't make as big a "leap" in size within such a small time frame.

I would be curious what frame rate you would get with all particles turned off.

It looks as though what you have done is to set up a large area with many things "happening", and then you are just panning around using 3D transforms, field of view, GL Ortho, I don't know... but in essence, all of that "offscreen stuff" is running even when it is offscreen.

You may try programming patches to turn off when they aren't onscreen. It also looks like this is all running through a zoom blur, which is nice for realism, not great for frame rate.

Again, snazzy... and nice choice of music!!!

EDIT: Just saw Chris's much more informed posting above... I think there is a really good argument for prerendering the sky sphere if possible, and just using the movie player to replay it in the background. It's not EVIL or WRONG... musicians have been overdubbing for years :o)

psonice's picture
Whoa, that was a bit self-critical

Whoa, that was a bit self-critical!

I'm sure that there's room for performance improvements in the Kineme plugins (when you start saying there isn't, it's time to go back to school...), but I'd say they're pretty fast.

I've not used Kineme3D yet, but ParticleTools was impressively fast, and I've had zero performance issues with the others too :)

cwright's picture
comme ci, comme ça

It's perhaps critical, but accurate at times.

There are parts of Kineme3D that I'm not particularly proud of (though overall, for a first-run 3D app from a coder who had never written anything more than Hello World in OpenGL, I consider it pretty OK). Many of them have to do with design choices early on that limit how far it can be taken now that behaviour has been established (everything does lots of copying, which is slow and expensive, and submitting data to OpenGL requires some massaging that causes a ~20% performance hit, and kills the cache, to boot).

ParticleTools is terrible from a performance point of view (smokris and I have both spent hours looking at Shark profiles, shaking our heads and sighing). There are some Really Cool spatial partitioning tricks in there to speed up some aspects, but it struggles with fewer than 30k particles. Real particle systems can push 100k, and GPU-based ones (while horrifically limited and complex) can push over a million, in real time.
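For a flavor of why GPU-based systems scale so much further: if each particle's motion is pure math on a seed and the clock, the CPU uploads one static buffer and never iterates at all. A toy vertex shader sketch (nothing like ParticleTools' internals; 'time' is an assumed uniform):

    // Toy GPU particles: per-particle seeds live in a static vertex
    // buffer (abusing gl_Vertex here), and each frame's position is
    // computed on the GPU, so a million particles cost the CPU nothing.
    uniform float time;

    void main()
    {
        vec3 seed  = gl_Vertex.xyz;               // uploaded once
        float rise = fract(time * 0.1 + seed.y);  // per-particle phase
        vec3 pos   = vec3(seed.x, rise * 2.0 - 1.0, seed.z);
        gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
    }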

Most of our plugins are actually pretty well tuned (I have a personal obsession with optimization, so I do little tweaks here and there in my free time because I'm weird like that), I agree. It's just where it counts that sometimes things feel like we failed to deliver :(

(I'm not moping, just pointing out some of our bigger places for improvement that I stare at currently).

franz's picture
little clarification

Just to be sure: you said "all deformation takes place in system ram, and then has to get re-submitted to the gpu every frame = sucks". Does this mean that when NO inputs of the deformer change, the deformer is still evaluated every frame? Let's say I have a plane, ripple it once, then have it static (= frozen ripple); is the ripple still calculated every frame?

cwright's picture
correction

It only does that when the inputs change. If the model and the deformation values don't change, then it only processes once (so your static deformation will still be quite fast).

I was referring to how deformation was applied generally -- deformation every frame for animation-like qualities.

Thanks for pointing that out, franz!