Skeletal animation with Inverse Kinematics

Including some level of support for joints, bones, and the automatic solving of angles using inverse kinematics (IK) would vastly increase the animation possibilities of Kineme3D.

There are probably numerous ways of going about this, or at least of exposing it to the user within Quartz Composer, so I guess we need a discussion about how people think this might work.

SteveElbows's picture
OK so for a start I would

OK so for a start I would love to know who else is interested in this stuff. Do you currently use bones, joints & IK within any 3D modelling software?

My understanding of this stuff is quite simplistic really - I've only dabbled - but the potential seems large. I use Cheetah3D for 3D modelling. Here is a video that introduces various animation methods within Cheetah3D, covering the simple creation of bones, joints and IK, so it might be useful to watch for anybody unfamiliar with this stuff:

http://cheetah3d.com/download/Cheetah3D4.0_anim_tut.mov

Some questions that spring to mind about how best to handle this stuff in QC:

Are we bringing in the skeleton, bones, joints & even poses from the 3D model file, or will there be the ability to create bones from scratch within QC, or both?

Likewise there are questions about how the bones are weighted to the model mesh, and whether we can manipulate that in QC or just rely on people doing it in their 3D modeller.

I tend to assume that the joint angles and positions should be exposed to the user as QC variables, and IK would be handled by a separate patch that can be wired up to control the joint variables if people want to use IK to solve stuff?

Obviously right now QC & Kineme don't deal with any of this stuff. So for animation I would do all of this in a 3D modeller, export each pose as a separate model file (or separate objects within one model file), and then animate between them using the techniques demonstrated in the Kineme3D examples. It works well, although for blending I have to make sure each pose has the same number of vertices. But if this stuff were ever manipulable in QC, it would open up a lot of possibilities - anybody else interested?

cwright's picture
count me in

(sort of tainting the stats here, but I'm very interested in this too, for a few reasons).

I think editing skeletons/weights in QC would be far too tedious/clunky. Perhaps there's a clean, clever way to do this which would change my mind, but as it stands, that's too "deep" for QC. So for that, I'd expect people to do bones/weights/poses in their modeler, and we'd just read that data and apply it.

I don't know if I'm a fan of using noodle inputs for skeletons, simply because different meshes will have different numbers of bones (and different constraints, once IK is factored in). If you changed models, it would disconnect all your noodles, making the composition useless outside of the editor. Structures are probably the way to go for this (i.e. an object would export a structure with poses/bone angles, and you could tweak those individually and apply the changes to the mesh using the modified structure). I'm not sure if that's still too clunky though (structures are cumbersome, but tools can be made to operate on them as necessary).

I guess that's about all I've got/thought about thus far... I'm hoping someone well-versed in this stuff can jump in and make me look like an idiot :)

SteveElbows's picture
I see what you mean. I

I see what you mean.

I suppose with the right structure tools, I could still effectively manipulate individual joint parameters using noodles within the QC editor?

What's the structure of this data like in the model files -- have you looked at that stuff at all? Any idea if there are rules about the order in which bones & joints are described, which would make it possible to swap models and get fairly consistent results? E.g. can we refer to joints by name?

cwright's picture
chaos

It depends on the format -- we'd be using FBX, so most of it should be sane.

Bones are sometimes named, but not always.

Model swapping is not a good idea. Blending also relies on it, and it's become a huge problem recently: some models have the same number of vertices, but they're in very different orders, which makes blends look wrong and sometimes crash. There's not really a good solution for this, which is why I want to throw blending out (except for MD2, which is designed for this kind of use) and use skeletal animation only in the future.

The structure is something like this:

You have a root bone, and all other bones are children of it or of other child bones (so it's a hierarchy, perfectly suited to structures in QC). Bones have positions relative to their parents' positions.

Vertices are associated with a set number of bones (typically no more than 4), with each association carrying a specified weight (the weights, in total, sum to 1.0). These weights are used to determine how much each bone affects the specified vertex (to smooth out joints, where things would otherwise tear/distort unnaturally).
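In code, that weighting scheme amounts to linear blend skinning. A minimal sketch (illustrative types, not Kineme3D's actual internals):

    struct Vec3 { float x, y, z; };

    // Up to 4 bone influences per vertex; the weights sum to 1.0.
    struct SkinWeights {
        int   bone[4];
        float weight[4];
    };

    // boneTransform stands in for "apply that bone's current matrix".
    // Each influencing bone deforms the vertex; the results are mixed
    // by weight, which is what smooths out the joints.
    Vec3 skinVertex(const Vec3 &v, const SkinWeights &w,
                    Vec3 (*boneTransform)(int bone, const Vec3 &v)) {
        Vec3 out = {0.0f, 0.0f, 0.0f};
        for (int i = 0; i < 4; ++i) {
            Vec3 t = boneTransform(w.bone[i], v);
            out.x += w.weight[i] * t.x;
            out.y += w.weight[i] * t.y;
            out.z += w.weight[i] * t.z;
        }
        return out;
    }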

We also have "poses", which define positions for all bones. FBX provides "Takes", which define pre-defined animations (pre-defined bone movements, for certain sequences).

Fancy IK systems will associate degrees of freedom with a joint (some are 1 angle only, like elbows/knees; others are free-form, like shoulders/hips), limit angles (knees don't bend backwards, fingers don't bend backwards), and rest angles (fingers rest slightly curled, so IK solvers try to get all angles as close to their rest pose as possible while accomplishing the goal position) -- I don't think this information is provided by FBX...
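To make that concrete, here's a hypothetical per-joint constraint record an IK solver might carry (since FBX probably doesn't supply this, it would have to come from somewhere else):

    // Hypothetical per-joint constraint record for an IK solver.
    struct JointLimits {
        int   dof;          // 1 for a hinge (elbow/knee), up to 3 for a ball joint
        float minAngle[3];  // lower limit per axis, radians (knees don't bend back)
        float maxAngle[3];  // upper limit per axis
        float restAngle[3]; // pose the solver biases toward (fingers slightly curled)
    };

    // Clamp a proposed joint angle into its legal range.
    float clampAngle(const JointLimits &j, int axis, float angle) {
        if (angle < j.minAngle[axis]) return j.minAngle[axis];
        if (angle > j.maxAngle[axis]) return j.maxAngle[axis];
        return angle;
    }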

SteveElbows's picture
Cheers for the info &

Cheers for the info & enthusiasm :)

So an initial simple version of this functionality, to replace blending, would be to just support poses within models and offer a few parameters for triggering them? I'll have to check whether my modeller supports takes. IK within Kineme would be most excellent, but basic skeletal support would be a great start.

Is support for mesh morph targets something that should be looked at when planning to get rid of the current blend stuff?

I must admit I'm sorry to hear that the blend stuff is proving such a pain with non-MD2 models. I was planning to use it heavily, so if it is removed in future I need to adjust my plans straight away. Any chance of leaving it intact but with a health warning?

cwright's picture
blendeprecation

If we dispose of blending, we'll leave it in, but with health warnings and lots more sanity checks (to prevent crashes). It's those sanity checks which harm performance and make things that look like they should work, not work (which is frustrating to users, who then vent at us, who then have to explain that what they want isn't possible, etc. etc.).

Morph targets (sincere apologies ahead) are a terrible idea -- there's a ton of user intervention required to make morphing useful, and that's not really a QC-appropriate requirement. For example, let's say you have a head mesh, mouth closed, and a head mesh, mouth open. Logically, blending between the two would need to match the lip regions, inner mouth regions, etc., and blend nicely. But the computer has no idea what any of that data means, so it would simply have to find the nearest points and blend from there (causing the mouth to emerge from the nose, or the chin, most likely -- not at all useful, though perhaps artistic). In technical parlance, the meshes need to have identical "mesh topologies", which takes us back to the "same vertex count, same texture count, same normal count, same index order" constraints that mesh blending requires -- or there needs to be metadata associated with both meshes to associate points, and no formats support that information (it would require the user to generate it manually).

If blending works for you, by all means, continue to use it :) It won't magically disappear, but it won't be an emphasized feature, and will likely end up less optimized/more guarded to prevent crashes/problems.

SteveElbows's picture
Ta for the info. I dont

Ta for the info.

I don't actually know much about morph targets; I just noticed the bicep bulge stuff at the end of the Cheetah3D animation video, and a can of worms opened up in my mind.

Taking into account what you have said about morph targets, if I want to think about 3D face animation in QC, am I better off considering the creation and clever manipulation of meshes from scratch within QC itself as the solution, rather than importing models?

Oh, that's made me think of something else. Imagine I want to create a character whose body & head are a single model with various poses. If I want that model to wear a hat, or wave a sausage around, or have a face made of a GLSL grid and some primitives, I need a way to keep track of the location and angle of the different body parts, so that these other QC objects can move accordingly. This has similar issues to the manipulation of joints: a variable quantity of values that could cause very messy noodles. Any ideas?

cwright's picture
positional

Morph targets are essentially what blending is today -- however, the tools used to do morph targets are very careful to preserve certain things: vertex count, vertex ordering, etc. Most modellers (wings3d, maya, blender, from my experience) don't do this -- if you make a sphere, save it, then deform the sphere to make a bulge on one side, the vertex order changes! So, if you blend between the two, part of the sphere acts as expected, but the bulged faces all rotate, since their order changed (0,1,2 to 1,2,0: same face, but the index order is off by one).

Now also imagine that the face order changes: during a blend (morph), you'll get polys flying like crazy to fill their new positions from their old positions -- chaos between the two otherwise nice-looking frames.
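To be explicit about the ordering assumption, here's what a naive blend amounts to (a sketch: it pairs vertices purely by array index, so reordered meshes pair unrelated points):

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Blend factor t in [0,1]. If b's vertices are the "same" points in a
    // different order, each lerp pairs unrelated vertices and the faces shear.
    std::vector<Vec3> blendMeshes(const std::vector<Vec3> &a,
                                  const std::vector<Vec3> &b, float t) {
        std::vector<Vec3> out(a.size());   // assumes a.size() == b.size()
        for (std::size_t i = 0; i < a.size(); ++i) {
            out[i].x = a[i].x + t * (b[i].x - a[i].x);
            out[i].y = a[i].y + t * (b[i].y - a[i].y);
            out[i].z = a[i].z + t * (b[i].z - a[i].z);
        }
        return out;
    }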

Further complicating it is FBX's auto-triangulation feature (which is slightly buggy, but no one using Kineme3D has been bitten by that yet, thankfully) -- it's not going to triangulate non-triangle meshes the same way when they're deformed, causing variations of the above problem.

Skeletal animation addresses all of this by only ever deforming one mesh (no order-preservation stuff needs to happen, ever). This would be ideal for facial animation as well as body animation.

Also, as a benefit, we get the orientation of all child bones, so positioning objects to integrate with the skeleton would simply mean getting the child bone's orientation and adjusting accordingly.
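As a sketch of that (illustrative types, not actual Kineme3D internals): a bone's world transform is just its local transform accumulated up through its parents, and that's what an attached hat or sausage would read:

    struct Mat4 { double m[16]; };            // minimal column-major 4x4 matrix

    Mat4 mul(const Mat4 &a, const Mat4 &b) {  // r = a * b
        Mat4 r;
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row) {
                double s = 0.0;
                for (int k = 0; k < 4; ++k)
                    s += a.m[k*4 + row] * b.m[c*4 + k];
                r.m[c*4 + row] = s;
            }
        return r;
    }

    struct Bone {
        const Bone *parent;  // 0 for the root bone
        Mat4 local;          // position/orientation relative to the parent
    };

    Mat4 worldTransform(const Bone *b) {
        Mat4 w = b->local;
        for (const Bone *p = b->parent; p; p = p->parent)
            w = mul(p->local, w);  // prepend each ancestor's transform
        return w;
    }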

It's really brilliant, honestly... too bad it's all theory, and not at all practical yet :(

For current Kineme3D, I guess clever deforming is safest, but if you happen to get multiple models to work, by all means use that (it's much less effort, and much higher quality).

SteveElbows's picture
Aha Id never really

Aha, I'd never really considered using bones for facial animation. Looking around the net I see a little discussion of this as a better way forward, especially for realtime stuff; I just fear many of today's 3D tools use morph targets. So I was hoping that morph targets / blend shapes were stored in a more sophisticated way in FBX files, which could eliminate some of the problems you mentioned -- e.g. if each target was data about how specific points on the mesh were deformed, rather than just another copy of the mesh in its deformed state.

It seems Cheetah3D can save skeleton info, takes and morph targets in FBX, although not poses for some reason. I will email you an example FBX just in case it is of some use.

Cheers

dust's picture
blend shapes versus bones

I have always used a bone structure in facial animation. If you just use morph targets or blend shapes you don't get any head movement articulation, plus you need something to parent your inside stuff, like the tongue and mouth, to as well. The tongue is usually a set of 4 bones and an IK spline. I have seen some people rig bones for the lips and stuff, but that's overkill: just a joint at the top of the head, the top of the neck and the bottom of the neck, plus a joint parented to the top of the neck for the bottom lip -- that's your jaw. I mean, if you're making a human. If you think about it, our top jaw doesn't move but our lips do, so use blend shapes for mouth and eye expressions. You could look at Apple's phoneme opcode index if you want to do real lip syncing -- you would need a shape for each phoneme -- but I found that just an OOO and an MMM blend shape can do all the other phonemes; you just have to use a combination of both OOO and MMM at different percentages of the blend, as well as animating the jaw IK.
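In code terms, that OOO/MMM mixing is just adding two weighted offsets from the rest mesh (a sketch; names and types are only illustrative):

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // rest: the neutral head; ooo/mmm: the two phoneme shapes.
    // wOOO and wMMM are the independent blend percentages, in [0,1].
    std::vector<Vec3> mixShapes(const std::vector<Vec3> &rest,
                                const std::vector<Vec3> &ooo,
                                const std::vector<Vec3> &mmm,
                                float wOOO, float wMMM) {
        std::vector<Vec3> out(rest.size());
        for (std::size_t i = 0; i < rest.size(); ++i) {
            out[i].x = rest[i].x + wOOO*(ooo[i].x - rest[i].x) + wMMM*(mmm[i].x - rest[i].x);
            out[i].y = rest[i].y + wOOO*(ooo[i].y - rest[i].y) + wMMM*(mmm[i].y - rest[i].y);
            out[i].z = rest[i].z + wOOO*(ooo[i].z - rest[i].z) + wMMM*(mmm[i].z - rest[i].z);
        }
        return out;
    }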

SteveElbows's picture
Thanks thats very

Thanks, that's very interesting - I think my problem is that I've just used some fairly basic standalone human character face & body animation software so far (e.g. Poser), so I've never tried rigging up a face for myself from scratch - plus the opposite extreme, where I've made ultra-crude mouth movement using separate 3D objects in QC, manipulated with 3D transforms. I will need to get into proper modelling apps more in order to be of much more use in this conversation - thanks for the tips :)

franz's picture
jumpin' in

Hi, sorry for jumpin' into the thread, but: bulging a sphere in 3ds Max DOES keep the same vertex number, and DOES keep indices untouched. I assure you, you don't have polys flying around during interpolation.

Actually, I'm using KnM 3D Blend a lot, to be honest. Keeping the same number of vertices is quite easy, provided you know a bit about your modelling package. In fact, morphing is GOOD for:
- simple face animation (like "mood-changing", but not like "talking" -- which is okay for me so far)
- building transformations (but much more complicated, texture-wise)

I've been successfully using blending for both of these. Please keep it in the next releases...

If you still don't trust me, I can pass around some QTZs demonstrating this working functionality. I know I'm almost the only one who requested this blending operator (and the 8-way GLSL blender, which is even better), and obviously I caused a lot of trouble, since it works only with meshes having the exact same number of vertices and indices - and that's not written in the manual.

cwright's picture
yay!

I'm glad you mentioned a tool that doesn't break ordering -- Thanks for the tip :)

You're right, morphing/blending is handy; it just requires great care that most people don't seem to put in (which is somewhat expected -- I don't think the docs mention the rules very well, if at all). I've just been getting flooded with people who find weird tricks (same number of vertices, but the orders are wrong, or some other subtle difference) that crash Kineme3D, and then they complain, and it's annoying to try and explain it. And then people keep wanting to morph meshes with differing numbers of faces/verts, and that's even more impossible.

I'm not removing it, ever. I just don't know how much more attention I'm going to pay to it / how much longer I can deal with "But, but, but, I want to blend an 8-vert cube into a 512-vert sphere, and that should be easy/possible!" :)

If you'd like to post some good blend examples, I'm sure people would love to see them :) Not because I don't trust you, but because I like seeing what people have accomplished thus far :)

SteveElbows's picture
Poses always all bones?

I noticed that you said poses define positions for all bones. In Cheetah3D it seems like poses can affect just some bones; e.g. I could set a pose that affects the whole body, and then a pose that just changes the hand, leaving the rest of the previous body pose as it was. I was wondering if that is true of the FBX format too, and if so whether you'd try to support it, and whether it has any implications for how the user will select poses in QC.

cwright's picture
Not sure

I'm not sure, to be honest -- I've only very briefly dabbled with takes/poses. The viewer program I have (the FBX SDK sample) doesn't correctly handle poses for some models (it doesn't populate lists, so nothing happens when clicked), so I can't tell. You're probably right, and some poses deform a subset of points from the reference (bind) pose. I'll try to work it out...

(Feeling like hell at the moment... flu or something?)

SteveElbows's picture
How do takes store bone animation?

I was just wondering how takes store the animation. Are they 'keyframe every frame' based, or do you interpolate between values over time? I was wondering about manipulating a take's animation duration within QC.

cwright's picture
curves

(::Wild Speculation::) Inside the FBX SDK, it seems to store points in time as "KFbxCurves" or something like that -- these have a start and stop point (in time), and output a 1, 2, 3, or 4 dimensional point at a given time. I think takes store curves, which you then evaluate to get motion/animation.

At least, that's how the old SDK seemed to work -- the newer one looks similar, but I've really not touched that aspect (just updating all the mesh loading was enough of a problem to get 1.0 out the door :)
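Conceptually (this mimics the idea, not the actual FBX SDK API), evaluating such a curve is just interpolating between the keys that bracket the requested time:

    #include <cstddef>
    #include <vector>

    struct Key { double time, value; };  // one keyframe on one channel

    // Linearly interpolate between the two keys bracketing time t.
    double evaluateCurve(const std::vector<Key> &curve, double t) {
        if (curve.empty()) return 0.0;
        if (t <= curve.front().time) return curve.front().value;
        if (t >= curve.back().time)  return curve.back().value;
        for (std::size_t i = 1; i < curve.size(); ++i) {
            if (t <= curve[i].time) {
                const Key &a = curve[i-1], &b = curve[i];
                double f = (t - a.time) / (b.time - a.time);
                return a.value + f * (b.value - a.value);
            }
        }
        return curve.back().value;
    }

If it works that way, manipulating a take's duration in QC would presumably just mean scaling the time you evaluate at.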

dust's picture
ik-fk

Given the plethora of 3D applications' IK and FK skeleton controls, it would be very difficult -- maybe it could be done with Blender, seeing as it is open source. I don't know if you have tried getting your IK info from Maya to Max?

However, I'm sure some kind of IK solver plugin could be made using gradient descent -- or stochastic gradient descent, which would be faster. "Stochastic" roughly means "wandering" (it's Greek); the descent step is calculated from one training example, or every nth training example, which makes it the best solution for a really large data set. The iterations of a plain gradient descent can take a long time to converge, depending on the amount of training data.
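As a very rough sketch of the idea (purely illustrative, not any existing plugin): drive the two joint angles of a planar arm downhill on the squared distance between the end effector and the goal, using numeric gradients:

    #include <cmath>

    struct Arm { double a1, a2; };        // joint angles, in radians
    const double L1 = 1.0, L2 = 1.0;      // assumed bone lengths

    // Forward kinematics: where the fingertip ends up for given angles.
    void fk(const Arm &arm, double &x, double &y) {
        x = L1*std::cos(arm.a1) + L2*std::cos(arm.a1 + arm.a2);
        y = L1*std::sin(arm.a1) + L2*std::sin(arm.a1 + arm.a2);
    }

    // Squared distance from the fingertip to the goal.
    double cost(const Arm &arm, double gx, double gy) {
        double x, y; fk(arm, x, y);
        return (x-gx)*(x-gx) + (y-gy)*(y-gy);
    }

    // One descent step, with central-difference gradients.
    void step(Arm &arm, double gx, double gy,
              double h = 1e-4, double rate = 0.1) {
        Arm p = arm;
        p.a1 = arm.a1 + h; double d1 = cost(p, gx, gy);
        p.a1 = arm.a1 - h; d1 = (d1 - cost(p, gx, gy)) / (2.0*h);
        p = arm;
        p.a2 = arm.a2 + h; double d2 = cost(p, gx, gy);
        p.a2 = arm.a2 - h; d2 = (d2 - cost(p, gx, gy)) / (2.0*h);
        arm.a1 -= rate * d1;   // walk both angles downhill;
        arm.a2 -= rate * d2;   // iterate until cost() is small enough
    }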

Inverse kinematics can be treated as machine learning, and it's a very difficult, if not impossible, problem with respect to replicating natural human kinematics. There are many, many things to be considered in a calculation like this.

I'm all for turning Quartz into a full-blown realtime 3D animation application, don't get me wrong.

Don't worry -- you can bake IK into a model's export now and play it back in Kineme3D in real time; that alone is worth the beta tax on the plugin.

On a different note, I think rigid body dynamics would be cool; I was thinking about that last night. Soft body deformations seem to be included in the plugin, but something simple like gravity would be cool too.

Like doing a domino effect: you knock your model over onto another model, which gets knocked over, etc... Then having control of things like mass and inertia or whatever would be good.

I'm all for dynamic or machine learning algorithms in 3D in real time. I say yes, yes, yes.

cwright's picture
iterative IK

There are lots of IK solvers and algorithms, the particular one isn't important now. The important part of this discussion is: "How do we specify this information" in QC? If I have an arm model, and I want a finger to touch a point, how do I set that up? How should the interface be defined to accomplish that in QC? Once that's figured out, then we can explore various algorithms to approximate it.

How would dynamics be specified in QC?

(I'm trying to figure out what people think would be a powerful, usable interface for all this stuff, so future versions of Kineme3D can incorporate them -- the magic behind the scenes is much less important than the usability by the users.)

gtoledo3's picture
I personally like the idea

I personally like the idea of forgetting about all of the joints/skeletons, etc, and having a really super gravity warp/sculpt tool where I can click on the point that the object should bend at, and then grab the other part with my mouse, wave it back and forth and "record that".

So, if you had an arm, you could click on the middle knuckle of the index finger and that is your "bend" point... then you grab either side of the set bend point, wave it however you want, and "record" that movement to a plist, timeline, or whatever.

If I was to set a "bend" point at the knuckle, and drag with the mouse on either side, I would get movement. If I was to set a bend point between the knuckle and the finger tip, I would get something similar to when you pull on a rubberband... the points serve as "anchors".

This also extrapolates to the idea of being able to use OpenCV to set similar points, and then pull the structure out later, to use with an actual 3D object.

HOWEVER, that's just me, and I tend to be a little quirky in how I think about accomplishing some of this kind of stuff.... so I defer to the experts!

SteveElbows's picture
You have some interesting

You have some interesting ideas there, but I think they are best thought of as something you might be able to achieve by building on top of a joint manipulation feature. The use of the mouse to set parameters and to select & manipulate things in 3D space, and the ability to record parameter changes over time, are presently more suited to 3D modelling software than Quartz Composer, I guess. There could also be issues with the model bending in unintuitive ways if you don't have some underlying bones whose influence on specific parts of the model has been set up properly in advance in the modelling program.

If you were prepared to do some of the initial setup in a 3D modelling package, i.e. bones & joints, it's possible you could get close to what you seek by wiring up a clever QC patch that would allow you to use the mouse to select bones & manipulate joints. I'd love some sort of recording feature in QC; that's a topic that I think came up in relation to Quartz Crystal, so I'll probably renew that conversation in the relevant thread.

cwright's picture
selection

Mouse selection in 3-space (i.e., picking faces/vertices/objects, etc.) requires the use of a Select Buffer (OpenGL), or a ridiculous amount of math (to the tune of "rewrite OpenGL's rasterization, and pray that they haven't used GLSL to displace vertices, or else we're going to have to roll our own GLSL engine as well") -- and QC doesn't support Select Buffers, so you know which option we'd need to choose.
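Even the cheap end of the math route -- just turning a mouse position into a world-space ray, before testing a single face -- looks like this (a sketch using gluUnProject; per-face testing and GLSL displacement are where it really blows up):

    #include <OpenGL/glu.h>   // Mac OS X header path
    #include <cmath>

    // Unproject the mouse point at the near and far planes, then normalize
    // the difference to get a world-space pick ray.
    void pickRay(double winX, double winY,
                 const GLdouble model[16], const GLdouble proj[16],
                 const GLint view[4], double origin[3], double dir[3]) {
        double fx, fy, fz;
        gluUnProject(winX, winY, 0.0, model, proj, view,
                     &origin[0], &origin[1], &origin[2]);             // near plane
        gluUnProject(winX, winY, 1.0, model, proj, view, &fx, &fy, &fz); // far plane
        double dx = fx - origin[0], dy = fy - origin[1], dz = fz - origin[2];
        double len = std::sqrt(dx*dx + dy*dy + dz*dz);
        dir[0] = dx/len; dir[1] = dy/len; dir[2] = dz/len;
    }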

So, in short, that's not happening. :) sorry :/

I'm with elbows -- this sort of stuff is much more useful in 3D apps that are designed for this sort of thing, or procedural generation (automatic via math formulas, no user required).

Slowly starting to work on the value recorder... PerformanceTool has been my baby lately though :)

dust's picture
ik select handle

I'm thinking that using some sort of namespace or index for IK handles would be the best way to go. I guess for a simple arm the index or namespace would not matter -- there would be just one handle, unless you rig all the fingers. Regardless of how complex the rigging is, if you have a patch that takes an IK index or name along with an xyz position, you could build some sort of keyed structure and attach however many IK handle patches are needed to the structure keys? I don't think IK handle rotation is a good idea, but maybe the option would be cool for experimental results.
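Something like this, maybe (purely illustrative): goal positions keyed by handle name, which would map naturally onto a QC structure:

    #include <map>
    #include <string>

    struct Vec3 { float x, y, z; };

    // "leftHand" -> goal position, etc.; a QC structure could carry the same keys.
    typedef std::map<std::string, Vec3> IKGoals;

    int main() {
        IKGoals goals;
        Vec3 hand = {0.4f, 1.2f, 0.1f};
        Vec3 foot = {0.1f, 0.0f, 0.3f};
        goals["leftHand"]  = hand;   // a solver patch iterates these keys
        goals["rightFoot"] = foot;   // more complex rigs just add entries
        return 0;
    }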

mfreakz's picture
Sequencer Patch: the return...

I think it's time to talk again about the Sequencer patch... http://kineme.net/FeatureRequests/SequencerPatchUpdatedTimelinePatch This patch could be useful for many projects, and seems useful for 3D animation too, no? If we find a way to animate 3D models, it would be great to create a multipurpose tool for recording/playing everything! SEQUENCER PATCH!

SteveElbows's picture
Re: iterative IK

Returning to this subject after it came up on another thread: even if this skeletal stuff never happens, I will start talking about it again anyway...

Regarding the lack of feedback about how this stuff should be exposed in the editor, perhaps this is one of those occasions where people can't imagine how this stuff would best work until they have seen it working in some form, even if it turns out to be completely the wrong way!

Alternatively, dodge the issue by first supporting aspects of skeletal animation other than IK-based direct manipulation, e.g. the ability to trigger preset poses/takes in the model file, and thus not expose much. Limited for sure, but perhaps a sane half-way house.

Let me present a slightly different scenario. This website has thousands of bvh animations:

http://sites.google.com/a/cgspeed.com/cgspeed/motion-capture

As far as I know the bone names & hierarchy are the same in all of these files, and it sure would be swell to have a way to load more than one of these up in QC and then wire the BVH loader to model(s). I can see that achieving this without a well-thought-out flow & editor UI might be silly in various ways, but even with severe limitations and a QC-paradigm-busting approach to how it works, it would still unleash some interesting possibilities. I know that in some ways it would be more practical for users to combine the BVH with the model in 3D software and then just export the resulting FBX to QC, but it's often far more painful than it should be to do this, especially if you have lots of BVH files you want to apply to the same model, and there are some merits to having the model and the animation in separate nodes within QC.

gtoledo3's picture
Re: iterative IK

(forgetting about all of the sequencer/timeline style editing I always talk about)...

How about having a "motion file input" on renderers (like the object input port), in conjunction with "motion loaders", with the idea that a motion file loader would give kind of the same end result as having interpolates or value historians hooked to a translate or rotation, except that it controls the file "in total". Maybe the motion file input could have single-shot and loop modes, etc., kind of like an interpolate, with external time control as well? Would it make more sense to have multiple loaders and multiplex, or a directory scan to call up different BVH files?

If one had a 3D figure powered by a BVH, would translates and rotations still be exposed on the renderer? If so, would values sent to those ports be relative to the motion file? (For example, if a motion file puts a character "jumping up in the air" to Y=+.2, and you have an interpolate connected to the y-translate, is the value whatever the interpolate is currently, plus the y=+.2?)

gtoledo3's picture
Franz, I would LOVE to see

Franz, I would LOVE to see an example of the mesh...