Question

I have a facial animation rig which I am driving in two different ways: it has an artist UI in the Maya viewports, as is common for interactive animation, and I've also connected it to the FaceShift markerless motion capture system.

I envision a workflow where a performance is captured, imported into Maya, sample data is smoothed and reduced, and then an animator takes over for finishing.

Our face rig has the eye gaze controlled by a mini-hierarchy of three objects (global lookAtTarget and a left and right eye offset).

Because the eye gazes are controlled by this LookAt setup, they need to be disabled when motion capture data that includes eye gaze is imported.

After the motion capture data is imported, the eyes are driven by the captured rotations instead.

I am seeking a short MEL routine that does the following: march through the motion capture eye rotation samples, back-calculate and set each eye's LookAt target position, and average the two to get the global LookAt target's position.

After that MEL routine is run, I can turn the eyes' LookAt constraints back on, eye gaze control returns to the rig, nothing has changed visually, and the animator will have their eye UI working in the Maya viewport again.

I'm thinking this should be common logic for anyone doing facial mocap. Anyone got anything like this already?


Solution 3

The solution ended up being quite simple. The situation: motion capture data sits on the rotation nodes of the eyes, while the (non-technical) animator simultaneously needs override control of the eye gaze. Within Maya, constraints have a weight factor: a parametric 0-1 value controlling the influence of the constraint. The solution is for the animator to simply key the eyes' lookAt constraint weights to 1 when they want control over the eye gaze, key those same weights to 0 when they want the motion-captured eye gaze, and use a smooth transition of those constraint weights to mask the change-over. This is better than my original idea described above, because the original motion capture data remains in place, available as reference, so the animator can switch back and forth if need be.
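
For example, something along these lines keys the weights (a minimal sketch: the constraint node names and frame numbers are hypothetical, and it assumes the lookAt is an aim constraint on each eye; the weight attribute's name is queried rather than hard-coded):

    // Hypothetical constraint node names -- substitute the rig's actual lookAt constraints.
    string $constraints[] = {"leftEye_aimConstraint1", "rightEye_aimConstraint1"};

    string $con;
    for ($con in $constraints)
    {
        // Each constraint carries one weight attribute per target; look up its alias.
        string $weights[] = `aimConstraint -q -weightAliasList $con`;
        string $attr = $con + "." + $weights[0];

        // Mocap drives the eyes up to frame 100; the rig's lookAt takes over by frame 110.
        setKeyframe -time 100 -value 0 $attr;
        setKeyframe -time 110 -value 1 $attr;

        // Flatten the tangents so the 0-to-1 blend masks the hand-off smoothly.
        keyTangent -time "100:110" -inTangentType flat -outTangentType flat $attr;
    }

Because nothing is deleted, the animator can key the weights back to 0 at any point to compare against the raw capture.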

OTHER TIPS

How good is the eye tracking in the mocap? There may be issues if the targets are far away: depending on the sampling of the data, you may get 'crazy eyes' which seem not to converge, or jumpy data. If that's the case you may need to junk the eye data altogether, or smooth it heavily before retargeting.

To find the convergence of the two eyes, you can try this (like @julian, I'm using locators, etc., since doing all the math in MEL would be irritating).

1) constrain a locator to one eye so that one axis is oriented along the look vector and the other lies in the plane of the second eye. Let's say the eye aims down Z and the second eye is in the XZ plane.

2) make a second locator, parented to the first, and constrained to the second eye in the same way: pointing down Z, with the first eye in the XZ plane

3) the local Y rotation of the second locator is the angle of convergence between the two eyes.


4) Figure out the focal distance using the law of sines and a cheat for the offset of the second eye relative to the first. The local X distance of the second eye is one leg of a right triangle. The angles of the triangle are the convergence angle from #3 and 90 degrees minus the convergence angle. In other words (a MEL sketch of the whole setup follows step 5):

     focal distance               eye_locator2.tx
-------------------------   =   --------------------
sin(90 - eye_locator2.ry)       sin(eye_locator2.ry)

so algebraically:

focal distance = eye_locator2.tx * sin(90 - eye_locator2.ry) / sin(eye_locator2.ry)

You'll have to subtract the local Z of eye2, since the triangle we're solving is shifted backwards or forwards by that much:

  focal distance = (eye_locator2.tx * sin(90 - eye_locator2.ry) / sin(eye_locator2.ry)) - eye_locator2.tz

5) Position the target along the local Z direction of the eye locator at the distance derived above. It sounds like the actual control uses two look targets that can be moved apart to avoid cross-eyes; it's a judgement call how much to use that versus the actual convergence distance. For lots of real-world data the convergence may be far too distant for animator convenience: a target 30 meters away is pretty impractical to work with, but it might be simulated with a target 10 meters away and a bigger spread. Unfortunately there's no empirical answer for that one; it's a judgement call.
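
Here is a minimal MEL sketch of steps 1-5. The names are assumptions, not from the original post: the eye transforms are "leftEye" and "rightEye", each eye looks down its local +Z, and the gaze control is a free transform named "lookAtTarget".

    // Aim helpers: a point one unit down each eye's look axis, giving the
    // locators below something to aim at.
    string $aimL[] = `spaceLocator`;
    parent -relative $aimL[0] leftEye;
    setAttr ($aimL[0] + ".translateZ") 1;
    string $aimR[] = `spaceLocator`;
    parent -relative $aimR[0] rightEye;
    setAttr ($aimR[0] + ".translateZ") 1;

    // 1) eye_locator1 sits on the first eye, +Z down its look vector,
    //    with +X held toward the second eye (second eye in its XZ plane).
    string $loc1[] = `spaceLocator -name "eye_locator1"`;
    pointConstraint leftEye $loc1[0];
    aimConstraint -aimVector 0 0 1 -upVector 1 0 0
                  -worldUpType "object" -worldUpObject rightEye
                  $aimL[0] $loc1[0];

    // 2) eye_locator2, parented under the first, sits on the second eye,
    //    +Z down that eye's look vector, +X held toward the first eye.
    string $loc2[] = `spaceLocator -name "eye_locator2"`;
    parent $loc2[0] $loc1[0];
    pointConstraint rightEye $loc2[0];
    aimConstraint -aimVector 0 0 1 -upVector 1 0 0
                  -worldUpType "object" -worldUpObject leftEye
                  $aimR[0] $loc2[0];

    // 3) convergence angle and offsets, read in eye_locator1's space
    float $ry = `getAttr ($loc2[0] + ".rotateY")`;    // eye_locator2.ry
    float $tx = `getAttr ($loc2[0] + ".translateX")`; // eye_locator2.tx
    float $tz = `getAttr ($loc2[0] + ".translateZ")`; // eye_locator2.tz

    // 4) law of sines, as above; MEL's sin() takes radians
    float $focal = ($tx * sin(deg_to_rad(90 - $ry)) / sin(deg_to_rad($ry))) - $tz;

    // 5) park a temporary null $focal units down eye_locator1's local Z,
    //    read its world position, and move the gaze target there.
    string $tmp[] = `spaceLocator`;
    parent -relative $tmp[0] $loc1[0];
    setAttr ($tmp[0] + ".translateZ") $focal;
    float $pos[] = `xform -q -worldSpace -translation $tmp[0]`;
    xform -worldSpace -translation $pos[0] $pos[1] $pos[2] lookAtTarget;

    // clean up: deleting eye_locator1 also removes its children and constraints
    delete $aimL[0] $aimR[0] $loc1[0];

This evaluates at the current frame only; to cover a whole capture, run it per frame (or key/bake lookAtTarget over the playback range) before deleting the helper locators.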

I don't have this script, but it would be fairly simple. Can you provide an example Maya scene? You don't need any math. Here's how you could go about it:

Assume the axis pointing through the pupil is positive X, and focal length is 10 units.

  1. Create 2 locators. Parent one to each eye. Set their translations to (10, 0, 0).
  2. Create 2 more locators in worldspace. Point constrain them to the others.
  3. Create a plusMinusAverage node.
  4. Connect the worldspace locators' translations to plusMinusAverage1's input3D[0] and input3D[1], and set the node's operation to Average.
  5. Create another locator (the lookAt).
  6. Connect the output of plusMinusAverage1 to the translation of the lookAt locator.
  7. Bake the translation of the lookAt locator.
  8. Delete the other 4 locators.
  9. Aim constrain the eyes' X axes to the lookAt.

This can all be done in a script using commands: spaceLocator, createNode, connectAttr, setAttr, bakeSimulation, pointConstraint, aimConstraint, delete.
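
A minimal sketch along those lines, assuming (not stated in the original) that the eye transforms are named "leftEye" and "rightEye", with +X through the pupil and a 10-unit focal length as above:

    string $eyes[] = {"leftEye", "rightEye"};
    string $aimLocs[];    // step 1: points 10 units down each eye's X axis
    string $worldLocs[];  // step 2: world-space copies of those points

    string $pma = `createNode plusMinusAverage`;   // step 3
    setAttr ($pma + ".operation") 3;               // 3 = Average, so the lookAt lands between the two aim points

    int $i;
    for ($i = 0; $i < 2; $i++)
    {
        // step 1
        string $a[] = `spaceLocator`;
        $aimLocs[$i] = $a[0];
        parent $aimLocs[$i] $eyes[$i];
        setAttr ($aimLocs[$i] + ".translateX") 10;
        setAttr ($aimLocs[$i] + ".translateY") 0;
        setAttr ($aimLocs[$i] + ".translateZ") 0;

        // step 2
        string $w[] = `spaceLocator`;
        $worldLocs[$i] = $w[0];
        pointConstraint $aimLocs[$i] $worldLocs[$i];

        // step 4: feed both world-space points into the average
        connectAttr ($worldLocs[$i] + ".translate") ($pma + ".input3D[" + $i + "]");
    }

    // steps 5 and 6
    string $l[] = `spaceLocator -name "lookAtTarget"`;
    string $lookAt = $l[0];
    connectAttr ($pma + ".output3D") ($lookAt + ".translate");

    // step 7: bake over the playback range so the lookAt owns its own curves
    float $start = `playbackOptions -q -minTime`;
    float $end   = `playbackOptions -q -maxTime`;
    bakeSimulation -time ($start + ":" + $end)
                   -attribute "tx" -attribute "ty" -attribute "tz" $lookAt;

    // step 8
    delete $aimLocs[0] $aimLocs[1] $worldLocs[0] $worldLocs[1] $pma;

    // step 9: the baked lookAt now drives the eyes through aim constraints
    for ($i = 0; $i < 2; $i++)
    {
        aimConstraint -aimVector 1 0 0 $lookAt $eyes[$i];
    }

As noted above, you may still want to smooth the baked lookAt curves if the raw eye data is jumpy.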

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow