
Jonah Reinhart, Rigger
jonahrnhrt@gmail.com
Zipper Lips Script
Part 1
The first section of the zipper lips script creates the joints required to drive the mesh. A zipper lip rig has a joint segment for each vertex on the mouth. Each segment runs from the jaw pivot to the vertex position, and the joints are oriented to world space.
​
Unfortunately the MEL "ls" command returns the vertices in order of their vertex number (which has no relationship to their position in world space), so they need to be reordered by their world-space position along the X-axis. This is difficult to do in MEL but can be accomplished easily using zipped lists in Python. This lip (or rather jaw) rig cannot be built without scripting.
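As a rough sketch of that reordering step, here is the zipped-list idea in plain Python. The vertex names and positions are hard-coded stand-ins for what Maya's `ls` and `xform` commands would actually return:

```python
# Sketch of reordering vertices by world-space X position.
# In Maya the positions would come from something like
# cmds.xform(vert, q=True, ws=True, t=True); here they are stand-ins.
verts = ["mouth.vtx[0]", "mouth.vtx[1]", "mouth.vtx[2]", "mouth.vtx[3]"]
positions = [(-1.2, 0.0, 0.1), (0.8, 0.1, 0.1), (-0.3, 0.05, 0.1), (1.5, 0.0, 0.1)]

# Zip the names with their positions, then sort the pairs by the X component.
pairs = sorted(zip(verts, positions), key=lambda pair: pair[1][0])
sorted_verts = [name for name, pos in pairs]

print(sorted_verts)  # ordered from farthest left to farthest right
```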
Part 2
The second section of the script creates the core node network that allows the joint chains to zip and seal by blending their rotations. This needed to be relatively easy to understand, so rather than creating a complex node network for each joint, I created one node network and used frameCache nodes to get a unique output for each joint.

The first part of the node network is an animCurve whose shape determines how much of the jaw's rotation is inherited by each joint. The animCurve has keys at (0, 0.5), (number of verts / 2, 1), and (number of verts, 0.5). So the first joint on the lower lip (the one farthest to the right) inherits 50% of the jaw rotation, the joint in the middle inherits 100%, and the joint farthest to the left inherits 50%.
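That curve can be sketched as a piecewise-linear function (the linear interpolation between keys is an assumption; Maya's animCurve tangents could shape it differently):

```python
def jaw_weight(vert_number, num_verts):
    """Piecewise-linear stand-in for the animCurve with keys at
    (0, 0.5), (num_verts / 2, 1.0), and (num_verts, 0.5)."""
    mid = num_verts / 2.0
    if vert_number <= mid:
        # Rise from 0.5 at the corner to 1.0 at the middle of the lip.
        return 0.5 + 0.5 * (vert_number / mid)
    # Fall back down to 0.5 at the far corner.
    return 0.5 + 0.5 * ((num_verts - vert_number) / mid)

print(jaw_weight(0, 10))   # 0.5 -> corner joint inherits half the jaw rotation
print(jaw_weight(5, 10))   # 1.0 -> middle joint inherits the full rotation
```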
​
For the lower lip this curve is inverted by plugging it into a reverse node.
​
The value from the curve and the value from the reverse node are then multiplied by the XYZ rotations of the jaw, giving us the rotation values of the fully open jaw for the upper and lower lip.
​
These rotations are blended together to create the target rotation for zipping and sealing. Then two more blendColors nodes blend between the rotations for the open upper lip and the seal rotation, and between the rotations for the open lower lip and the seal rotation.
​
There are cases when the jaw is rotated up rather than down, in which case the upper lip and lower lip should both inherit all of the X-axis rotation of the jaw, so that the upper lip looks like it is being pushed up by the lower lip. To accomplish this we have a condition node that outputs the lower lip's X rotation instead of the upper lip's X rotation when the jaw is rotated up.
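The condition node's behavior amounts to a simple branch. This sketch assumes a threshold of 0 and that a negative X rotation means the jaw is rotated up; the actual rig's sign convention may differ:

```python
def upper_lip_rx(jaw_rx, upper_open_rx, lower_open_rx):
    """Stand-in for the condition node: when the jaw rotates up
    (assumed here to be a negative X rotation), the upper lip follows
    the lower lip's X rotation so it looks pushed up by the lower lip."""
    if jaw_rx < 0.0:           # jaw rotated up
        return lower_open_rx   # upper lip follows the lower lip
    return upper_open_rx       # normal open-jaw behavior

print(upper_lip_rx(-5.0, 2.0, -4.0))  # jaw up: follows the lower lip
```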
​
​
Part 3
Now that we have these blendColors nodes set up, we need to drive the blender attributes on them. The first blendColors node is simple: it determines the weighting between the upper and lower lip for the seal position. When the blender is 0.5 the lips seal in the middle; when it is 0.75 they seal closer to the upper lip. This blender attribute is driven by a single attribute (I named mine "Seal Bias") with a range of 0 to 1.
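A blendColors node is just a linear interpolation (output = color1 * blender + color2 * (1 - blender)), so the seal bias works out to:

```python
def seal_rotation(upper_rot, lower_rot, seal_bias):
    """blendColors-style linear blend. With seal_bias = 0.5 the seal
    target sits halfway between the two lip rotations; with 0.75 it
    sits closer to the upper lip."""
    return upper_rot * seal_bias + lower_rot * (1.0 - seal_bias)

print(seal_rotation(10.0, -10.0, 0.5))   # 0.0 -> seals in the middle
print(seal_rotation(10.0, -10.0, 0.75))  # 5.0 -> seals closer to the upper lip
```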
​
The other two blend nodes are driven by the same secondary node network, but this one is more complicated because it has to add up the seal, the left and right zip, and the all zip values.
​
To make this part easier to understand, I am going to break the node network up into three networks: the left zip, the right zip, and the all zip.
​
​
Right Zip
So for the right zipping, we want all of the pairs of joints to blend between the open jaw rotation and the sealed jaw rotation at different rates. Basically we want to set up a series of keyframe animations that look like this.
Each joint pair would have its own curve, and the curves would be set up to go from 0 (0% sealed) to 1 (100% sealed) at different rates, overlapping nicely so that some of them are partially sealed at the same time. However, setting up this many curves (one per joint pair) would be pretty tedious, and since the curves are fairly uniform we can generate them with a single node network.

A remapValue node can basically be thought of as an animCurve that can use inputs as the time and value for its keyframes, so we are going to use one to get the result we want. All we need to do is determine the math that gives the keyframe positions for each vert.
​
The correct value for the "0 key frame" is:
​
value_0 = (1 - overlap)/(number of vertices) * (vertex number)
​
The correct value for the "1 key frame" is:
​
value_1 = value_0 + overlap
​
Here is a visual explanation of why this is the math being done. In the node network it is pretty clear how the nodes perform these calculations. The only confusing part is "test_R_zip_curve": this curve is linear and has keyframes at (0, 0) and (number of vertices, number of vertices), so that the node network can be run once for each vert. (If you do not know how frameCache nodes work, this explanation probably will not make sense, so I would suggest familiarizing yourself with them before continuing.)
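The remapValue behavior for the right zip can be sketched as a clamped ramp per vertex. The zip parameter t and the linear ramp between the two keys are assumptions about how remapValue evaluates:

```python
def right_zip_weight(t, vert_number, num_verts, overlap):
    """Per-vertex seal weight for the right zip.
    value_0 = (1 - overlap) / num_verts * vert_number   (the "0 keyframe")
    value_1 = value_0 + overlap                         (the "1 keyframe")
    Between the two keys the weight ramps linearly from 0 to 1, like a
    remapValue node with those key positions."""
    value_0 = (1.0 - overlap) / num_verts * vert_number
    if t <= value_0:
        return 0.0
    if t >= value_0 + overlap:
        return 1.0
    return (t - value_0) / overlap

# Partway through the zip, the vertex at the zipping end is fully sealed
# while vertices farther along have not started sealing yet.
print([round(right_zip_weight(0.2, v, 10, 0.2), 2) for v in range(11)])
```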
​
​
​

Left Zip
The left zip node network is almost identical to the right zip node network. The only difference is that the vertices on the left side of the mouth (the character's left) should seal first, so the math for the "keyframes" is:
​
The correct value for the "0 key frame" is:
​
value_0 = (1 - overlap)/(number of vertices) * (number of vertices - vertex number)
​
The correct value for the "1 key frame" is:
​
value_1 = value_0 + overlap
​
This small difference in the required calculation is the only thing keeping the two node networks from being identical: the left zip adds a plusMinusAverage node that subtracts the vertex number from the number of vertices.
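Sketching the left zip the same way, only the vertex index changes; the (n - vert_number) term is what the extra plusMinusAverage node computes:

```python
def left_zip_value_0(vert_number, num_verts, overlap):
    """The "0 keyframe" for the left zip: identical to the right zip
    formula except the vertex index is flipped to
    (num_verts - vert_number), courtesy of the plusMinusAverage node."""
    return (1.0 - overlap) / num_verts * (num_verts - vert_number)

# The last vertex now has value_0 = 0, so it is the first to seal.
print(left_zip_value_0(10, 10, 0.2))           # 0.0
print(round(left_zip_value_0(0, 10, 0.2), 2))  # 0.8
```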

All Zip
The all zip node network is also nearly identical to the other networks; the only difference is a different animCurve node. Rather than having keyframes at (0, 0) and (number of vertices, number of vertices), it has keyframes at:
​
(0,0)
(number of vertices /2, number of vertices)
(number of vertices, 0)
​
This way the node network gives the same output for the first and last vert, and for all the mirrored pairs of verts (again, understanding why this happens requires an understanding of the frameCache node).
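That triangular curve can be sketched as follows (linear interpolation between the keys is again an assumption):

```python
def all_zip_curve(vert_number, num_verts):
    """Piecewise-linear stand-in for the all-zip animCurve with keys at
    (0, 0), (num_verts / 2, num_verts), and (num_verts, 0). Mirrored
    vertex pairs map to the same value, so both corners zip inward
    toward the middle together."""
    mid = num_verts / 2.0
    if vert_number <= mid:
        return 2.0 * vert_number
    return 2.0 * (num_verts - vert_number)

# The first and last verts get the same value, as do all mirrored pairs.
print(all_zip_curve(0, 10), all_zip_curve(10, 10))  # 0.0 0.0
print(all_zip_curve(2, 10), all_zip_curve(8, 10))   # 4.0 4.0
```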

The last bit
The last step is to add the left zip, right zip, all zip, and seal values to determine how much the lips have blended together. Below is that simple node network. The clamp node is needed because feeding a value greater than 1 into the blendColors nodes will mess things up, so we clamp the sum to a maximum of 1 (the sum could exceed 1 if you set seal to 1 and left zip and right zip to 1). The blendColors nodes at the right side of the image are the same ones from Part 2 (we have finally set up the input for their blender attributes).
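The sum-and-clamp step amounts to:

```python
def total_blend(seal, left_zip, right_zip, all_zip):
    """Add the four contributions and clamp to [0, 1], like the clamp
    node, so the downstream blendColors nodes never see a value > 1."""
    total = seal + left_zip + right_zip + all_zip
    return max(0.0, min(1.0, total))

print(total_blend(1.0, 1.0, 1.0, 0.0))    # 1.0, not 3.0
print(total_blend(0.25, 0.25, 0.0, 0.0))  # 0.5
```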

Now the last step is visually the scariest, but only because of the number of nodes; the process is very simple. The "test_upper_lip_blender", "test_lower_lip_blender", and "test_collide_condition" nodes will be plugged into the joints. But they will not be plugged in directly. Instead they will be plugged into a frameCache node, which allows us to run the node network for all the different values of "vert number". This is the reason we set up animCurves throughout the process. The varyTime value for each frameCache node is the vert number the joint is associated with.
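Conceptually, the frameCache nodes just sample the one shared network at a different "time" per joint. In this sketch the whole network is reduced to a single hypothetical function of the vert number:

```python
def shared_network(vert_number, num_verts=10):
    """Toy stand-in for the shared node network: here it is just the
    jaw-weight curve from earlier, evaluated at "time" = vert number,
    which is exactly what a frameCache node samples via varyTime."""
    mid = num_verts / 2.0
    return 0.5 + 0.5 * (1.0 - abs(vert_number - mid) / mid)

# One frameCache per joint: each samples the same network at its own
# varyTime (the joint's vert number), giving a unique per-joint value.
per_joint = {v: shared_network(v) for v in range(11)}
print(per_joint[0], per_joint[5], per_joint[10])  # 0.5 1.0 0.5
```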


On the left is the node network for a single pair of joints, and it makes sense. On the right you can see what the full node network looks like when it is set up for every joint. If you didn't understand why before, I hope by now you see why this cannot be rigged without the aid of scripts.