Starting Point for Physics-Based Character Animation

Character Physics based on Newtonian Particle Simulation with Angular Constraints

David Rosen 2007


Abstract

Rigid bodies and flexible systems like rope or cloth can easily be simulated using Newtonian particles with distance constraints. However, this method has some limitations when used for character physics, stemming from the fact that each constraint is defined by two points and thus has no complete orientation. Without a complete orientation, it is difficult to simulate rotational friction and joint rotation limits, or to apply the simulation to 3D models. I addressed these issues by allowing sets of three connected points to be defined as triangles, and by implementing rotational constraints between connected triangles.


Introduction

To combine motivated character animation with believable physics, we need a physics system that is fast, intuitive, and easy to control. Most real-time physics engines (Havok, Newton, ODE, etc.) are designed to simulate the interactions of large numbers of discrete rigid bodies, such as a bowling ball hitting pins, or a car smashing into a stack of bricks. It is possible to implement constraints between rigid bodies to construct more complex articulated bodies like limp human 'ragdolls' in this kind of engine, but it is difficult to make such articulated bodies stable and efficient, and more difficult still to combine them with motivated animation (e.g. walking or climbing).


An alternate method of physics simulation (Jakobsen, http://www.teknikus.dk/tj/gdc2001.htm) is based on Newtonian particle physics and Verlet integration: every timestep, we loop through each particle, take the change in position since the previous frame, and add this change again to determine the next position. That is, position_next = position + (position - position_previous). This kind of integration enforces Newton's first law (an object in motion remains in motion unless an external force is applied). In this kind of system, we can add forces by directly moving the particles. For example, we can create a constraint that particles A and B must be 1 unit apart, and then each timestep we check if this is true; if not, we move them closer together or farther apart as necessary. Using just particles and linear constraints like this, we can simulate a rigid body by defining all of its points and adding distance constraints between them. This becomes inefficient as the rigid body grows more complicated, but the method is very well suited to articulated bodies. For example, without adding any more code, we can create a rope, a piece of cloth, a flail, or a very basic ragdoll.
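
For instance, a rope might be assembled like this, using the Skeleton, Point, and Constraint classes defined in the Implementation section below (the AddPoint and AddConstraint helpers are hypothetical, shown only for illustration):

// Build a hanging rope: ten particles, each constrained to stay
// exactly one unit from its neighbor.
Skeleton rope;
Point *prev = rope.AddPoint(XYZ(0.0f, 10.0f, 0.0f));      // hypothetical helper
for (int i = 1; i < 10; i++) {
    Point *next = rope.AddPoint(XYZ(0.0f, 10.0f - i, 0.0f));
    rope.AddConstraint(prev, next, 1.0f, 1.0f);           // min == max: rigid
    prev = next;
}
// Calling rope.Update(false) each timestep then handles gravity,
// Verlet integration, and constraint enforcement automatically.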


This ragdoll has several problems. First of all, it does not collide with itself, and it has no rotation constraints. For example, the leg could go up inside the torso instead of down, or the lower arm could rotate so the hand is inside the shoulder. The easiest way to address this would be to add min/max distance constraints, which enforce that two points are at least x units apart and no more than y. In this way we can make sure that the leg does not get too close to the upper body, or that the wrist cannot get too close to the shoulder. However, this addresses the symptoms rather than the underlying problem: while it prevents many of the most obvious errors, it does not correct them in a physically realistic way. In addition, each body part does not have a complete orientation. If we have a 'lower leg' model, we can determine where its center should be (halfway between the foot and knee points) and where its vertical axis should be (parallel to the constraint between those two points), but its forward and right axes are undefined. We can make a pretty good guess based on parts that are completely defined, like the torso, but this also ignores the underlying problem.


We can start to address both problems by defining each body part as a triangle: a set of three points connected with distance constraints. This gives us a complete orientation for rendering, and also lets us differentiate between 'hinge' joints and 'ball' joints. Two triangles that share two points will only be able to rotate around the axis defined by the shared points, and two triangles that share only one point can rotate freely. We don't have to do anything to support this distinction; it is already implicitly enforced by the distance constraints and the Verlet integration. However, we still need to constrain rotation so that, for example, elbows cannot bend backwards and legs cannot penetrate the torso. Since we have fully defined orientations for each part, we can define rotation/orientation constraints. Given two triangles, we can store a target orientation, and then check if the current orientation is close enough to the target (for example, no more than 30 degrees off). If not, we change the relative orientation to be within the threshold.
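
To make the hinge/ball distinction concrete, here is a sketch (AddTriangle is a hypothetical helper; a through i are existing Point pointers):

// Hinge joint: the triangles share the edge b-c, so the knee can
// only rotate around that axis.
Triangle *upper_leg = skeleton.AddTriangle(a, b, c);
Triangle *lower_leg = skeleton.AddTriangle(b, c, d);

// Ball joint: the triangles share only point e, so the shoulder
// rotates freely until a rotational constraint limits it.
Triangle *torso = skeleton.AddTriangle(e, f, g);
Triangle *arm   = skeleton.AddTriangle(e, h, i);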


With rotational constraints we now have a physically realistic starting point for motivated animation. The next step will be to calculate a target position for each point in the system, convert these target positions into target relative orientations, and use the rotation constraints to move the points towards the correct positions. For example, if we want the arm to reach towards a specific point in space, we can calculate the target pose using well-documented inverse-kinematics techniques, and then use our rotational constraints to try to reach that pose along a physically realistic path (or fail to reach it if it is not physically possible). This kind of approach would also be possible using more common rigid body physics techniques, but this way is intuitive and efficient, and combines well with traditional animation techniques.


Implementation

I created a physics engine and editor that combines particle physics and Verlet integration with triangle orientation constraints. The editor is not the focus of this paper, but briefly, it lets you place points and add constraints between them, and define triangles and place constraints between them as well. It takes about two minutes to create the human ragdoll described above, and much less time for simpler systems. Here is a video of the editor in use.


The highest-level data structure for the physics simulation is the "Skeleton" class, which stores dynamic arrays of points, constraints, and triangles. The Point class stores a position and previous position, plus an array of new_positions used to average the effects of multiple constraints. The linear Constraint class stores pointers to the connected Points, and the minimum and maximum length. The Triangle class stores pointers to the three vertex points, the orientation matrix, and an array of rotational constraints. The rotational TriangleConstraint class stores pointers to both Triangles involved, as well as the middle orientation (the relative orientation between the two triangles, captured when the constraint is defined) and the maximum rotation allowed (in degrees). All data structures shown here are simplified to remove irrelevant editor and rendering code. Some variable and function names are changed for clarity.


class Skeleton {
    std::vector<Point*> points;
    std::vector<Constraint*> constraints;
    std::vector<Triangle*> triangles;

    void Update(bool paused);
    void ApplyLinearConstraints();
    void ApplyRotationalConstraints();
};


class Point {
    XYZ position;
    XYZ old_position;
    XYZ next_old_position;
    std::vector<XYZ> new_positions;   // corrections queued by constraints, averaged each step
    int num_new_positions;

    void add_new_position(const XYZ &p);   // queues a correction for averaging
};

class Constraint {
    ConstraintType type;   // _rigid or _minmax
    Point *end[2];
    float length[2];       // min length and max length (equal for _rigid)

    void Apply();
};


class Triangle {
    Point *points[3];
    MATRIX4X4 orientation;
    std::vector<TriangleConstraint> constraints;

    void calculateOrientation();   // rebuilds orientation from the three points
};


class TriangleConstraint {
    Triangle *first;
    Triangle *second;

    MATRIX4X4 middle_orientation;   // relative orientation between the triangles, captured in init()
    float max_angle;                // maximum rotation allowed from middle_orientation, in degrees

    void init(Triangle *p_first, Triangle *p_second);
    void enforce();
};


Physics Loop

Skeleton::Update()


First, each particle's position is incremented by the difference between its current position and its previous position. The position is also shifted downwards by gravity.

next_old_position = position;            // this position becomes next frame's old_position
position += position - old_position;     // Verlet integration
position += gravity;                     // gravity == (0, -0.01, 0)


Next the engine checks if each particle is below the ground plane. If so, its y-coordinate is set to that of the ground plane.

position.y = max(position.y, 0.0f);      // ground plane collision


Now it loops through all of the linear and rotational constraints and averages their effects on each particle. I will describe this in more detail in the next section.

ApplyLinearConstraints();

ApplyRotationalConstraints();


Finally it updates the old_position value for the next frame.

old_position = next_old_position;
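

Assembled from the snippets above, the whole update might look like this sketch (assuming XYZ supports the arithmetic operators used):

void Skeleton::Update(bool paused) {
    if (paused) return;
    XYZ gravity(0.0f, -0.01f, 0.0f);
    for (unsigned int i = 0; i < points.size(); i++) {
        Point &p = *points[i];
        p.next_old_position = p.position;            // next frame's old_position
        p.position += p.position - p.old_position;   // Verlet integration
        p.position += gravity;
        p.position.y = max(p.position.y, 0.0f);      // ground plane collision
    }
    ApplyLinearConstraints();
    ApplyRotationalConstraints();
    for (unsigned int i = 0; i < points.size(); i++) {
        points[i]->old_position = points[i]->next_old_position;
    }
}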


Linear Constraints

ApplyLinearConstraints()


Here we loop through all of the linear constraints (rigid distance or min/max distance) and enforce them. The code for enforcing a rigid distance constraint is very straightforward:


// end[0] and end[1] are pointers to the points at each end of the constraint;
// for a _rigid constraint, length[0] == length[1]
XYZ dir = Normalize(end[1]->position - end[0]->position);
XYZ avg = (end[1]->position + end[0]->position) / 2;
end[0]->add_new_position(avg - dir * (length[0] / 2));
end[1]->add_new_position(avg + dir * (length[0] / 2));


Min/max constraints check if the distance is outside the allowed range before enforcing a distance in the same way. The constraints are all enforced with equal priority by keeping track of all of the changes in position that the constraints apply to each point, and averaging them together.
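
Here is a sketch of the min/max check and of the averaging step, following the class definitions above (the original code for these is not shown here; Length() is an assumed vector helper):

// Min/max: only correct when the distance is outside [length[0], length[1]].
XYZ diff = end[1]->position - end[0]->position;
float dist = Length(diff);
if (dist < length[0] || dist > length[1]) {
    float target = (dist < length[0]) ? length[0] : length[1];
    XYZ dir = diff / dist;
    XYZ avg = (end[1]->position + end[0]->position) / 2;
    end[0]->add_new_position(avg - dir * (target / 2));
    end[1]->add_new_position(avg + dir * (target / 2));
}

// In Point: once all constraints have queued their corrections,
// move to their average so every constraint gets equal priority.
if (num_new_positions > 0) {
    XYZ sum(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < num_new_positions; i++)
        sum += new_positions[i];
    position = sum / (float)num_new_positions;
    new_positions.clear();
    num_new_positions = 0;
}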


Rotational Constraints

ApplyRotationalConstraints()


Before we can work with rotations, we need to find the orientation of each triangle. We can calculate the orientation by constructing an orthonormal basis from the three points:


void Triangle::calculateOrientation()
{
    XYZ up, right, facing;
    up = Normalize(points[1]->position - points[0]->position);
    right = points[2]->position - points[0]->position;
    facing = Normalize(CrossProduct(up, right));
    right = Normalize(CrossProduct(facing, up));

    orientation.calcFromBasis(right, up, facing);
}


Now we need to know the middle orientation for each constraint. That is, we need to know what orientation we will be enforcing a maximum rotation from, and what that maximum rotational distance is. My implementation records the orientations of the two triangles when the constraint is defined, and stores the relative orientation from the second triangle to the first. The default maximum rotation is sixty degrees.


middle_orientation = second->orientation.GetInverse() * first->orientation;
max_angle = 60.0f;


Now each frame we loop through each triangle constraint and find the difference between the current orientation and the middle_orientation:


MATRIX4X4 rot_offset_matrix = second->orientation * middle_orientation * first->orientation.GetInverse();

Next we convert the rot_offset_matrix to axis/angle form, and compare the angle to the maximum angle allowed. If it is less, then we don't have to do anything, but if it is greater, we subtract the current angle from the maximum angle to get the correction angle.


axis_angle rot_offset_aa = MatrixToAxisAngle(rot_offset_matrix);

if (Abs(rot_offset_aa.angle) > max_angle) {
    rot_offset_aa.angle = max_angle - rot_offset_aa.angle;   // correction back toward the limit
}


Now we have a rotation that will rotate one of the triangles to satisfy our rotation constraint. If we want softer constraints, we can scale the corrective rotation by a constant between 0 and 1 here. We now have to make sure that we are conserving linear and angular momentum.


To conserve angular momentum, we first apply a very small test rotation around the corrective rotation axis to determine how far the vertices of each triangle move, and use the sum of the movement distances of each triangle's vertices as that triangle's relative inertia around this axis. We then determine the rotation for each triangle by multiplying the total rotation by the relative inertia of the other triangle, and apply the rotations.


Finally, to conserve linear momentum, we find the net change in position of all points caused by the rotation, and push every point an equal distance in the opposite direction so that the center of mass is the same as it was before we enforced the rotational constraint.
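
Putting these steps together, TriangleConstraint::enforce() might look like the following sketch. RotateAroundAxis, SharedPointCentroid, and Length are assumed helpers, the pivot choice and sign convention are glossed over, and shared points are double-counted for brevity; this is an illustration of the technique, not the original code.

void TriangleConstraint::enforce() {
    // Offset of the current relative orientation from the middle orientation.
    MATRIX4X4 rot_offset_matrix =
        second->orientation * middle_orientation * first->orientation.GetInverse();
    axis_angle aa = MatrixToAxisAngle(rot_offset_matrix);
    if (Abs(aa.angle) <= max_angle) return;

    float correction = max_angle - aa.angle;           // rotation needed to get back in range
    XYZ pivot = SharedPointCentroid(first, second);    // axis passes through the joint

    // Estimate each triangle's inertia about the axis with a tiny test rotation.
    Triangle *tris[2] = { first, second };
    float inertia[2] = { 0.0f, 0.0f };
    for (int t = 0; t < 2; t++)
        for (int v = 0; v < 3; v++) {
            XYZ p = tris[t]->points[v]->position - pivot;
            inertia[t] += Length(RotateAroundAxis(p, aa.axis, 0.001f) - p);
        }

    // Each triangle rotates by the correction scaled by the OTHER
    // triangle's relative inertia, in opposite directions.
    float total = inertia[0] + inertia[1];
    float angles[2] = {  correction * inertia[1] / total,
                        -correction * inertia[0] / total };

    XYZ drift(0.0f, 0.0f, 0.0f);
    for (int t = 0; t < 2; t++)
        for (int v = 0; v < 3; v++) {
            XYZ &p = tris[t]->points[v]->position;
            XYZ rotated = pivot + RotateAroundAxis(p - pivot, aa.axis, angles[t]);
            drift += rotated - p;
            p = rotated;
        }

    // Conserve linear momentum: shift all points back so the center
    // of mass is unchanged by the correction.
    drift /= 6.0f;   // six vertices moved
    for (int t = 0; t < 2; t++)
        for (int v = 0; v < 3; v++)
            tris[t]->points[v]->position -= drift;
}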


Extensions


Angular friction is important for ragdoll simulation, allowing ragdolls to reach stable resting states realistically (and quickly). For example, without angular friction, an arm hanging over an edge could unrealistically keep waving back and forth forever like a pendulum. Joint rotation in the human body is damped not only by actual bone friction, but also by all of the connective tissue and muscle surrounding the joint, so it is essential to include some kind of angular damping in any human body simulation. I implemented angular dynamic friction as an extension of the rotation constraint, by adding a soft (stiffness < 100%) rotational constraint with the target orientation set to the relative orientation in the previous timestep. I added static friction with a threshold rotation magnitude below which the friction stiffness is set to 100%. This was easy to do, made the ragdoll simulation look more realistic, and let it reach resting states much more quickly (allowing the simulation to be suspended until another force acts on it). Here is a video of a constraint without friction and one with friction.
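
As a rough sketch of the friction constraint (previous_relative, static_threshold, and friction_stiffness are illustrative names, not from the original code):

// Pull the joint back toward last frame's relative orientation.
MATRIX4X4 offset = second->orientation * previous_relative *
                   first->orientation.GetInverse();
axis_angle aa = MatrixToAxisAngle(offset);

// Dynamic friction: only partially cancel the rotation since last
// frame. Static friction: below the threshold, cancel it completely
// so the joint can come to rest.
float stiffness = (Abs(aa.angle) < static_threshold) ? 1.0f : friction_stiffness;
aa.angle *= -stiffness;

// Apply aa with the same inertia-weighted, momentum-conserving
// rotation used by the hard rotational constraints, then store the
// new relative orientation as previous_relative for the next step.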


Conclusion and Future Work


Combining Newtonian particles with linear and angular constraints results in a powerful and intuitive physics engine that is ideally suited to working with articulated bodies like human ragdolls. The rotational constraints provide a logical entry point for combining ragdoll physics simulation with motivated animation in a realistic fashion. A reasonable next step would be to store target poses and use the rotational constraints to reach them on command, and then to create procedural target poses; for example, a balanced standing pose in which the center of gravity stays above the line connecting the two feet. We could then work on more complex target poses, such as moving the feet to recover from more severely unbalanced conditions (like a hard push), and from there start to work on normal movement such as walking.


To use this system in a more complex environment, it would also be necessary to add more complex collision detection and response. One way to start addressing this would be to add convex hulls around each body part, and translate forces applied to these hulls to the underlying triangle, and vice versa. In this way, collisions could be handled in a similar way to conventional rigid body simulations: a very well-documented problem with a variety of solutions.


This system also does not support skinning more detailed meshes over the underlying skeleton, but since the triangles have fully defined orientations, this can be done with standard matrix palette skinning. We can 'bind' the vertices of a display model to one pose of the skeleton, and then transform the vertices each frame to match the new pose.
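
A minimal sketch of that binding, assuming single-triangle influences and treating each triangle's centroid as its origin (names are illustrative):

// At bind time: express the vertex in the triangle's local frame,
// using the triangle's bind-pose positions and orientation.
XYZ bind_center = (tri->points[0]->position +
                   tri->points[1]->position +
                   tri->points[2]->position) / 3;
XYZ local = tri->orientation.GetInverse() * (vertex_position - bind_center);

// Each frame: transform it back out with the current orientation
// and the current centroid.
XYZ center = (tri->points[0]->position +
              tri->points[1]->position +
              tri->points[2]->position) / 3;
XYZ skinned = tri->orientation * local + center;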