# Kalman filter for a Differential Drive Robot

## Introduction to Kalman Filters

Kalman filters use Gaussian distributions for the state belief, the action error, and the sensor error. Gaussians have the convenient property of remaining Gaussian under a number of operations:

• A Gaussian can be translated and remain a Gaussian.
• A Gaussian can be linearly transformed and remain a Gaussian.
• A Gaussian multiplied by a Gaussian results in a Gaussian.
• A Gaussian convolved with a Gaussian results in a Gaussian.

The Kalman filter uses closed form solutions to these operations to update a Gaussian belief using actions and sensor readings. However, this also limits the sensor and action models to linear systems. As we will see, this is a pretty serious limitation not shared by the other Bayes filters in this tutorial series.
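These closed-form operations are easiest to see in one dimension. The following sketch (in Python, purely for illustration; the tutorial's code is C#) shows the two operations the filter relies on: convolving two Gaussians adds their means and variances, while multiplying them produces a variance-weighted average with a smaller variance.

```python
def convolve(mean_a, var_a, mean_b, var_b):
    # Convolving two Gaussians: means and variances simply add.
    return mean_a + mean_b, var_a + var_b

def multiply(mean_a, var_a, mean_b, var_b):
    # Multiplying two Gaussians yields a Gaussian whose mean is a
    # variance-weighted average and whose variance shrinks.
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean = var * (mean_a / var_a + mean_b / var_b)
    return mean, var

m, v = convolve(2.0, 1.0, 3.0, 4.0)    # -> (5.0, 5.0)
m2, v2 = multiply(0.0, 1.0, 4.0, 1.0)  # -> (2.0, 0.5)
```

These two operations are exactly the action update and sensor update described below.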

## Action Update

The Gaussian belief is updated with an action using a process called convolution. Convolution marginalizes the action model result over the prior Gaussian belief. Given a state and an action, an action model for a Kalman filter generates a Gaussian distribution of possible states with a mean equal to the expected resulting state. Each point in the prior belief Gaussian is passed through the action model's state transition function, and the resulting Gaussians are summed, weighted by the prior belief, to form the posterior distribution. The result of convolution is a Gaussian whose covariance is the sum of the covariances of the two original Gaussians.

However, in order for the posterior to be a Gaussian, the transition function must be of the form:

xnext = A * xprev + B * u + C

where xprev is the prior state belief, u is the action performed, xnext is the posterior state belief, A and B are matrices, and C is a vector. A matrix R is also defined as part of this action model to specify the covariance of the xnext Gaussian distribution.
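In one dimension this transition is just arithmetic, which makes the action update easy to check by hand. The sketch below (in Python, for illustration only) implements the scalar version: the mean is pushed through the transition function and the action noise variance r is added to the belief variance.

```python
def predict_1d(mean, var, u, a, b, c, r):
    # Scalar version of xnext = A * xprev + B * u + C with noise variance r.
    new_mean = a * mean + b * u + c
    new_var = a * a * var + r  # variance grows: passing through a, then adding r
    return new_mean, new_var

m, v = predict_1d(1.0, 0.5, 2.0, a=1.0, b=1.0, c=0.0, r=0.5)
# with a = b = 1 and c = 0 this is just a shift: m == 3.0, v == 1.0
```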

The equations defined in the previous tutorial do not fit this form. Instead, we will convert u = [t, vL, vR] into u = [Δx, Δy, Δθ]. This means that A and B are simply identity matrices (and C is zero).
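As a reminder of what that conversion involves, here is a hypothetical Python sketch of turning wheel controls [t, vL, vR] into a pose offset [Δx, Δy, Δθ], assuming the standard differential drive kinematics (axle length L) from the previous tutorial; the function name and signature are illustrative, not the tutorial's C# API.

```python
import math

def wheel_controls_to_offset(t, v_left, v_right, theta, L):
    v = (v_left + v_right) / 2.0   # forward velocity of the robot center
    omega = (v_right - v_left) / L # angular velocity from the wheel difference
    dtheta = omega * t
    if abs(omega) < 1e-9:          # straight-line motion
        dx = v * t * math.cos(theta)
        dy = v * t * math.sin(theta)
    else:                          # motion along an arc of radius v / omega
        r = v / omega
        dx = r * (math.sin(theta + dtheta) - math.sin(theta))
        dy = -r * (math.cos(theta + dtheta) - math.cos(theta))
    return dx, dy, dtheta

dx, dy, dth = wheel_controls_to_offset(1.0, 2.0, 2.0, 0.0, 5.0)
# equal wheel speeds along +x: dx == 2.0, dy == 0.0, dth == 0.0
```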

The interface GaussianActionModel extends ActionModel to support the generation of Gaussian distributions.

The class KalmanActionModel uses the linear system defined above to implement GaussianActionModel.

The class LocalOffset defines our new action data. LocalOffsetActionModel derives from KalmanActionModel and sets up the linear system to work correctly. The error of LocalOffsetActionModel defaults to zero.

The following code will set up the Kalman filter action model:

KalmanFilter<Pose, LocalOffset, PoseObservation> filter =
    new KalmanFilter<Pose, LocalOffset, PoseObservation>();
filter.ActionModel = new LocalOffsetActionModel();
filter.ActionModel.R = new RobotMatrix(
    3, 3,
    0.5, 0, 0,
    0, 0.5, 0,
    0, 0, 0.05);

While the action model specifies a constant error, it may make sense to change the error based on the action being performed or the time period between updates. If so, the R member on KalmanActionModel should be reassigned before calling UpdateBeliefWithAction.

To update the Kalman filter, we use the DifferentialDriveActionModel to convert our wheel velocities into a LocalOffset. The LocalOffset is then passed to UpdateBeliefWithAction.

DifferentialDriveActionModel ddModel = new DifferentialDriveActionModel();
ddModel.AxleLength = 5;

DifferentialDriveAction action =
    new DifferentialDriveAction(data.Vl, data.Vr, data.t);
Pose prev = filter.Belief.Mean;
Pose next = ddModel.GetNextState(action, prev);

LocalOffset u = new LocalOffset(
    next.X - prev.X, next.Y - prev.Y, next.Theta - prev.Theta);
filter.UpdateBeliefWithAction(u);

## Sensor Update

The Gaussian belief is updated with a sensor reading using Bayesian inference. In this particular case, the Bayesian inference is performed by multiplying two Gaussians. The sensor model provides the conditional probability of making an observation at a given state. When the actual observed value is plugged into this conditional, a Gaussian likelihood function is generated. The value at each state in the likelihood function is multiplied by the prior probability of that state to get the posterior probability of that state. The resulting posterior distribution, after normalization, is the new Gaussian belief. As with the action model, the sensor model calculation is done with a linear system.

z = C * x

where x is the state, z is the expected observation, and C is a matrix. As before, we also have a covariance matrix, in this case Q, for the resulting distribution.
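To make the multiplication concrete, here is a Python sketch (illustrative, not the tutorial's C# API) of the sensor update for the special case used in this tutorial: C is the identity and both the belief covariance and Q are diagonal, so each state component can be corrected independently with a scalar Kalman gain.

```python
def sensor_update(mean, var, z, q):
    # mean, var: per-component belief mean and variance;
    # z, q: observed values and their variances (the diagonal of Q).
    new_mean, new_var = [], []
    for m, s, zi, qi in zip(mean, var, z, q):
        k = s / (s + qi)                # scalar Kalman gain for this component
        new_mean.append(m + k * (zi - m))
        new_var.append((1.0 - k) * s)   # variance always shrinks
    return new_mean, new_var

mean, var = sensor_update([0.0, 0.0, 0.0], [50.0, 50.0, 1.0],
                          [10.0, -10.0, 0.5], [50.0, 50.0, 1.0])
# equal variances -> the posterior mean lands halfway: [5.0, -5.0, 0.25]
```

Note how the gain k trades off the two variances: a noisy sensor (large q) barely moves the belief, while a confident sensor dominates it.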

In this tutorial, we will pretend the robot has a noisy GPS system. The observation will provide X, Y and θ values for the robot pose. While we could reuse the Pose class for the observation, that would result in the confusing type SensorModel. Instead, we use a class created exclusively for sensor readings.

The interface GaussianSensorModel extends SensorModel to support the generation of Gaussian distributions.

The class KalmanSensorModel uses the linear system defined above to implement GaussianSensorModel.

The following code will set up the Kalman filter sensor model:

filter.SensorModel = new KalmanSensorModel<PoseObservation, Pose>();
filter.SensorModel.C = RobotMatrix.CreateIdentity(3);
filter.SensorModel.Q = new RobotMatrix(
3, 3,
50, 0, 0,
0, 50, 0,
0, 0, 1);

Updating the belief is a simple matter of getting an observation and applying it to the filter:

PoseObservation z = GetObservation();
filter.UpdateBeliefWithObservation(z);
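Putting the two updates together, the following self-contained one-dimensional Python sketch (constants are illustrative, not the tutorial's values) runs a few predict/correct cycles on a robot moving along a line. The action update keeps adding noise, but the sensor update keeps removing it, so the variance settles rather than growing without bound.

```python
def kalman_step(mean, var, u, z, r, q):
    mean, var = mean + u, var + r   # action update (a = b = 1, c = 0)
    k = var / (var + q)             # sensor update with a scalar gain
    return mean + k * (z - mean), (1.0 - k) * var

mean, var = 0.0, 1.0
for step in range(1, 4):
    # command a unit move each step and observe the true position exactly
    mean, var = kalman_step(mean, var, u=1.0, z=float(step), r=0.25, q=1.0)
# mean tracks the true position (3.0); var stays bounded below 1.0
```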

## Simulation

We need a source to generate an observed value, so we will also keep track of a simulated robot. A simulated robot that exactly follows the action model would not be very interesting: the robot's pose would exactly match the expected pose in the belief distribution, and the error bounds would seem kind of silly in that situation.

A real robot's motion would be affected by bumps on the ground, slipping wheels and imperfect drive motors. We will introduce this error by sampling from a distribution around the wheel velocity controls, and then passing those noisy controls through the DifferentialDriveActionModel.
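As a sketch of that noise-injection idea (in Python with illustrative names; the tutorial's actual code uses RangedUniform below), each wheel control is sampled uniformly from a band around its commanded value before being fed to the noise-free kinematics. The ±5% band assumes non-negative commanded velocities.

```python
import random

def noisy_controls(v_left, v_right, rng, factor=0.95):
    # Sample each wheel velocity uniformly from [v * factor, v / factor].
    return (rng.uniform(v_left * factor, v_left / factor),
            rng.uniform(v_right * factor, v_right / factor))

rng = random.Random(0)
vl, vr = noisy_controls(2.0, 2.0, rng)
# each sampled control lies in [1.9, 2.105...]
```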

We use a Pose to model the robot position (which we initialize to be the same as the initial belief):

Pose actualPosition = filter.Belief.Mean;

We construct a uniform distribution around the specified controls. A sample from that distribution is used to update the actual position.

Random r = new Random();
RangedUniform<DifferentialDriveAction> pu = new RangedUniform<DifferentialDriveAction>(
    new DifferentialDriveAction(data.Vl * 0.95, data.Vr * 0.95, data.t),
    new DifferentialDriveAction(data.Vl / 0.95, data.Vr / 0.95, data.t)
);
actualPosition = ddModel.GetNextState(pu.Sample(r), actualPosition);

Finally, we use the same technique to sample a sensor value from the actual pose.

Random r = new Random();
RangedUniform<PoseObservation> pz = new RangedUniform<PoseObservation>(
    new PoseObservation(actualPosition.X - 5, actualPosition.Y - 5, actualPosition.Theta - 0.25),
    new PoseObservation(actualPosition.X + 5, actualPosition.Y + 5, actualPosition.Theta + 0.25)
);
PoseObservation z = pz.Sample(r);

All of this is tied together below in a Silverlight application.