import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import seaborn as sb
from scipy import stats
import time
sb.set() # Reset Style
#sb.set(style="white")
sb.set_context("talk")
%matplotlib inline
fw = 10 # figure width
# Plot the Distributions in this range:
x = np.linspace(-100,100,1000)
mean0 = 0.0 # e.g. meters or miles
var0 = 20.0
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, mean0, np.sqrt(var0)), label='Normal Distribution') # norm.pdf takes the standard deviation (mlab.normpdf was removed from matplotlib)
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.xlabel('Position');
You are at position 0, but you are pretty unsure about it (a flat normal distribution).
meanMove = 25.0 # e.g. meters, calculated from velocity*dt or step counter or wheel encoder ...
varMove = 10.0 # Estimated or determined with static measurements
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, meanMove, np.sqrt(varMove)), label='Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.xlabel('Distance moved');
$\mu_\text{new}=\mu_\text{0}+\mu_\text{move}$ is the new mean and $\sigma^2_\text{new}=\sigma^2_\text{0}+\sigma^2_\text{move}$ is the new variance.
def predict(var, mean, varMove, meanMove):
    new_var = var + varMove
    new_mean = mean + meanMove
    return new_var, new_mean
new_var, new_mean = predict(var0, mean0, varMove, meanMove)
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, mean0, np.sqrt(var0)), label='Beginning Normal Distribution')
plt.plot(x, stats.norm.pdf(x, meanMove, np.sqrt(varMove)), label='Movement Normal Distribution')
plt.plot(x, stats.norm.pdf(x, new_mean, np.sqrt(new_var)), label='Resulting Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.title('Normal Distributions of 1st Kalman Filter Prediction Step');
plt.savefig('Kalman-Filter-1D-Step.png', dpi=150)
What you see: the resulting distribution is flatter, i.e. more uncertain. The more often you run the prediction step, the flatter (more uncertain) the distribution gets.
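To make that concrete, here is a minimal sketch of repeated prediction (the `predict` function from above is re-defined so the snippet is self-contained): the variance grows by `varMove` with every step, while the mean just accumulates the movement.

```python
# Sketch: repeated prediction steps inflate the variance.
def predict(var, mean, varMove, meanMove):
    # variances add, means add
    return var + varMove, mean + meanMove

var, mean = 20.0, 0.0
for step in range(5):
    var, mean = predict(var, mean, 10.0, 25.0)
    print('step %d: mean=%.1f  var=%.1f' % (step + 1, mean, var))
# The variance grows linearly: 30, 40, 50, 60, 70.
```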
Now a position measurement arrives. The sensor variance is estimated or determined with static measurements:
meanSensor = 25.0
varSensor = 12.0
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, meanSensor, np.sqrt(varSensor)))
plt.ylim(0, 0.1);
$\sigma^2_\text{new}=\cfrac{1}{\cfrac{1}{\sigma^2_\text{old}}+\cfrac{1}{\sigma^2_\text{Sensor}}}$ is the new variance and the new mean value is $\mu_\text{new}=\cfrac{\sigma^2_\text{Sensor} \cdot \mu_\text{old} + \sigma^2_\text{old} \cdot \mu_\text{Sensor}}{\sigma^2_\text{old}+\sigma^2_\text{Sensor}}$
def correct(var, mean, varSensor, meanSensor):
    new_mean = (varSensor*mean + var*meanSensor) / (var + varSensor)
    new_var = 1 / (1/var + 1/varSensor)
    return new_var, new_mean
var, mean = correct(new_var, new_mean, varSensor, meanSensor)
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, new_mean, np.sqrt(new_var)), label='Beginning (after Predict)')
plt.plot(x, stats.norm.pdf(x, meanSensor, np.sqrt(varSensor)), label='Position Sensor Normal Distribution')
plt.plot(x, stats.norm.pdf(x, mean, np.sqrt(var)), label='New Position Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.title('Normal Distributions of 1st Kalman Filter Update Step');
You see: Sensor readings increase the certainty!
"Kalman-Filter: Predicting the Future since 1960"
Let's say we have some measurements for position and for distance traveled. Both have to be fused with the 1D Kalman filter.
positions = (10, 20, 30, 40, 50)+np.random.randn(5)
distances = (10, 10, 10, 10, 10)+np.random.randn(5)
positions
array([ 10.05476138, 20.28458674, 29.49295574, 38.62625017, 49.53275068])
distances
array([ 10.98636654, 10.00344322, 10.93369829, 9.85055967, 10.33031138])
mean = mean0
var = var0
plt.figure(figsize=(fw,5))
for m in range(len(positions)):
    # Predict
    var, mean = predict(var, mean, varMove, distances[m])
    #print('mean: %.2f\tvar:%.2f' % (mean, var))
    plt.plot(x, stats.norm.pdf(x, mean, np.sqrt(var)), label='%i. step (Prediction)' % (m+1))

    # Correct
    var, mean = correct(var, mean, varSensor, positions[m])
    print('After correction: mean= %.2f\tvar= %.2f' % (mean, var))
    plt.plot(x, stats.norm.pdf(x, mean, np.sqrt(var)), label='%i. step (Correction)' % (m+1))

plt.ylim(0, 0.1);
plt.xlim(-20, 120)
plt.legend();
After correction: mean= 10.32	var= 8.57
After correction: mean= 20.30	var= 7.29
After correction: mean= 30.21	var= 7.08
After correction: mean= 39.22	var= 7.05
After correction: mean= 49.54	var= 7.04
The sensors are represented as normal distributions with their parameters ($\mu$ and $\sigma^2$). In the prediction step the random variables are added, which corresponds to a convolution of the densities; in the correction step the densities are multiplied. The prediction decreases the certainty about the state, the correction increases it.
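As a quick numeric sanity check of the correction formulas (the `correct` function from above is re-defined so the snippet is self-contained; the input values are made up): the corrected variance is always smaller than either input variance, and the corrected mean is pulled toward the more certain source.

```python
def correct(var, mean, varSensor, meanSensor):
    new_mean = (varSensor * mean + var * meanSensor) / (var + varSensor)
    new_var = 1.0 / (1.0 / var + 1.0 / varSensor)
    return new_var, new_mean

var, mean = correct(30.0, 35.0, 12.0, 25.0)
# new_var  = 1/(1/30 + 1/12) = 60/7 ≈ 8.57, smaller than both 30 and 12
# new_mean = (12*35 + 30*25)/42 ≈ 27.86, pulled toward the sensor (var 12 < 30)
print(var, mean)
```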
If you have more than one state (here: position), then you have to use the multidimensional Kalman filter, which is pretty much the same, but with matrices:
In order to use the Kalman filter to estimate the internal state $x$ of a process given only a sequence of noisy observations $z$, one must model the process in accordance with the framework of the Kalman filter. This means specifying the state-transition model $F_k$, the observation model $H_k$, the covariance of the process noise $Q_k$, and the covariance of the observation noise $R_k$, for each time-step $k$.
Source: Wikipedia
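A minimal sketch of the same two steps in matrix form, for a hypothetical constant-velocity model with state $[\text{position}, \text{velocity}]$. All numbers here are assumptions for illustration, not taken from the example above:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state-transition model (pos, vel)
H = np.array([[1.0, 0.0]])              # observation model: we only measure position
Q = np.eye(2) * 0.1                     # process noise covariance (assumed)
R = np.array([[12.0]])                  # measurement noise covariance (assumed)

x = np.array([[0.0], [10.0]])           # initial state estimate
P = np.eye(2) * 20.0                    # initial state covariance

def kf_predict(x, P):
    x = F @ x                            # propagate the state
    P = F @ P @ F.T + Q                  # uncertainty grows
    return x, P

def kf_correct(x, P, z):
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y                        # updated state estimate
    P = (np.eye(2) - K @ H) @ P          # uncertainty shrinks
    return x, P

x, P = kf_predict(x, P)
x, P = kf_correct(x, P, np.array([[10.3]]))
```

Note how the scalar case falls out of this: with one state, $F=H=1$, the prediction reduces to adding the variances and the correction to the harmonic combination used above.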
And you can run the prediction step in high-update-rate open-loop mode (that is, without correction) and apply a correction only when a measurement is available.
Typical example: you have a high-update-rate acceleration sensor (typically 100 Hz) and a low-update-rate GPS sensor (typically 1 Hz). Then you run the prediction at 100 Hz (meaning 100 times per second) and correct only at every 100th filter step.
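This multi-rate idea can be sketched with the scalar `predict`/`correct` functions from above (re-defined here so the snippet is self-contained; the rates and values are made up for illustration):

```python
def predict(var, mean, varMove, meanMove):
    return var + varMove, mean + meanMove

def correct(var, mean, varSensor, meanSensor):
    new_mean = (varSensor * mean + var * meanSensor) / (var + varSensor)
    new_var = 1.0 / (1.0 / var + 1.0 / varSensor)
    return new_var, new_mean

var, mean = 20.0, 0.0
for k in range(300):                              # 3 s of filtering at 100 Hz
    var, mean = predict(var, mean, 0.01, 0.1)     # acceleration-based step, 100 Hz
    if k % 100 == 99:                             # GPS fix arrives once per second
        var, mean = correct(var, mean, 12.0, (k + 1) * 0.1)
```

Between GPS fixes the variance grows step by step; each correction pulls it back down, so the filter never drifts unbounded as long as corrections keep arriving.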