The Kalman filter models a single hypothesis. Because the underlying model of the probability distribution for that hypothesis is unimodal Gaussian, it is not possible to represent multiple hypotheses simultaneously using the Kalman filter. A somewhat more advanced technique known as the condensation algorithm [Isard98], which is based on a broader class of estimators called particle filters, will allow us to address this issue.
To understand the purpose of the condensation algorithm, consider the hypothesis that an object is moving with constant speed (as modeled by the Kalman filter). Any data measured will, in essence, be integrated into the model as if it supports this hypothesis. Now consider an object moving behind an occlusion. Here we do not know what the object is doing; it might be continuing at constant speed, or it might have stopped or reversed direction. The Kalman filter cannot represent these multiple possibilities other than by simply broadening the uncertainty associated with the (Gaussian) distribution of the object's location; being necessarily Gaussian, it cannot represent such a multimodal distribution.
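To see concretely why a single Gaussian fails here, consider fitting a mean and variance to a bimodal set of particle positions (say, an "object continued" cluster and an "object reversed" cluster). The following sketch is purely illustrative, not OpenCV code; `fit_gaussian` is a hypothetical helper:

```c
#include <stddef.h>

/* Illustrative sketch (not OpenCV code): fit a single Gaussian (mean and
 * variance) to a set of particle positions. For a bimodal particle set,
 * the fitted mean lands between the two modes, in a region where hardly
 * any of the hypotheses actually lie. */
void fit_gaussian( const float* x, int n, float* mean, float* var ) {
    float m = 0.f, v = 0.f;
    for( int i = 0; i < n; i++ ) m += x[i] / (float)n;
    for( int i = 0; i < n; i++ ) v += (x[i] - m) * (x[i] - m) / (float)n;
    *mean = m;
    *var  = v;
}
```

For the six samples {-1.1, -1.0, -0.9, 0.9, 1.0, 1.1}, the fitted mean is 0, so the Gaussian's peak sits exactly where no particles are; the particle set itself, by contrast, retains both modes.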
As with the Kalman filter, we have two routines for (respectively) creating and
destroying the data structure used to represent the condensation filter. The only difference
is that in this case the creation routine cvCreateConDensation()
has an extra parameter. The value entered for this
parameter sets the number of hypotheses (i.e., "particles") that the filter will maintain at
any given time. This number should be relatively large (50 or 100; perhaps more for
complicated situations) because the collection of these individual hypotheses takes the
place of the parameterized Gaussian probability distribution of the Kalman filter. See Figure 10-20.
Figure 10-20. Distributions that can (panel a) and cannot (panel b) be represented as a continuous Gaussian distribution parameterizable by a mean and an uncertainty; both distributions can alternatively be represented by a set of particles whose density approximates the represented distribution
CvConDensation* cvCreateConDensation(
    int dynam_params,
    int measure_params,
    int sample_count
);
void cvReleaseConDensation(
    CvConDensation** condens
);
This data structure has the following internal elements:
typedef struct CvConDensation {
    int     MP;             // Dimension of measurement vector
    int     DP;             // Dimension of state vector
    float*  DynamMatr;      // Matrix of the linear dynamics system
    float*  State;          // Vector of state
    int     SamplesNum;     // Number of samples
    float** flSamples;      // Array of the sample vectors
    float** flNewSamples;   // Temporary array of the sample vectors
    float*  flConfidence;   // Confidence for each sample
    float*  flCumulative;   // Cumulative confidence
    float*  Temp;           // Temporary vector
    float*  RandomSample;   // Random vector used to update the sample set
    CvRandState* RandS;     // Array of structures to generate random vectors
} CvConDensation;
Once we have allocated the condensation filter's data structure, we need to initialize that structure. We do this with the routine cvConDensInitSampleSet(). While creating the CvConDensation structure we indicated how many particles we'd have, and for each particle we also specified some number of dimensions. Initializing all of these particles could be quite a hassle. [162] Fortunately, cvConDensInitSampleSet() does this for us in a convenient way; we need only specify the ranges for each dimension.
void cvConDensInitSampleSet(
    CvConDensation* condens,
    CvMat*          lower_bound,
    CvMat*          upper_bound
);
This routine requires that we initialize two CvMat
structures. Both are vectors (meaning that they have only one column), and each has as many
entries as the number of dimensions in the system state. These vectors are then used to set
the ranges that will be used to initialize the sample vectors in the CvConDensation
structure.
The following code creates two matrices of size Dim-by-1 and initializes them to -1 and +1, respectively. When cvConDensInitSampleSet() is called, the initial sample set will be initialized to random numbers, each of which falls within the (in this case, identical) interval from -1 to +1. Thus, if Dim were three, then we would be initializing the filter with particles uniformly distributed inside of a cube centered at the origin and with sides of length 2.
CvMat LB = cvMat( Dim, 1, CV_MAT32F, NULL );
CvMat UB = cvMat( Dim, 1, CV_MAT32F, NULL );
cvmAlloc( &LB );
cvmAlloc( &UB );
ConDens = cvCreateConDensation( Dim, Dim, SamplesNum );
for( int i = 0; i < Dim; i++ ) {
    LB.data.fl[i] = -1.0f;
    UB.data.fl[i] =  1.0f;
}
cvConDensInitSampleSet( ConDens, &LB, &UB );
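Conceptually, what cvConDensInitSampleSet() does for us amounts to the following sketch, assuming one independent uniform draw per dimension per particle. The function `init_sample_set` is a hypothetical stand-in for illustration, not the actual OpenCV implementation:

```c
#include <stdlib.h>

/* Conceptual sketch (not the OpenCV implementation): uniform
 * initialization of a sample set. Each of samples_num particles gets,
 * in each of dim dimensions, a value drawn uniformly at random from
 * the interval [lower[d], upper[d]]. */
void init_sample_set( float** samples, int samples_num, int dim,
                      const float* lower, const float* upper ) {
    for( int i = 0; i < samples_num; i++ ) {
        for( int d = 0; d < dim; d++ ) {
            float u = (float)rand() / (float)RAND_MAX;   /* in [0, 1] */
            samples[i][d] = lower[d] + u * (upper[d] - lower[d]);
        }
    }
}
```

With lower and upper set to -1 and +1 in every dimension, this is exactly the cube-centered-at-the-origin initialization described above.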
Finally, our last routine allows us to update the condensation filter state:
void cvConDensUpdateByTime( CvConDensation* condens );
There is a little more to using this routine than meets the eye. In particular, we must update the confidences of all of the particles in light of whatever new information has become available since the previous update. Sadly, there is no convenient routine for doing this in OpenCV. The reason is that the relationship between the new confidence for a particle and the new information depends on the context. Here is an example of such an update, which applies a simple [163] update to the confidence of each particle in the filter.
// Update the confidences on all of the particles in the filter
// based on a new measurement M[]. Here M has the dimensionality of
// the particles in the filter.
//
void CondProbDens( CvConDensation* CD, float* M ) {
    for( int i = 0; i < CD->SamplesNum; i++ ) {
        float p = 1.0f;
        for( int j = 0; j < CD->DP; j++ ) {
            p *= (float)exp(
                -0.05 * (M[j] - CD->flSamples[i][j])
                      * (M[j] - CD->flSamples[i][j])
            );
        }
        CD->flConfidence[i] = p;
    }
}
Once you have updated the confidences, you can call cvConDensUpdateByTime() to update the particles. Here "updating" means resampling, which is to say that a new set of particles will be generated in accordance with the computed confidences. After updating, all of the confidences will again be exactly 1.0f, but the information in the previously assigned confidences is now encoded in the density of the particles: regions of high confidence will be represented by more particles in the next iteration.
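The resampling portion of this update can be sketched as follows. This is a conceptual illustration only (`resample` is a hypothetical function, not part of OpenCV): the real cvConDensUpdateByTime() also applies the dynamics matrix and injects random diffusion, which are omitted here:

```c
#include <stdlib.h>

/* Conceptual sketch (not the OpenCV implementation) of resampling:
 * new particles are drawn with probability proportional to the old
 * particles' confidences, so high-confidence particles tend to be
 * duplicated while low-confidence ones tend to disappear. */
void resample( float** samples, float** new_samples,
               float* confidence, int n, int dim ) {
    /* Build the cumulative (unnormalized) confidence table. */
    float  cum = 0.f;
    float* cumulative = (float*)malloc( n * sizeof(float) );
    for( int i = 0; i < n; i++ ) {
        cum += confidence[i];
        cumulative[i] = cum;
    }

    /* Draw n new particles by inverting the cumulative table. */
    for( int i = 0; i < n; i++ ) {
        float r = ( (float)rand() / (float)RAND_MAX ) * cum;
        int   j = 0;
        while( j < n - 1 && cumulative[j] <= r ) j++;
        for( int d = 0; d < dim; d++ )
            new_samples[i][d] = samples[j][d];
        confidence[i] = 1.0f;   /* confidences reset after resampling */
    }
    free( cumulative );
}
```

Note how the reset of each confidence to 1.0f mirrors the behavior described above: after resampling, the confidences carry no information of their own, because that information now lives in where the particles are.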
[162] Of course, if you know about particle filters then you know that this is where we could initialize the filter with our prior knowledge (or prior assumptions) about the state of the system. The function that initializes the filter is just to help you generate a uniform distribution of points (i.e., a flat prior).
[163] The attentive reader will notice that this update actually implies a Gaussian probability distribution, but of course you could have a much more complicated update for your particular context.