Develop Filtering Algorithms


Particle approximations have the desirable feature of replacing storage of conditional densities for the filtering equations with storage of particle locations. Whereas it is difficult, or impossible, to store the densities for higher-dimensional problems in a linear interpolative manner, it is relatively easy to store particle locations. The empirical measures of properly designed interacting particle systems then provide asymptotic approximations to the conditional densities of the filtering equations. Monte Carlo-type methods are used to propagate the particles instead of infinite-dimensional equation solvers, so a great reduction in computational complexity occurs. The various particle methods perform quite differently under the realistic condition of a finite number of particles, and these performance differences are the subject of ongoing investigation.

The first adaptive particle filtering method was introduced in 1994 by Del Moral and Salut.  This interacting particle method simulates a large number of particles representing possible states of a moving signal, and at each observation point redistributes the particles among the sites based on their likelihood given the observations. The earlier weighted particle method assigns a weight to each particle, and at each observation point updates the weights based on the observation. In branching particle methods, at each observation time a particle can split into two or more particles, remain unchanged, or disappear. In each method, the individual particles evolve independently of each other given the back observations.
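As a concrete point of reference, the weighted particle method can be sketched in a few lines. The random-walk signal model and Gaussian likelihood below are illustrative assumptions for the sketch, not part of any of the methods above.

```python
import numpy as np

def weighted_particle_step(particles, weights, observation,
                           propagate, likelihood):
    """One step of the (non-resampling) weighted particle method:
    each particle evolves independently, then its weight is
    multiplied by the likelihood of the new observation."""
    particles = propagate(particles)                # independent signal evolution
    weights = weights * likelihood(observation, particles)
    weights = weights / weights.sum()               # renormalize the weights
    return particles, weights

# Illustrative model (an assumption, not from the text):
# 1-D random-walk signal observed in Gaussian noise.
rng = np.random.default_rng(0)
propagate = lambda x: x + rng.normal(0.0, 0.1, size=x.shape)
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2)

particles = rng.normal(0.0, 1.0, size=1000)
weights = np.full(1000, 1.0 / 1000)
particles, weights = weighted_particle_step(particles, weights, 0.3,
                                            propagate, likelihood)
estimate = np.sum(weights * particles)              # conditional-mean estimate
```

As the text notes, after many such steps most weights decay toward zero, which is exactly the degeneracy that motivates resampling.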

Previously we developed a new nonlinear particle filtering algorithm that dramatically outperformed all other methods when tested on the search and rescue, weakly interacting multi-target, and performer problems (described below). In addition, this algorithm could easily provide density estimates for the whole signal path based upon the back observations. The implementation of this first new method, the MITACS-PINTS branching method (MIBR), has been patented by one of our corporate sponsors, Lockheed Martin Corporation. (This method differs in many respects from the algorithms introduced earlier by, for example, Crisan and Lyons or Del Moral, Crisan, and Lyons; the implementation is completely novel.) Our simulation comparisons show that our MITACS-PINTS branching algorithm significantly outperforms the earlier interacting and branching methods. Still, even MIBR does not measure up to our newest particle filter.

The key to appreciating our Selective Resampling Particle Filter (SERP) is understanding the natural tradeoff in resampling within particle filters. The first (non-adaptive) particle filters did not resample at all; they merely weighted particles according to likelihoods calculated from the observations. In practice, most particles deviate from the signal path and lose weight quickly, effectively limiting the non-adaptive filter approximation to a very small number of useful particles. On the other hand, resampling introduces extra noise into the system and degrades current performance. Hence, the more recent adaptive filters, like Del Moral's interacting particle filter and Crisan and Lyons' branching particle filter, that resample all of the particles also produce inferior performance. Our MIBR filter, and the more recent interacting variant published in Del Moral, Kouritzin and Miclo (2001), struck a natural balance between the previous no-resampling and complete-resampling schemes. SERP goes much further: it never unnecessarily resamples particles and it finds the optimal amount of resampling in terms of a feedback control. A function of the current particle system state determines the extent of resampling. For simplicity, resampling is done pairwise, starting with the particles having the lowest and highest individual weights, and continues until the highest and lowest weights are within a factor of ρ. Naturally, ρ is either chosen empirically to be some appropriate value or determined optimally by solving a stochastic control problem. The latter can be done ahead of time using the predictive model for the filtering problem. This method dramatically outperforms all other known particle methods.
Moreover, after months of customizing data structures (hash tables, linked lists, and arrays) and algorithms to bring its implementation to a similar level as MIBR and our other particle filters, we realized that it is the fastest adaptive particle method as well. Presently, it is designed for a large class of discrete-time observation filtering problems. However, by letting the discretization time decrease to zero, we will also be able to approximate continuous-observation problems, most likely including those with correlations between the signal and the observation noise.
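The pairwise resampling loop at the heart of SERP can be sketched as follows. The rule used here for collapsing a low/high pair (keep one of the two states with probability proportional to weight, then give both particles the averaged weight) is one plausible variant chosen for illustration, not the patented implementation, and ρ > 1 is assumed so the loop terminates.

```python
import numpy as np

def serp_pairwise_resample(states, weights, rho, rng):
    """Pairwise selective resampling sketch: repeatedly take the
    current lowest- and highest-weight particles and resample that
    pair until max(weight) <= rho * min(weight).  The pair collapses
    onto one of the two states, chosen in proportion to weight, and
    both particles receive the averaged weight, so total weight is
    conserved.  Requires rho > 1."""
    states, weights = states.copy(), weights.copy()
    while weights.max() > rho * weights.min():
        lo, hi = np.argmin(weights), np.argmax(weights)
        total = weights[lo] + weights[hi]
        # keep the high state with probability w_hi / (w_lo + w_hi)
        keep = hi if rng.random() < weights[hi] / total else lo
        states[lo] = states[hi] = states[keep]
        weights[lo] = weights[hi] = total / 2.0     # averaging shrinks the spread
    return states, weights

rng = np.random.default_rng(1)
states = rng.normal(size=8)
weights = rng.uniform(0.01, 1.0, size=8)
new_states, new_weights = serp_pairwise_resample(states, weights, 2.0, rng)
```

Each pass strictly reduces the spread of the weights while preserving their sum, so no particle is resampled unless the weight imbalance actually exceeds the factor ρ, matching the "never unnecessarily resample" property described above.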

This method was invented by Kouritzin and developed by Kim, Kouritzin, and Ma (see Ballantyne, Kim, and Kouritzin (2002) and Bauer, Kim, and Kouritzin (2002)). Lockheed Martin is in the process of filing a patent on this method and our implementation of it (see below).

Our SERP and MIBR methods have been enhanced so that they now provide not just smoothers, trackers, and predictors, that is, approximate estimates of the signal state at some particular point in time, but also probabilistic information on the full path of the signal conditioned on the current set of observations. For example, these new filters can answer questions like “How likely is it that the signal started in that region, passed through that area, and ended up here?” Such information is provided in real time by the recursive action of the particle system on the available system observations, just as is done by smoothers, trackers, and predictors. To our knowledge, no other nonlinear filter has previously been implemented with this capability. Once the technique had been developed, important secondary uses of this path-space information became apparent. The innovation was fortuitous in its availability to help solve a particular real-world problem experienced by one of our industrial partners.


Partial differential equations are commonly solved using space discretization methods like finite element and multigrid methods. The discrete analog for a stochastic differential equation would be some Markov chain. For filtering there is a distinct choice: one can discretize the signal, as Kushner (1979) and Di Masi and Runggaldier (1981) did, or directly approximate the DMZ and FKK equations. In the former case, robustness results can be used to ensure that the filtering solution with the approximate signal is not too different from the desired solution, and any of the Kallianpur-Striebel formula, Hidden Markov Model techniques, or Gauge Transform methods can be used to calculate the approximate filter solution. We are interested in the second, more direct approach, treating the DMZ equation as a (simple) stochastic partial differential equation. Motivation for this approach came from work on ODEs by Kurtz (1971), on PDEs by Arnold and Theodosopulu (1981) and Blount (1991), and on SPDEs by Kouritzin and Long (2002).

Initially, we studied direct Markov chain approximations to the DMZ equation. For discrete-time observation problems, our solutions could be thought of as an interacting pair consisting of a continuous-time Markov chain to handle the signal evolution and a discrete-time Markov chain to account for the observations, both using the same state. However, simulations showed that excess, performance-degrading noise was introduced by the continuous-time Markov chain. Our more recent approximations are therefore stochastic-deterministic hybrids. Experience with particle methods of filtering taught us that approximating the unnormalized conditional distribution by blocks of fixed mass, i.e., particles, allows automatic resource management: less calculation and storage are wasted on less significant portions of the distribution. Hence, in the sequel we describe our method in terms of particles. However, it must be understood that, unlike in particle filters, these particles do not move independently of one another between observations. Whereas particle filters use the generator of the signal, space discretization methods use its adjoint.

Our Refining Stochastic Grid Filter (REST) represents the state-of-the-art space discretization filter for “regular” signals evolving over some regular subset of Euclidean space. The signal state space is divided into cells, and particles give birth and die in these cells based upon observation-dependent information. The location of each cell containing particles, its rates of birth and death, and links to the neighbouring cells are stored as a leaf in a balanced binary tree. Rates and particle totals are calculated and stored in the internal nodes of the tree in such a manner that the rates (particle totals) at a node are the sum of the rates (particle totals) of all subordinate nodes. This storage mechanism promotes very efficient calculations. By continually pruning the tree so that it only contains cells where particles are located, we retain, or even surpass, the storage and computational advantages of particle filters. The four most impressive features of our method are: the cell size adaptation, the Dynamic Interleaved Binary Expansion (DIBE), the noise reduction by rate differencing, and the pushdown particle control.
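The leaf/internal-node storage idea can be illustrated with a generic array-based sum tree: leaves hold per-cell rates, internal nodes hold sums, and both updating a cell and locating the cell responsible for the next event cost O(log n). This is only a sketch of the bookkeeping, not DIBE itself, and it assumes a fixed number of cells that is a power of two.

```python
import numpy as np

class RateSumTree:
    """Complete binary tree over n cells (n a power of two).  Leaves
    hold per-cell event rates; each internal node stores the sum of
    its two children, so the total rate sits at the root.  Updating a
    cell and sampling a cell in proportion to its rate are both
    O(log n)."""

    def __init__(self, n):
        self.n = n
        self.tree = np.zeros(2 * n)          # node 1 is the root; leaf i is node n + i

    def update(self, cell, rate):
        i = self.n + cell
        delta = rate - self.tree[i]
        while i >= 1:                        # propagate the change up to the root
            self.tree[i] += delta
            i //= 2

    def total(self):
        return self.tree[1]                  # total rate of the whole grid

    def sample(self, u):
        """Return the cell containing mass point u in [0, total())."""
        i = 1
        while i < self.n:                    # descend, steering by left-subtree mass
            if u < self.tree[2 * i]:
                i = 2 * i
            else:
                u -= self.tree[2 * i]
                i = 2 * i + 1
        return i - self.n
```

Given the root's total rate, the next birth/death event time can be drawn, and `sample` then identifies which cell fires, which is exactly the kind of calculation the summed internal nodes make cheap.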

When the signal has not been localized, the cell sizes in REST are large and the particles are spread out over the cells. As particles coalesce in any dimension, isolating the signal, the cell sizes automatically shrink, creating a powerful dimension-by-dimension zooming-in or refining feature. Likewise, if the signal is temporarily lost (due to severely corrupted observations), zooming out occurs. The effect is that computations remain relatively constant, and fewer particles can often be used than in particle filters. Naturally, this varying cell size plays havoc with our binary tree, requiring it to branch and merge. Our DIBE mechanism is a sophisticated, efficient algorithm that handles these tree operations without resorting to destroying and rebuilding the tree; it represents four man-months of work on its own. The zooming in and out is done in such a manner that the tree remains relatively balanced.

Our REST method does not treat particles individually or rely on laws of large numbers, as particle methods do, but instead calculates net birth and death rates on a cell-by-cell basis. (Actually, due to the time inhomogeneity we use times rather than rates, but we neglect this fact in our simple description herein.) To mimic particle movement, births can occur in a cell if there are particles in that cell or any of its neighbours. For example, to determine whether a birth should occur in a cell, we consider the net effect of: births due to observations, plus births to simulate a particle moving into the cell from a neighbour, minus deaths due to observations, minus deaths to simulate a particle moving out of the cell to a neighbour. By considering all particles in a cell together, differencing rates, and avoiding Monte Carlo simulations, REST has far less simulation noise than particle systems and can thereby provide similar accuracy with far fewer particles. Conversely, its computer codes are far more complicated, and it only applies to regular signals on regular domains.
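The rate-differencing idea can be illustrated on a one-dimensional grid. The uniform neighbour movement rate and the simple observation-rate term below are assumptions made for the sketch; the point is only that opposing birth and death contributions are netted out per cell rather than simulated separately.

```python
import numpy as np

def net_birth_rates(counts, obs_rate, move_rate):
    """Net per-cell rates for a 1-D grid (illustrative sketch of
    REST-style rate differencing, not the patented code).  Each
    cell's rate nets out: observation-driven births/deaths
    (obs_rate may be negative), plus inflow from neighbours, minus
    outflow to neighbours, all scaled by the particle counts
    involved.  Only the sign and size of the net rate would be
    simulated, so opposing events cancel exactly and contribute no
    Monte Carlo noise."""
    counts = np.asarray(counts, dtype=float)
    inflow = np.zeros_like(counts)
    inflow[1:] += move_rate * counts[:-1]    # left neighbour moves right
    inflow[:-1] += move_rate * counts[1:]    # right neighbour moves left
    nbrs = np.full_like(counts, 2.0)         # interior cells have two neighbours
    nbrs[0] = nbrs[-1] = 1.0                 # boundary cells have one
    outflow = move_rate * nbrs * counts
    return obs_rate * counts + inflow - outflow
```

With no observation term, the net rates sum to zero across the grid: pure movement conserves total mass, and only the observation-driven part changes it.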

To further ensure that the required calculations remain relatively constant, we control the number of particles using an efficient pushdown technique. This technique is devised to add minimal additional noise and to be performed in conjunction with the births and deaths from observations and “particle movement”. Another impressive feature of REST is that it can accommodate parameter estimation automatically, simply because it fine-tunes cell size automatically.

REST was invented by Kouritzin and developed by Ballantyne, Kim and Kouritzin (see Ballantyne, Kouritzin, Long, and Sun (2002) and Bauer, Kim, and Kouritzin (2002)).  Lockheed Martin is in the process of filing a patent on this method and our implementation of it (see below).


SERP and REST are excellent for problems where the dimension of the signal space is large (i.e., five or greater). However, when densities exist and the dimension is small enough that densities can be stored, there are efficient exact or nearly exact methods that can frequently outperform the best particle and discretization methods. Convolutional methods are nearly the perfect complement to SERP and REST in that they are designed for efficient computation of the actual conditional densities of filtering theory with little use of approximation. The Infinite Dimensional Exact (IDEX) method amounts to characterizing the continuous-signal, discrete-time observation filtering problems that can be solved without approximation via a single convolution, multiplication, substitutions, and Bayes' rule; it also refers to the algorithms that implement such filtering solutions. Approximate convolutional methods refer to the approximation of a very general class of filtering problems via an algorithm consisting of a small number of convolutions and transformations. The approximations in approximate convolutional methods are very tight on a path-by-path basis, converging to the optimal filter with an error of order just shy of 1/(n!), where n is the number of convolutions used. Thus, convolutional methods easily produce optimal or nearly optimal algorithms in a very computationally efficient manner. However, convolutional methods cannot handle large-dimensional problems as well as particle methods can, nor can they, as of yet, handle signals that evolve on some irregular space or do not satisfy a stochastic differential equation. IDEX was introduced in Kouritzin (1998, 2000), and its scope was greatly expanded in the work of Kouritzin and Remillard (2002) and the current study of Van Weelden and Kouritzin.
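The predict/update skeleton that convolutional methods are built on can be sketched on a discretized grid. Note that IDEX itself works with the continuous densities; the grid, the Gaussian transition kernel, and the observation model below are all illustrative assumptions.

```python
import numpy as np

def convolution_filter_step(density, kernel, obs_likelihood, dx):
    """One predict/update cycle of a generic grid-based convolutional
    filter.  Predict: convolve the current conditional density with
    the signal's transition kernel.  Update: multiply by the
    observation likelihood and renormalize (Bayes' rule)."""
    predicted = np.convolve(density, kernel, mode="same") * dx
    updated = predicted * obs_likelihood
    return updated / (updated.sum() * dx)

# Illustrative 1-D setup (all model choices are assumptions):
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
density = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)   # prior N(0, 1)
kernel = np.exp(-0.5 * (x / 0.5) ** 2)                 # Gaussian transition
kernel /= kernel.sum() * dx                            # normalize the kernel
likelihood = np.exp(-0.5 * ((1.0 - x) / 0.8) ** 2)     # observation y = 1
posterior = convolution_filter_step(density, kernel, likelihood, dx)
```

After the step, the posterior mass shifts toward the observation at y = 1. The entire update is one convolution and one pointwise multiplication, which is why these methods are so computationally efficient when densities can be stored.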

Michael Kouritzin and Bruno Remillard have continued a general investigation of the mathematics involved in exact convolutional filters for vector-valued signals that will generalize the initial treatment in Kouritzin (1998). The first major step in investigating the generality under which IDEX filters hold is determining the class of vector-valued stochastic differential equations (SDEs) that yield, for some functions, a unique explicit solution expressible in terms of stochastic integrals of a multi-dimensional Brownian motion W, where the integrals can be taken as either iterated Itô integrals or multiple Wiener integrals.

Characterization of the corresponding functions is also part of the problem. This was done in the case m=1 in Kouritzin and Remillard (2002) and is a subject of current investigation in the case m>1. Surprisingly, it was discovered in Kouritzin (2000) and Kouritzin and Remillard (2002) that the class of SDEs yielding such a representation, even with m=1, is huge, especially when the dimension of X is greater than that of W. To connect this representation to filtering, we mention that the allowable signals for the exact convolutional method consist of those SDEs whose Fokker-Planck evolution equation can be solved by a combination of Feynman-Kac representation, measure transformation, and these representations. The result is a very large class of SDEs, with both nonlinear drift and dispersion coefficients, that is amenable to IDEX filtering. Also, these representations often improve particle methods, allowing particles to be simulated without Euler or Milstein approximations or the associated accuracy and efficiency costs.

This work has provided insights into approximate convolution methods. Eventually, techniques will be devised to combine convolutional and particle methods into a “super” method.