
**1. PARTICLE SYSTEM METHODS**

Particle approximations have the desirable feature of replacing storage of conditional densities for the filtering equations with particle locations. Whereas it is difficult, or impossible, to store the densities for higher dimensional problems in a linear interpolative manner, it is relatively easy to store particle locations. Then, the empirical measures of properly designed interacting particle systems provide asymptotic approximations to the conditional densities of the filtering equations. Monte Carlo-type methods are used to propagate the particles instead of using infinite dimensional equation solvers. Hence, a great reduction in computational complexity occurs. The various particle methods perform quite differently under the realistic condition of a finite number of particles, and these performance differences are the subject of ongoing investigation.

The first adaptive particle filtering method was introduced in 1994 by Del Moral and Salut. This interacting particle method simulates a large number of particles representing possible states of a moving signal, and at each observation point redistributes the particles among the sites based on their likelihood given the observations. The earlier weighted particle method assigns a weight to each particle, and at each observation point updates the weights based on the observation. In branching particle methods, at each observation time a particle can split into two or more particles, remain unchanged, or disappear. In each method, the individual particles evolve independently of each other given the back observations.
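To make the contrast concrete, the classical weighted particle method described above can be sketched in a few lines. This is a minimal illustration, assuming a one-dimensional random-walk signal and Gaussian observation noise; these model choices are ours for illustration and are not the benchmark problems discussed below.

```python
import math
import random

def weighted_particle_filter(observations, n_particles=500,
                             sig_std=0.5, obs_std=1.0, seed=0):
    """Classical (non-resampling) weighted particle filter sketch.

    Illustrative model assumptions: 1-D random-walk signal
    X_{k+1} = X_k + N(0, sig_std^2) and observations
    Y_k = X_k + N(0, obs_std^2).
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    weights = [1.0 / n_particles] * n_particles
    estimates = []
    for y in observations:
        # Each particle evolves independently according to the signal model.
        particles = [x + rng.gauss(0.0, sig_std) for x in particles]
        # At each observation point, update weights by the likelihood.
        weights = [w * math.exp(-0.5 * ((y - x) / obs_std) ** 2)
                   for w, x in zip(weights, particles)]
        total = sum(weights)
        weights = [w / total for w in weights]
        # The weighted empirical measure approximates the conditional mean.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
    return estimates
```

Running this for many steps exhibits the weight degeneracy discussed below: most particles drift away from the signal path and their weights collapse toward zero, which is precisely what resampling schemes were introduced to combat.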

Previously we had developed a new nonlinear filtering particle algorithm that dramatically outperformed all other methods when tested on the search and rescue, weakly interacting multi-target, and performer problems (described below). In addition, this algorithm could easily provide density estimates for the whole signal path based upon the back observations. The implementation of this first new method, the MITACS-PINTS branching method (MIBR), has been patented by one of our corporate sponsors, Lockheed Martin Corporation. (This method differs in many respects from the algorithms introduced earlier by, for example, Crisan and Lyons or Del Moral, Crisan, and Lyons. The implementation is completely novel.) Our simulation comparisons show that our MITACS-PINTS branching algorithm significantly outperforms the earlier interacting and branching methods. Still, even MIBR does not match up to our newest particle filter.

The key to appreciating our Selective Resampling Particle Filter (SERP) is understanding the natural tradeoff in resampling of particle filters. The first (unadaptive) particle filters did not resample at all; they merely weighted particles according to likelihoods calculated from the observations. In practice, most particles deviate from the signal path and lose weight quickly, effectively limiting the unadapted filter approximation to a very small number of useful particles. On the other hand, resampling introduces extra noise into the system and degrades current performance. Hence, the more recent adaptive filters, like Del Moral's interacting particle filter and Crisan and Lyons' branching particle filter, that resample all of the particles also produce inferior performance. Our MIBR filter, and the more recent interacting variant published in Del Moral, Kouritzin and Miclo (2001), struck a natural balance between the previous no-resampling and complete-resampling schemes. SERP goes much further in that it never unnecessarily resamples particles and it finds the optimal amount of resampling in terms of a feedback control: a function of the current particle system state determines the extent of resampling. For simplicity, resampling is done pairwise, starting with the particles having the lowest and highest individual weights, and is continued until the highest and lowest weights are within a factor of ρ. Naturally, ρ is either simply chosen empirically to be some appropriate value or optimally determined by solving a stochastic control problem. The latter can be done ahead of time using the predictive model for the filtering problem. This method dramatically outperforms any other known particle method.
Moreover, after months of customizing data structures (hash tables, linked lists, and arrays) and algorithms to bring its implementation to a level similar to MIBR and our other particle filters, we realized that it is the fastest adaptive particle method as well. Presently, it is designed for a large class of discrete-time observation filtering problems. However, by letting the discretization time decrease to zero, we will also be able to approximate continuous-observation problems, most likely including those that have correlations between the signal and the observation noise.
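The pairwise weight-balancing loop can be sketched as follows. The text specifies only that pairs of lowest- and highest-weight particles are resampled until all weights lie within a factor of ρ; the particular unbiased pairing rule below (both particles of the pair move to one of the two sites, chosen with probability proportional to weight, and each receives the average weight) is an assumption made for illustration, not necessarily the patented rule.

```python
import random

def serp_resample(particles, weights, rho=4.0, seed=0):
    """Selective pairwise resampling sketch (illustrative rule).

    Repeatedly pairs the lowest- and highest-weight particles and
    resamples the pair, stopping as soon as the largest weight is
    within a factor of rho of the smallest, so no unnecessary
    resampling noise is introduced.
    """
    rng = random.Random(seed)
    pts, wts = list(particles), list(weights)
    while max(wts) > rho * min(wts):
        lo = min(range(len(wts)), key=wts.__getitem__)
        hi = max(range(len(wts)), key=wts.__getitem__)
        w_lo, w_hi = wts[lo], wts[hi]
        # Both particles of the pair sit at the heavy site with
        # probability w_hi/(w_lo+w_hi), else at the light site;
        # each takes the average weight, so total mass is preserved.
        site = pts[hi] if rng.random() < w_hi / (w_lo + w_hi) else pts[lo]
        pts[lo] = pts[hi] = site
        wts[lo] = wts[hi] = 0.5 * (w_lo + w_hi)
    return pts, wts
```

When ρ is large the loop rarely fires and the scheme degenerates to pure weighting; as ρ approaches 1 it approaches complete resampling, which is exactly the tradeoff the feedback control on ρ is meant to optimize.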

This method was invented by Kouritzin and developed by Kim, Kouritzin, and Ma (see Ballantyne, Kim, and Kouritzin (2002) and Bauer, Kim, and Kouritzin (2002)). Lockheed Martin is in the process of filing a patent on this method and our implementation of it (see below).

Our SERP and MIBR methods have been enhanced so that they can now provide not just smoothers, trackers, and predictors, that is, approximate estimates of the signal state at some particular point in time, but can also provide probabilistic information on the full path of the signal conditioned on the current set of observations. For example, these new filters can answer questions like "How likely is it that the signal started in that region, passed through that area, and ended up here?" Such information is provided in real time by the recursive action of the particle system on the available system observations, just as is done by smoothers, trackers, and predictors. To our knowledge, no other nonlinear filter was previously implemented with this capability. Once the technique had been developed, important secondary uses of this path-space information became apparent. The innovation fortuitously proved useful in solving a particular real-world problem experienced by one of our industrial partners.

**2. SPACE DISCRETIZATION FILTERS**

Partial differential equations are commonly solved using space discretization methods like finite elements and multigrid methods. The discrete analog for a stochastic differential equation would be some Markov chain. For filtering, there is a distinct choice: one can discretize the signal, as Kushner (1979) as well as Di Masi and Runggaldier (1981) did, or directly approximate the Duncan-Mortensen-Zakai (DMZ) and Fujisaki-Kallianpur-Kunita (FKK) equations. In the former case, robustness results can be used to ensure that the filtering solution with the approximate signal is not too different from the desired solution, and any of the Kallianpur-Striebel formula, the Hidden Markov Model techniques, or the Gauge Transform methods can be used to calculate the approximate filter solution. We are interested in the second, more direct approach, treating the DMZ equation as a (simple) stochastic partial differential equation. Motivation for this approach came from works in ODEs by Kurtz (1971), PDEs by Arnold and Theodosopulu (1981) and Blount (1991), and SPDEs by Kouritzin and Long (2002).

Initially, we studied direct Markov chain approximations to the DMZ. For discrete-time observation problems, our solutions could have been thought of as an interacting pair consisting of a continuous-time Markov chain to handle the signal evolution and a discrete-time Markov chain to account for the observations, both using the same state. However, simulations showed that excess, performance-degrading noise was introduced by the continuous-time Markov chain. Our more recent approximations are therefore stochastic-deterministic hybrids. Experience with particle methods of filtering taught us that the approximation of the unnormalized conditional distribution by blocks of fixed mass, i.e. particles, allows automatic resource management: less calculation and storage is wasted on less significant portions of the distribution. Hence, in the sequel we will describe our method in terms of particles. However, it must be understood that, unlike particle filters, these particles do not move independently of one another between observations. Whereas particle filters use the generator for the signal, space discretization methods use the adjoint.

Our Refining Stochastic Grid Filter (REST) represents the state of the art space discretization filter for "regular" signals evolving over some regular subset of Euclidean space. The signal state space is divided into cells, and particles give birth and die in these cells based upon observation-dependent information. The location of each cell containing particles, its birth and death rates, and links to the neighbouring cells are stored as a leaf in a balanced binary tree. Rates and particle totals are calculated and stored in the internal nodes of the tree in such a manner that the rates (particle total) at a node are the sum of the rates (particle total) of all subordinate nodes. This storage mechanism promotes very efficient calculations. By continually pruning the tree so that it only contains cells where particles are located, we retain, or even surpass, the storage and computational advantages of particle filters. The four most impressive features of our method are: the cell size adaptation, the Dynamic Interleaved Binary Expansion (DIBE), the noise reduction by rate differencing, and the pushdown particle control.
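The internal-node bookkeeping can be illustrated with a generic sum-tree: leaves hold per-cell event rates and every internal node stores the sum of its children, so both updating one cell and locating the cell of the next birth/death event cost O(log n). This is a simplified stand-in for the balanced tree in REST (which also stores cell locations, neighbour links, and particle totals, and is pruned and rebalanced by DIBE), assumed here purely to show the storage idea.

```python
class RateTree:
    """Complete-binary-tree storage of per-cell event rates.

    Internal node i stores tree[2i] + tree[2i+1], mirroring the
    rule that a node's rate is the sum of its subordinates' rates.
    """

    def __init__(self, n_cells):
        self.n = 1
        while self.n < n_cells:
            self.n *= 2
        self.tree = [0.0] * (2 * self.n)   # leaves live at tree[n : 2n]

    def update(self, cell, rate):
        """Set one cell's rate and refresh all ancestor sums."""
        i = self.n + cell
        self.tree[i] = rate
        i //= 2
        while i:
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def total(self):
        """Total event rate over all cells (stored at the root)."""
        return self.tree[1]

    def sample(self, u):
        """Descend to the cell containing point u of the rate mass,
        for u in [0, total()); used to pick where an event fires."""
        i = 1
        while i < self.n:
            left = self.tree[2 * i]
            if u < left:
                i = 2 * i
            else:
                u -= left
                i = 2 * i + 1
        return i - self.n
```

Drawing u uniformly on [0, total()) and calling `sample(u)` selects a cell with probability proportional to its rate, which is the operation the tree makes cheap.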

When the signal has not been localized, the cell sizes in our REST are large and the particles are spread out over the cells. As particles coalesce in any dimension, isolating the signal, the cell sizes automatically shrink, creating a powerful dimension-by-dimension zooming in or refining feature. Likewise, if the signal is temporarily lost (due to severely corrupted observations), zooming out occurs. The effect is that computations remain relatively constant and fewer particles can often be used than in particle filters. Naturally, this varying cell size property plays havoc with our binary tree, requiring it to branch and merge. Our DIBE mechanism is a sophisticated, efficient algorithm that handles these tree operations without resorting to destroying and rebuilding the tree. It represents four man-months of work on its own. The zooming in and out is done in such a manner that the tree shape remains relatively balanced.

Our REST method does not treat particles individually or rely on laws of large numbers, as particle methods do, but instead calculates net birth and death rates on a cell-by-cell basis. (Actually, due to the time inhomogeneity we use times rather than rates, but we neglect this fact in our simple description herein.) In order to mimic particle movement, births can occur in a cell if there are particles in that cell or any of its neighbours. For example, to determine if a birth should occur in a cell, we would consider the net effect of: births due to observations, plus births to simulate a particle moving into the cell from a neighbour, minus deaths due to observations, minus deaths to simulate a particle moving out of the cell to a neighbour. By considering all particles in a cell together, differencing rates, and avoiding Monte Carlo simulations, REST has far less simulation noise than particle systems and, thereby, can provide similar accuracy with far fewer particles. Conversely, its computer codes are far more complicated and it only applies to regular signals on regular domains.
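The rate-differencing step for one cell can be sketched as below. The quantities `obs_gain` (per-cell gain from the latest observation) and `move_rate` (a constant nearest-neighbour movement rate) are illustrative stand-ins for the observation- and generator-dependent terms in the text; in REST itself these come from the likelihood and from the adjoint of the signal generator.

```python
def net_rate(cell, counts, neighbours, obs_gain, move_rate):
    """Net birth (+) / death (-) rate for one cell, by differencing.

    counts:     number of particles in each cell
    neighbours: adjacency lists between cells
    obs_gain:   per-cell signed gain from the latest observation
    move_rate:  rate at which one particle hops to a given neighbour
    """
    # Births simulating particles moving in from neighbouring cells.
    inflow = move_rate * sum(counts[j] for j in neighbours[cell])
    # Deaths simulating particles moving out to neighbouring cells.
    outflow = move_rate * len(neighbours[cell]) * counts[cell]
    # Net effect of observation-driven births minus deaths.
    observation = obs_gain[cell] * counts[cell]
    # One signed rate per cell replaces many Monte Carlo coin flips,
    # which is the source of REST's reduced simulation noise.
    return observation + inflow - outflow
```

Only the sign and magnitude of this single net rate per cell are then realized as births or deaths, rather than simulating every competing event separately.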

To further ensure that the calculations required remain relatively constant, we control the number of particles using an efficient push down technique. This technique is devised to add minimal additional noise and to be performed in conjunction with the births and deaths from observations and "particle movement". Another impressive feature of REST is that it can accommodate parameter estimation automatically, simply because it fine-tunes cell size automatically.

REST was invented by Kouritzin and developed by Ballantyne, Kim and Kouritzin (see Ballantyne, Kouritzin, Long, and Sun (2002) and Bauer, Kim, and Kouritzin (2002)). Lockheed Martin is in the process of filing a patent on this method and our implementation of it (see below).

**3. EXPLICIT SOLUTIONS THROUGH CONVOLUTIONAL FILTERS**

SERP and REST are excellent for problems where the dimension of the signal space is large (i.e. five or greater). However, when densities exist and the dimension is small enough that densities can be stored, there are efficient exact or nearly exact methods that can frequently outperform the best particle and discretization methods. Convolutional methods are nearly the perfect complement to SERP and REST in that they are designed for efficient computation of the actual conditional densities of filtering theory with little use of approximation. The Infinite Dimensional Exact (IDEX) method amounts to characterizing the continuous-signal, discrete-time observation filtering problems that can be solved without approximation via a single convolution, multiplication, substitutions, and Bayes' rule, and also refers to the algorithms to implement such filtering solutions. Approximate convolutional methods refer to the approximation of a very general class of filtering problems via an algorithm consisting of a small number of convolutions and transformations. The approximations in approximate convolutional methods are very tight on a path-by-path basis, converging to the optimal filter with an error of the order just shy of 1/(n!), where n is the number of convolutions used. Thus all convolutional methods easily produce optimal or nearly optimal algorithms in a very computationally efficient manner. However, the convolutional methods cannot handle large dimensional problems as well as particle methods can, nor can they, as of yet, handle signals that evolve on some irregular space or do not satisfy a stochastic differential equation. IDEX was introduced in Kouritzin (1998, 2000) and its scope was greatly expanded in the work of Kouritzin and Remillard (2002) and the current study of Van Weelden and Kouritzin.

Michael Kouritzin and Bruno Remillard have continued a general investigation of the mathematics involved in exact convolutional filters for vector-valued signals that will generalize the initial treatment in Kouritzin (1998). The first major step in investigating the generality under which IDEX filters hold is determining the class of vector-valued stochastic differential equations (SDEs) that yields a unique explicit solution of the form

X_t = h(t, ∫₀^t ⋯ ∫₀^{s₂} f(s₁, …, s_m) dW_{s₁} ⋯ dW_{s_m})

for some functions h and f, where W is a multi-dimensional Brownian motion, and the integrals can be taken as either iterated Itô integrals or multiple Wiener integrals.

Characterization of the corresponding functions is also part of the problem. This was done in the case m=1 in Kouritzin and Remillard (2002) and is a subject of current investigation in the case m>1. Surprisingly, it was discovered in Kouritzin (2000) and Kouritzin and Remillard (2002) that the class of SDEs yielding such a representation, even with m=1, is huge, especially in the case that the dimension of X is greater than that of W. To connect this representation to filtering, we mention that the allowable signals for the exact convolutional method consist of those SDEs whose Fokker-Planck evolution equation can be solved by a combination of Feynman-Kac representation, measure transformation, and these representations. The result is a very large class of SDEs with both nonlinear drift and dispersion coefficients, which are amenable to IDEX filtering. Also, these representations often improve particle methods, allowing particles to be simulated without Euler or Milstein approximations or the associated accuracy and efficiency costs.

This work has provided insights into approximate convolution methods. Eventually, techniques will be devised to combine convolutional and particle methods into a "super" method.


**Computer Implementation**

**1. BENCHMARK/NON-FILTERING PROBLEMS**

In consultation with our corporate partners, we developed the following twelve benchmark problems. They are close enough to our partners' real interests to be useful to them but still differ enough that we are allowed to describe them fully without compromising proprietary information. We do not subject the reader to all the finer details of our models herein, but they are available upon request. Also, the solutions to many of these problems can be found on our web page: http://www.math.ualberta.ca/Pints. Clearly, some of these problems contain unrealistic simplifications. These simplifications were arrived at through consultation with our corporate partners and deemed to be reasonable with respect to their real interests. The development of these problems themselves represents a large amount of work performed in the November 2000 to November 2002 timeframe.

The Altitude Tracking Problem tracks the altitude of an aircraft based upon barometric altimeter readings. The altitude signal is modeled as a two dimensional (position, vertical velocity) Itô equation and the observation is the altimeter reading after transmission to a ground station. In particular, the observation at a given time is formed by taking the altitude of the aircraft at this time, adding noise due to local weather variations and measurement noise, rounding the result to the closest one hundred feet, and then adding transmission noise. The measurement noise is typically taken to be i.i.d. Gaussian and the transmission noise is usually asymmetric with heavy tails.
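One synthetic altimeter observation can be generated as follows. The text fixes only the structure (add weather and measurement noise, round to the nearest one hundred feet, add asymmetric heavy-tailed transmission noise); the noise magnitudes and the shifted-Pareto choice for the transmission noise are illustrative assumptions.

```python
import random

def altimeter_observation(altitude, rng, weather_std=30.0,
                          meas_std=20.0, trans_scale=15.0):
    """One synthetic altimeter reading (all noise levels illustrative).

    altitude: true aircraft altitude in feet at the observation time.
    """
    # Weather variation and i.i.d. Gaussian measurement noise.
    reading = altitude + rng.gauss(0.0, weather_std) + rng.gauss(0.0, meas_std)
    # The altimeter quantizes to the closest one hundred feet.
    reading = 100.0 * round(reading / 100.0)
    # Transmission noise: shifted Pareto draw gives an asymmetric,
    # heavy right tail (paretovariate(3.0) has mean 1.5).
    transmission = trans_scale * (rng.paretovariate(3.0) - 1.5)
    return reading + transmission
```

Note that the quantization step makes the observation function non-invertible, which is part of what makes this a genuinely nonlinear filtering problem.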

The Search and Rescue Problem models a dinghy lost on the ocean surface to be located, tracked, predicted, and historically tracked using highly corrupted infrared camera readings. These cameras sense the body heat of occupants as well as combustion heat produced by a motor, but are also subject to temperature fluctuations of the ocean itself. The dinghy has a seven dimensional state space consisting of planar position, planar speed, orientation, angular velocity, and motion type (adrift, rowing, or motoring). The first six states evolve according to an Itô equation and the last state changes according to a Markov chain. The forward operator decomposes into sums of operators with no more than four effective states due to independence. The observations are synthetic spatial pixel-by-pixel heat readings over the area of detection that are modeled as noise superimposed upon a small constant heat reading over the dinghy surface. Usually, the observation noise is set to be additive and independent in space and time with some fixed distribution. However, non-additive, non-independent noise has also been considered and constructed. Various dinghy sizes have been considered, from 1 to 500 pixels squared. For a 125 pixels squared dinghy in a 65,536 pixels squared detectable area, we consider pixel-by-pixel signal to noise ratios to the extreme of 1 to 41, which is approximately one-eighth the minimum detectable by the human eye and about the minimum possible (using the optimal filter) for reasonably reliable tracking.

The Fish Tracking Problem models a fish swimming in a tank, neglecting fish deformation and wire frame effects. The fish reflects off the sides of the tank and is modeled as a two-dimensional Skorohod stochastic differential equation. We neglect vertical motion in this example. As opposed to the previous example, we only model position in this problem. For simplicity, we take the observable fish shape to be a constant square block of varying sizes. The observation model is identical to that of the Search and Rescue Problem; only the type of camera used is different: instead of an infrared camera, we assume a digital vision camera in this problem.

The Performer Prediction Problem predicts the future state of a single performer moving around on a stage. Her three dimensional position, forward speed (that can be negative), and orientation are modeled as an Itô equation constrained to stay on a stage and biased to face the audience. The vertical velocity is taken to be independent of the other state variables. Acoustic measurements are formed by timing supersonic noise from each of four perimeter speakers to a microphone on the performer's person. Noise is present due to reflections and blockage of the sound waves, meaning observations can be missed. The observations are modeled as the correct travel time of the sound wave multiplied by a position-dependent Bernoulli random variable (indicating a missed observation) plus additive noise.

The Pod Tracking Problem is concerned with tracking a pod that is constrained to move along a two-dimensional manifold of three-dimensional space. Physically, the pod is a sensor external to an aircraft connected by a pole, and we need to estimate the position of the pod based on a second sensor in the plane. A Stratonovich equation with coefficient vector fields designed to keep the pod on the manifold is used to model the signal. The simplest model used has reflections at the boundary, a deterministic pull back to the origin proportional to the distance from the origin along the manifold, and additive noise, i.e. it behaves like an Ornstein-Uhlenbeck process, and the simplest manifold is the surface of a hemisphere. However, these assumptions are not quite correct and more complicated models and manifolds need to be employed for high fidelity. The observations can be either: 1) The projection of the pod position onto a plane plus additive uniformly distributed noise, or 2) Angle-only information, consisting of the two angles of the pod relative to an outward normal plus additive noise.

The Weakly-Interacting Multi-Target Problem involves tracking an unknown number of (seven dimensional) dinghies that interact weakly to avoid collision and to maneuver together. The model is devised so that if there is just one dinghy it undergoes the motion described in the Search and Rescue Problem. There can be anywhere from zero to six dinghies and we must detect the presence, determine the correct number, determine the dinghy locations, and predict where they will be at some future time. The observations are as described in the Search and Rescue Problem except there are multiple dinghy images superimposed with the observation noise on the detectable region.

The Volatility Parameter Problem is concerned with estimating the future volatility and the parameters of a stock volatility model based upon price information. In particular, the volatility signal is a non-negative mean-reverting Itô equation model and the discrete-time observations are the logarithm of the returns process, defined as the current price divided by the previous price. To allow heavy tails, we model observations as some nominal value (deterministic drift) plus the volatility times stable random variables. Most often, we need to estimate the parameters of the signal and observation processes using a combined parameter estimation and filtering algorithm. These parameters include: a volatility time constant, the mean volatility in log scale, the amplitude of the volatility noise, the index of a centered symmetric α-stable law, and a price drift constant.
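A simulator for this kind of signal/observation pair might look like the sketch below. All parameter values are illustrative, the mean reversion is done in log scale (one standard way to keep volatility non-negative), and a Student-t draw stands in for the symmetric stable observation noise, since the two share the heavy-tail behaviour relevant here; none of these choices are the partners' calibrated model.

```python
import math
import random

def simulate_volatility_returns(n, rng, kappa=2.0, mean_log_vol=-1.0,
                                vol_noise=0.3, drift=0.0005, dt=1.0 / 252):
    """Simulate n log-return observations driven by a hidden,
    mean-reverting volatility signal (all parameters illustrative)."""
    log_vol = mean_log_vol
    observations = []
    for _ in range(n):
        # Euler step of the mean-reverting Itô equation in log scale.
        log_vol += kappa * (mean_log_vol - log_vol) * dt \
                   + vol_noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        vol = math.exp(log_vol)          # hidden volatility, always > 0
        # Student-t(3) draw: heavy-tailed stand-in for stable noise.
        heavy = rng.gauss(0.0, 1.0) / math.sqrt(rng.gammavariate(1.5, 2.0) / 3.0)
        # Observation: log-return = drift + volatility * noise.
        observations.append(drift + vol * math.sqrt(dt) * heavy)
    return observations
```

A filter for this problem sees only the returned observations and must recover both the hidden volatility path and the model parameters.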

The Ordered Multi-Target Tracking Problem is concerned with tracking an unknown number of targets (collectively the signal) on a compact interval. These targets repulse their nearest neighbours and the boundaries, but not so strongly that they cannot cross or leave the interval and die. Each particle has its own "power" and "shape" that are independent of everything, evolve slowly, and can be represented on (0, ∞)^{2}. There are small birth and death rates. A new particle is positioned uniformly over the interval with a birth rate independent of the number of living particles or their positions. A constant death rate applies to each living particle. The initial conditions and rates are constructed in order to anticipate between thirty and sixty targets most of the time. We observe a time-dependent spatial impulse response of some unknown background function plus the collection of particle shapes centered at their locations and multiplied by their powers. This is all obscured by additive noise, and there are parameters to be estimated in the background function as well as in the signal itself. Hence, at each observation time, the observation is a time-dependent, incompletely determined function of the signal plus noise at that time. The observations focus on different parts of the signal at different times.

The Random Environment Filtering Problem refers to the problem of filtering a non-Markov diffusion in a random environment as well as determining close path-by-path approximations to this signal. For a fixed environment, the signal can be constructed as a path-continuous Markov process using Dirichlet form theory. However, the drift is too rough to interpret the solution as more than a formal solution to a stochastic differential equation, and the signal becomes non-Markov when the environment realization is incorporated. The motivation for this problem comes from the Search and Rescue Problem when the dinghy is in the high seas and the sea provides a random environment through which the dinghy must maneuver. The continuous-time observations are taken to have the classical form, except that there can be correlation between the observation noise and the signal. For implementation purposes, we also find it worthwhile to consider the filtering problems where: I) The signal is constrained to live in a compact interval by reflections at the boundary, and II) in addition, the gradient of the Brownian sheet (or other random field) appearing as the "drift" in the diffusion in the random environment is replaced with a truncated (non-square summable) Fourier expansion of the same. On compact intervals, we treat the filtering problem in a random environment as a combined filtering and estimation problem. In the case that the Fourier expansion is not truncated, it is functional estimation and, otherwise, parametric.

The Submarine Location Problem involves the location and tracking of an underwater submarine based upon sound waves that it emits and that are received by sonobuoys. Due to randomness, these sound waves evolve as a stochastic wave equation in a random environment. The observations are the partial, corrupted measurements received by the collection of sonobuoys. The signal can be taken to be the (infinite-dimensional) sound wave itself, requiring a second estimation problem of finding the submarine itself from the wave estimates. In the stochastic control variant of the problem, one is allowed to place additional sonobuoys in the optimal places.

The Pollution Problem is the monitoring of the flow of bacterial or chemical pollution that reacts, drifts, and diffuses through a sheet of groundwater (or in a region of space). The signal is therefore a stochastic reaction-diffusion equation, usually driven by Poisson measure noise. This noise source is the random dumping of the pollutant into the water by various factories along its way. The observations are then taken to be a collection of corrupted measurements representing the average value over a lake or pinpoint measurements at well sites. One is interested in answering vital questions such as: "Which factory is the biggest polluter?" "In which regions is the water safe to drink?" and "What will the water quality be like at this point in the future?" all based upon filtering theory.
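A deterministic skeleton of such a signal can be stepped with simple finite differences, as sketched below in one space dimension. The coefficients and the point-source `dumps` list (standing in for the Poisson-driven factory dumping) are illustrative assumptions; the actual benchmark signal is a stochastic reaction-diffusion equation.

```python
def pollution_step(u, dt, dx, diffusion, drift, decay, dumps):
    """One explicit finite-difference step of a 1-D pollution model:

        du/dt = diffusion * u_xx - drift * u_x - decay * u + dumping,

    with fixed (zero) boundary values.  u is the concentration on a
    grid; dumps is a list of (site, amount) point injections standing
    in for the Poisson dumping noise.
    """
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2   # diffusion
        adv = (u[i + 1] - u[i - 1]) / (2.0 * dx)             # drift
        new[i] = u[i] + dt * (diffusion * lap - drift * adv - decay * u[i])
    for site, amount in dumps:                               # dumping events
        new[site] += amount
    return new
```

Iterating this step between observation times gives the forward evolution a filter must invert from the lake-average or well-site measurements.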

The Option Pricing Problem is to produce a pathspace empirical measure that can be used to price all pathspace and terminal options simultaneously for a collection of stocks. The model consists of a collection of stocks that jointly influence interest rates through the empirical measure of the stocks. Such randomness in the interest rates is natural and means that prices brought back to the current time are subject to random weighting. Thus, a resampled particle system like SERP can be expected to outperform a straight Monte Carlo method in terms of performance per computation. Naturally, this problem is not filtering but is still estimation and solvable with minor modifications to our filtering algorithms.

**2. OBJECT ORIENTED PROGRAMMING**

We have a long and growing list of both sample problems and filtering algorithms. Therefore, to ease the burden of new implementations and reduce duplication of code segments, we are modifying the object-oriented interfaces in all our algorithms in order to improve the ability to swap various problems and filters. In addition, we are upgrading our display capabilities using external 3-D ray tracing code. The first end goal of this effort is to trivialize the introduction of new problems, algorithms, and comparisons, allowing users with little programming experience to create, implement, and test new algorithms with ease. The second end goal is to create a more flexible, expressive display mechanism. We spare the reader the details of how we are approaching this goal but mention that very considerable effort is being spent with the consent and encouragement of our corporate partners.

**3. PARTICLE FILTERS**

Previously, we implemented and compared the classical weighted particle filter, Del Moral and Salut's interacting particle filter, and MIBR on our benchmark Search and Rescue, Fish Tracking, and Performer problems. There was a very significant improvement in going from the weighted to the interacting to the MIBR filter, as reported previously. Therefore, we will only compare SERP to the best of the earlier implemented methods, MIBR.

We have implemented and tested SERP on the Search and Rescue, Fish Tracking, Performer, and Option Pricing problems, empirically solving the stochastic optimal control problem for a feedback control resampling parameter as a function of the particle system state for each problem. Work is ongoing to implement SERP on the Weakly-Interacting Multi-Target problem. Our unveiling and first demonstration of SERP (on the Fish Tracking problem) occurred at Lockheed Martin Naval Electronic and Surveillance Systems in February 2002. Early results showed a marginal increase in speed and a six to ten percent improvement in performance over MIBR using the same number of particles. However, further work on streamlining the code (as we did earlier for MIBR and all other methods) and on the stochastic control selection problem produced an even faster, more accurate SERP algorithm, typically beating MIBR by ten percent in speed and twenty percent in estimation. We now know that this improvement in performance increases as we consider the larger, more challenging problems. The elegance of the solution and the improved performance of the method convinced Lockheed Martin to patent the algorithm and implementation. We agreed to work with them on that.

It is believed that SERP will work on any problem for which the classical weighted particle method works. Therefore, based upon the works of Kurtz and Xiong (1998) and Kouritzin and Xiong (2002), we believe that SERP can replace MIBR everywhere and even work on problems where there is correlation between the observation noise and the signal. Moreover, the optimal control problem can promote either time to localization or longer-term error, depending upon which is more critical in the problem at hand, so there is extra flexibility that is not present in earlier particle filters.

When being used with discrete-time observations, most computational effort for SERP (and other particle methods) is usually spent evolving particles forward using Euler or Milstein approximations. Therefore, explicit solutions can be used (as described above), when applicable, to create an incredibly fast and accurate method. We sometimes refer to particle simulation by use of explicit solutions as supercharging particles.

To cope with more technical problems of our corporate partners, we implemented the following variants of MIBR. First, the introduction of a cemetery state was used as a mechanism to perform detection and tracking, as opposed to tracking a signal that is known to exist in the domain. A blurring of the sort that is used in simulated annealing, with the same reduction of the blurring in time, was applied to the particle dynamics and the inverse of the observation function to resolve problems with highly directed signal dynamics and very definite observation signatures. The particle control technique has been modified to not duplicate particles about which there is no information, but instead to resuscitate a recently deleted particle, thereby removing artifacts near areas of the domain, such as the edges, that often contain particles about which there is little knowledge. This is especially helpful in the case of a strong signal with little noise filtered using few particles. Most importantly, it was realized that the historical information stored to provide the path-space filter could be used to handle time-dependent observation functions. For example, if the observations depend on the path of the signal over the last few moments of time, the branching filter can still optimally calculate the probability that the signal is at a certain state or had taken a particular path because it can access data regarding the potential past paths of the signal while assessing the current observation. Many signals in the real world leave observable trails but are hard to detect directly, and that is why this discovery has wide applicability.

Simulations have been constructed to show the use of the advanced historical, or path-space, tracking for MIBR and SERP. The current simulations demonstrate the technique in the search and rescue situation, which had been used previously to showcase the particle methods. The new simulations offer an updated view of the path-space capabilities of the MIBR and SERP filters. It is important to note that the filter provides a full approximate distribution for the seven-dimensional dinghy state at past, present, and future times, as well as full path-space information. In the anti-narcotic smuggling example, the path-space estimate can be used to reconstruct the path over which the smugglers may have discarded or deposited their narcotics and the future positions where the smugglers could be intercepted.

Adaptations to MIBR and SERP allow them to estimate the parameters of the signal rather than assuming a completely known model. It is important that any such filter also retain its efficiency, so careful work was required. To date, the parameter-inclusion filter, the GMM, and the recursive least squares filter methods have been implemented, with the parameter-inclusion filter being the clear loser. The recursive least squares estimation algorithm works well in practice, but mathematical proofs have yet to be constructed.
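
For readers unfamiliar with the recursive least squares recursion itself, here is a minimal scalar sketch; the model and noise levels are invented for illustration, and the combined filter/parameter scheme described above is considerably more involved.

```python
import numpy as np

def rls_update(theta, p, x, y, lam=1.0):
    """One recursive least squares step for the scalar model y = theta * x + noise.
    lam is a forgetting factor (lam = 1 gives ordinary RLS)."""
    k = p * x / (lam + x * p * x)        # gain
    theta = theta + k * (y - x * theta)  # correct the estimate with the residual
    p = (p - k * x * p) / lam            # update the (scalar) inverse-information
    return theta, p

rng = np.random.default_rng(4)
theta_true, theta, p = 2.0, 0.0, 100.0
for _ in range(500):
    x = rng.normal()
    y = theta_true * x + 0.1 * rng.normal()
    theta, p = rls_update(theta, p, x, y)
```

The recursion never re-solves the full least squares problem; each new observation costs a constant amount of work, which is what makes the approach attractive inside a filter.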

**4. REST IMPLEMENTATION**

We have implemented and tested REST, initializing it with very few cells of large size in which to localize the signal. As time progresses, REST automatically fine-tunes its cell size, splitting the cells in the most profitable dimension. If the signal becomes lost, it recombines cells in the most critical dimension. We deem comparisons between REST and particle filters to be fair if either the particle filter uses a fine enough time discretization with its Euler or Milstein approximations of the particle evolutions to closely match the accuracy of REST, or a discretized operator is used in REST so that it collects, compares, and performs the net effect of a few operations in order to save computations, reducing its accuracy to that of the particle systems. Otherwise, it is clear that REST is more accurate but the particle system may be faster. With the noted equalization, REST usually significantly outperforms the particle filters, including SERP. So far, we have implemented REST on the Fish Tracking Problem and found it to be the clear winner in terms of computations and performance, measured by time to localize the fish and average longer-term error. The advantages of REST are conjectured to stem from the introduction of far less simulation noise and from the fact that REST initially only tries to determine which of a few large cells contains the signal, rather than attempting to pin down the precise signal location right away. These conjectures have been partially confirmed by simulations, especially simulations with very few particles, where the particle filters often do not find the fish.
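
The split-on-demand behaviour described above can be sketched roughly as follows. The "largest extent" splitting criterion and the mass threshold are stand-ins for REST's actual rules, which are not spelled out in the text; only the mass-conserving split/refine mechanics are illustrated.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    lo: list            # lower corner of the rectangular cell
    hi: list            # upper corner
    mass: float = 0.0   # unnormalized conditional mass carried by the cell

def split(cell):
    """Split along the dimension of largest extent -- a stand-in for REST's
    'most profitable dimension'; mass is shared equally between halves."""
    d = max(range(len(cell.lo)), key=lambda i: cell.hi[i] - cell.lo[i])
    mid = 0.5 * (cell.lo[d] + cell.hi[d])
    left_hi = list(cell.hi); left_hi[d] = mid
    right_lo = list(cell.lo); right_lo[d] = mid
    half = 0.5 * cell.mass
    return Cell(list(cell.lo), left_hi, half), Cell(right_lo, list(cell.hi), half)

def refine(cells, threshold):
    """Split every cell whose mass exceeds the threshold; total mass is conserved."""
    out = []
    for c in cells:
        if c.mass > threshold:
            out.extend(split(c))
        else:
            out.append(c)
    return out

# One large initial cell localizes the signal coarsely, then refines.
cells = refine([Cell([0.0, 0.0], [4.0, 1.0], mass=1.0)], threshold=0.6)
```

Recombination when the signal is lost would be the inverse operation: merge adjacent sibling cells and add their masses.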

We also implemented REST on the Performer problem. However, discretization methods have a huge disadvantage relative to particle filters in this problem: the forward operator decomposes into a sum of operators "of smaller dimension" due to independence, which particle filters can exploit but REST cannot, because the "particles" in REST do not evolve independently as the signal does. This means that the particle filters effectively solve a smaller-dimensional problem than REST does. This problem-inherited advantage proved too great for the usual superiority of REST to overcome. We were impressed that REST produced numbers of the same order as the particle filters.

Our computer code for REST is general, allowing it to work in any dimension and with a variety of signal and observation models. However, it takes quite a few off-line mathematical calculations to produce the adjoint operator, complete with boundary values, for our more complicated multi-target problems. Moreover, REST relies on fairly regular differential operators and domains. Hence, its implementation for the multi-target problems is still ongoing.

**5. IDEX IMPLEMENTATION AND PARAMETER ESTIMATION**

The IDEX algorithm has been implemented and an initial practical simulation has been completed for the Altitude-Tracking and Volatility problems. In addition, IDEX has been generalized to handle predictive models driven by Lévy process observations and, simultaneously, to estimate model parameters using least squares and maximum likelihood methods in the volatility problem. The program was applied to estimate mean-reverting stochastic volatility using symmetric α-stable-driven price returns as observations. In particular, the k^{th} observation was Y_{k} = σ_{t_{k}} ε_{k}, where σ_{t_{k}} is the stochastic volatility at time t_{k}, and (ε_{k}) are independent symmetric α-stable random variables with centers zero and unit spread parameters. Five parameters, including α, were simultaneously estimated. (A complete description of the parameters was given previously when the volatility problem was introduced.) The method was proven to work mathematically and experimentally in Kouritzin, Remillard and Chan (2001). The implementation was done by C. Chan, Kouritzin and Wiebe.


**Consistency and Rates of Convergence**

A large body of theoretical work has been developed on nonlinear particle filters in general. In 1998, Kurtz and Xiong used particles to show existence and uniqueness for SPDEs, including the filtering equations. In 1999, Pierre Del Moral and Laurent Miclo completed a treatise on particle methods, especially the interacting one, but these works include neither MIBR nor SERP, nor all possible characterizations of performance. Douglas Blount and Michael Kouritzin have proved consistency for MIBR. David Ballantyne and Mike Kouritzin have proved convergence for SERP. Both of these follow from a general result of Blount and Kouritzin.

In addition, Pierre Del Moral, Michael Kouritzin, and Laurent Miclo jointly developed and analyzed an improved interacting particle method (IIPM) that was not included in the Del Moral, Miclo 1999 treatise. IIPM performs very similarly to MIBR.

As mentioned, the MITACS-PINTS team has introduced IIPM, MIBR and SERP for asymptotic solutions to continuous-discrete filtering problems with pathspace capabilities.

Suppose that X_{t} is a Markov process on a Polish space and we wish to calculate the measure-valued process π_{[0,T]} = P({X_{s}, 0 ≤ s ≤ T} ∈ · | F_{T}^{Y}), where F_{T}^{Y} = σ{Y_{s}, 0 ≤ s ≤ T} and Y is a distorted, corrupted, partial observation of X. Then, in all three methods we construct a particle system with observation-dependent sampling on n initial particles whose empirical measure up to time T, π_{[0,T]}^{n}, closely approximates π_{[0,T]}. Specifically, we establish an unnormalized pathspace filtering equation and prove that an unnormalized empirical measure for our particle filter converges (as n → ∞) in distribution to the unique solution of this pathspace DMZ equation. Recently, Douglas Blount and Michael Kouritzin (2002) have established almost sure convergence for the finite dimensional distributions of approximations of a large class of historical measure-valued processes, including many constructed from martingale problems, nonlinear martingale problems and SPDEs. The conditions are general enough to handle all standard approximations of continuous or discrete-time filtering problems in which the historical variant of the Duncan-Mortensen-Zakai equation yields an almost surely unique solution. Almost-sure consistency for our MIBR is thereby established. The thesis of Ballantyne (2002) then applies this result to establish the same strong convergence for SERP.

There are several convergence results for IIPM and its intellectual progenitor, the classical interacting method, in Del Moral, Kouritzin and Miclo (2002). Douglas Blount and Michael Kouritzin are working on two other convergence results for empirical measures of particle systems, including the systems produced by MIBR and SERP. The first result establishes general conditions under which the empirical measure of the particle path converges (almost surely in the topology of weak convergence for Borel measures) to a desired historical measure-valued process. The technique is to convert the finite dimensional distribution result of the previous paragraph into full pathspace distributional convergence through new tightness arguments. The second study will establish rates for this convergence in a more specific, non-historical setting. The rates are in terms of both the number of particles and the sampling rate of the observations.

Kouritzin and Long are currently investigating the convergence of the parameters in our recursive least-squares parameter estimation and filtering algorithm. The algorithm is designed assuming access to the true filter of the model with the current parameter estimate substituted in. Unfortunately, in practice one only has access to a distribution that depends on all past estimates in some way. Therefore, there are two natural steps to prove convergence: (1) Prove convergence of the parameter estimates using the filter of the model with only the current parameter estimate substituted in. This was done based on the convergence method of Ljung's scheme. In particular, we combined ideas from Gerencsér (1992) and Kouritzin (1994) to complete this step. (2) Prove a strong ergodic property establishing that the difference between the actual available distribution and the filtering distribution just used converges to zero. To do this, we first suppose that the current and recent past estimates are very close to the true parameters. Then, it is possible to show using Step (1) that the estimates get trapped and converge to the true parameters. Next, one can use a coupling idea to keep the parameter estimates close long enough that they get trapped with some small probability. Finally, we use renewal arguments to ensure that they get this close infinitely often. The actual proof is quite involved and a few details are still being worked out. Once this convergence is proven, it is possible to adapt arguments of Kushner and Budhiraja to show that the average filtering error also converges. (We realize that there is a gap in Kunita's (1971) paper, so we have to be careful about the conditions required for the Kushner-Budhiraja argument.)

MARKOV CHAIN METHODS OF FILTERING

Our motivation for REST came from our earlier Markov chain methods. Whereas we have not as of yet been able to prove convergence for REST as the cell size decreases and the initial number of particles increases, we have been able to prove convergence for the Markov chain methods. We believe that our proofs can eventually be generalized to handle REST.

Motivated by our Markov chain approximations of spatial signals (discussed below), Kouritzin, Long, and Sun developed a Markov chain method of filtering for signals that evolve in a regular bounded subset of Euclidean space. This method differs from the standard Hidden Markov Model approach in that the signal itself is not approximated, but rather its unnormalized conditional distribution. The result is an efficient, direct, computer-workable approximation to the filtering equations that is not the (unnormalized) conditional distribution of some approximate predictive model. Moreover, the computer code, while not quite as simple as that implementing particle methods, is easy to implement and does not suffer as dramatically from dimension-dependent signal problems as traditional PDE equation solvers do.

Kouritzin,
Long, and Sun (2001) considered signals, described by Skorohod SDEs, that
reflect at the boundary of a rectangular region of Euclidean space.
We used classical, continuous time observations. Our approximation
was done in two distinct steps: we first employed the wideband approximations
of the Zakai equation studied by Kushner and Huang (1985, 1986) and then
devised a further Markov chain approximation using the technique we developed
to approximate stochastic reaction-diffusion equations, discussed below.
The upshot of these approximations is an easily implemented filtering method
that shares the benefit of working in higher dimensions with particle filters
but runs faster than the best particle filter. (It was the inspiration
for REST.) Naturally, there will always be more assumed conditions
required on the underlying predictive model than is the case for particle
filters. We proved consistency of this method using Kushner and Huang's
result, time change methods, Dirichlet forms, stochastic calculus, and
stochastic convergence.


**Multi-Body Tracking**

In June 2001, another particle breakthrough was achieved by Ballantyne, Chan, and Kouritzin using MIBR on the weakly interacting multi-target problem. We were able to detect and track a small-unknown number (up to four) of simulated, weakly interacting bodies all evolving randomly over a sheet of water. From the theoretical side this simply amounted to imbedding all the targets into a purely atomic Markov measure-valued process that becomes the signal. Each particle then has a number of bodies in different locations. On the practical side, we had to solve a whole host of new issues like "when do we allow a particle to switch its number of targets?", "What is the best initial allotment of particles in terms of number of bodies and their positions?" and "How should the performance of the filter be judged?" These were all successfully answered and the filter worked remarkably well, given sufficient computing resources. Its success attests to the efficiency of MIBR. In the process of handling an unknown number of targets, we also solved the detection problem as a side effect. Thus, we reduced the normal sequence of track detection, track association, tracking, and track removal to a single filtering problem on a larger space.

We should have a working implementation of this problem with SERP in November 2002. It is expected to outperform MIBR and allow us to track up to five targets with careful solution of the stochastic control problem for selecting an optimal ρ resampling value. SERP's resampling method allows us to consider potentially more efficient techniques for target count determination.

Gentil and Remillard obtained an efficient algorithm for the detection of ships using a sequence of noisy images. The algorithm computes the optimal filter easily. We also present some simulations, in the case of one or two ships, showing that the prediction of the ships' positions is quite good.

Sensor Vibration Neutralization

One of Lockheed Martin Naval Electronics and Surveillance Systems' and Lockheed Martin Canada's most vital concerns is to improve the onboard performance of the Canadian Wescam sensor. This sensor has been installed in several U.S. military aircraft and would be installed in many more if its performance were more satisfactory. The most practical methods of improving performance are via software-implemented mathematical algorithms. There are three separate problem areas in which nonlinear filtering can improve the performance of the Wescam sensor. Briefly, they are: initialization, remote pod neutralization, and low-observable tracking of vessels. We have essentially provided our sponsors with their most promising methods of improving the low-observable tracking performance of the Wescam sensor through our work on the SERP and REST filters. Our convolutional filter appears ideal for problems like remote pod movement neutralization. This Wescam enhancement is part of the realistic and quite technical motivation for our previously mentioned Search and Rescue problem.

To explain the Pod Tracking problem further, we note that the sensor must be mounted externally to be effective, and this external sensor pod is subject to random vibrations and forces. Our goal is to neutralize the effect of the sensor's movement on its readings. This is done by a filtering algorithm and a second sensor in the craft, whose only task is to estimate the pod's motion. Depending upon the choice of this second sensor, different observation models are appropriate. Due to the distances involved, small errors in the pod location estimate have a dramatically amplified effect on the first sensor's readings. Therefore, we wanted to avoid the extra approximations of particle or space-discretization methods and use IDEX. Currently, we believe we have a workable explicit solution for the stochastic movement of the pod in terms of a small number of multiple Wiener integrals, in the sense that the corresponding Stratonovich equation should adequately model the pod movement and the explicit solution is computer workable. Most likely, we will have to modify our solution later to account for the pod behaviour more fully. However, the explicit solution method has the highly desirable properties that the solution is forced to stay on the manifold even in the presence of numeric error and that the evolution is in two dimensions (the dimension of the manifold), not three as direct particle or space discretization methods would require.

The mathematics behind this solution is reasonably involved but general enough to accommodate more general manifolds. Basically, we prove such things as that there are explicit solutions involving the one and two-parameter stochastic integrals if and only if the vector fields corresponding to the columns of the dispersion coefficient in the Stratonovich equation are two step nilpotent and the "drift" vector field satisfies some more general condition. Then, we study the class of such explicit solutions and show that it is large and contains many suitable models for this problem. In particular, we show that there are solutions that stay on the desired manifold and have the desired drift and diffusion properties on this manifold. The work is being done by Kouritzin, Remillard, and Van Weelden. Kouritzin and Wiersma are implementing the solution.

We are also talking with Lockheed about the possibility of combining the Pod Tracking problem with the Search and Rescue problem to better reflect reality and produce one large problem.

Communication Networks Applications

Our new corporate partner, Optovation of Ottawa, is paying us to investigate fibre optic signal properties. In particular, they are building a product that tests the quality of optic signals at various frequencies, using a lot of proprietary hardware and software. One of the most difficult problems left for them is the accurate simultaneous estimation of optical signal to noise ratios, peak powers, and bit rates for all the carrier frequencies on a fibre. In the Ordered Multi-Target Tracking problem, we are solving a simplified version of this problem for them. In early September 2002, Hailes and Kouritzin visited Optovation's lab to learn the characteristics of this problem and collect some data. Since then, we have built a first model for their problem including the targets, their interactions, the optical noise, and the electrical noise. The problem and model are naturally addressed by filtering theory. However, due to the huge size of this problem (up to eighty interacting targets), we will have to make significant advances in multi-target tracking and applying REST to such tracking in order to help them. Even with knowledge of problem specific simplifications that can be made, the problem seems too large for anything but possibly REST. There is also a necessary parameter estimation that must be done to characterize the optical noise in this problem. We may try to do this within REST or else using our recursive-combined algorithm.

Michael Kouritzin established a new class of models described by a nonlinear stochastic parabolic equation with arbitrary càdlàg noise and arbitrary-order elliptic operators. The noise sources in these models include long-range dependent processes, heavy-tailed Lévy processes, and composite processes like iterated Brownian motion. It is believed that these models are a reasonable starting point for modeling such events as the flow of information through a complicated communication network. Filtering would then be used to determine more "internal" quantities.

Mathematical Finance Applications

We hope to find new corporate sponsors in the mathematical finance and communication industries. Here, such data as a collection of stock prices or the number of jobs at various nodes of a network usually form the observation process, and long-range dependent or heavy-tailed models for observation processes are used for fidelity to reality. Thus, the long-range dependent processes are not Markov. However, as shown in a recent paper by M.L. Kleptsyna, A. Le Breton and M.C. Roubaud, entitled "An elementary approach to filtering in systems with fractional Brownian observation noise", there are still methods of constructing filters in certain long-range dependent observation settings.

Application of filtering and particle systems to mathematical finance is a growing area. Hence, it seems natural to apply our algorithms to this area in the hope of attracting an industry partner. In the volatility tracking problem, we model and track stochastic volatility together with five unknown parameters using IDEX. In fact, due to the nature of the model, we can compute the maximum likelihood parameters from the filter. Despite the large number of parameters to estimate, the model still performed very well when applied to real stock data. We published some of the results in Kouritzin, Remillard and C. Chan (2001) and believe that this technique could benefit an industry partner.

When pricing options using natural random interest rate and stock models, the future price must be brought back to the current time using a random weighting. Several books suggest constructing option prices using straight Monte Carlo methods. However, this random weighting means that a resampled particle system like SERP should outperform straight Monte Carlo pricing. We have applied SERP to a simple (single stock, bank account) model where the bank account interest rate depends on the stock price. A minor modification of SERP is then used to price all pathspace and terminal options simultaneously through the creation of a "pricing measure". The results were impressive, requiring far fewer particles than straight Monte Carlo. We look forward to testing SERP on the multi-stock scenario. In November and December, Jarett Hailes and Mike Kouritzin will meet with contacts from the finance sector in an attempt to secure a corporate partner and justify further work in this area.
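
The idea that random discount weighting favours a resampled particle system can be sketched as follows. The state-dependent short rate, its coefficients, and the effective-sample-size resampling rule are all illustrative assumptions, not the actual SERP pricing algorithm or a model from the text.

```python
import numpy as np

rng = np.random.default_rng(7)

def price_call(n, resample=True, strike=1.0, sigma=0.2, T=1.0, steps=50):
    """Weighted Monte Carlo price of a call when the short rate depends on
    the stock price (here r(S) = 0.02 + 0.03*min(S, 2), an invented example).
    Accumulated discount factors act as particle weights; resampling when the
    weights become unbalanced is the SERP-like ingredient."""
    dt = T / steps
    s = np.full(n, 1.0)
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        r = 0.02 + 0.03 * np.minimum(s, 2.0)          # state-dependent short rate
        z = rng.normal(size=n)
        s = s * np.exp((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z)
        w = w * np.exp(-r * dt)                       # random discount weighting
        ess = w.sum() ** 2 / np.sum(w ** 2)           # effective sample size
        if resample and ess < n / 2:
            idx = rng.choice(n, size=n, p=w / w.sum())
            s, w = s[idx], np.full(n, w.sum() / n)    # equalize, conserve mass
    return float(np.sum(w * np.maximum(s - strike, 0.0)))

p = price_call(4000)
```

With `resample=False` the same function reduces to straight weighted Monte Carlo, which makes the two approaches easy to compare at a fixed particle count.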

Robotic Lighting Control Application

Acoustic Positioning Research (APR) was so impressed with our algorithms and the possibility of improving their product that they joined us as a corporate sponsor in November 2000. Will Bauer, president of APR, suggested that incorporating our algorithm into APR's product may increase the market for their product by fifty percent. He has also agreed to pay a small royalty to our Centre on each sale of their enhanced product, to reflect the fact that they will benefit from the previous work of the Centre. The first problem is to direct lights and sound effects automatically based upon inaudible high-frequency sound timing measurements. In the past, they have suffered from noisy measurements and poor prediction methods. We now have simulations solving this initial one-performer problem using MIBR, SERP, and REST. The results are impressive. The next large steps are to allow multiple performers and to identify parameters in the models.

The first APR problem was to predict the future position of a performer using timing measurements as described in the Performer Prediction problem. This problem has now been solved effectively using SERP and APR is incorporating it into their full system. Our solution can improve performance of the many systems that have been sold and are in the field.

Due to the current low cost of video equipment, APR is now also interested in tracking and predicting performers using digital cameras, also located on the perimeter of the stage. A small pause in our collaboration was agreed upon due to Kouritzin's sabbatical and the time required to incorporate the first solution. However, both APR and PINTS are eager to restart collaboration on the second problem, striving to predict multiple performers using the smallest number of cameras, by the end of 2003.

Model Approximation and Robustness

MARKOV CHAIN APPROXIMATIONS FOR SPATIAL SIGNALS

Continuous and discrete Markov chain approximations can be important in predictive modeling and computationally efficient filtering for a number of reasons. To dramatically increase the class of signals that can be filtered by our various methods, we have concerned ourselves with the Markov chain approximation of the extremely complex signals arising in multiple target tracking and other spatial filtering problems such as pollution or bacteria tracking within a sheet of water. Then we can substitute the computer-tractable approximation for the real signal and rely on robustness results of, for example, Kushner or Bhatt and Karandikar (1999, 2002) to justify our use of a simplified model.

With regard to constructing a general, powerful method of approximating complicated signals with a Markov chain, Kouritzin and Long have made crucial innovations that advance the existing approximations of Ludwig Arnold, Peter Kotelenez, and Douglas Blount. These innovations allow for driving noise sources, more computationally efficient implementations of novel Markov chain approximations, and a larger selection of spatial processes that can be so approximated. The innovations include enlarging the class of elliptic operators and the class of reaction functions that can be used in Markov chain approximations of stochastic reaction-diffusion equations, as well as improvements that allow for the use of much slower rates within these approximations to reduce the computational requirements. Kouritzin and Long's alterations dramatically slow the Markov chain state change rates, often yielding a hundredfold increase in simulation speed over the previous version of the method.

In an attempt to attract Stantec or another potential industry partner with interests in environmental monitoring, the aforementioned innovations were first incorporated into a stochastic reaction-diffusion equation motivated by pollution distribution. (See the 1995 book by Kallianpur and Xiong for background information on this problem.) Kouritzin and Long (2002) have established the convergence of Markov chain approximations to stochastic reaction-diffusion equations in both the quenched and annealed senses. Their first convergence result, a quenched law of large numbers, establishes convergence in probability of the Markov chains to the pathwise unique mild solution of the stochastic reaction-diffusion equation for each fixed path of the driving Poisson measure source. Their second result is the annealed approach, which establishes convergence in probability of the Markov chains to the mild solution of the stochastic reaction-diffusion equation while considering the Poisson source as a random medium for the Markov chains. These results are vital for the application of filtering theory to the pollution dispersion-tracking problem, as they can be combined with the robustness results of Kushner or Bhatt and Karandikar and the aforementioned particle filtering methods to create a computer-workable algorithm.

In more detail, Kouritzin and Long considered the stochastic model of ground water pollution, which mathematically can be written with a stochastic reaction diffusion equation. In the context of simulating the transport of a chemical or bacterial contaminant through a sheet of water, they extended a well-established method of approximating reaction-diffusion equations with Markov chains by allowing convection, certain Poisson measure driving sources and a larger class of reaction functions. This work applies to Lockheed Martin's interest in detecting and classifying oil slicks and vessel traces or wakes.

A weighted L^{2} Hilbert space was chosen to symmetrize the elliptic operator in the stochastic reaction diffusion equation, and the existence of and convergence to pathwise unique mild solutions of the stochastic reaction-diffusion equation was considered. The region [0,L_{1}] × [0,L_{2}] was divided into L_{1}N × L_{2}N cells, and the Markov chain approximation on these cells was analyzed as N → ∞.

The particles in cells evolve in time according to births and deaths from reaction, random walks from diffusion and drift, and some area-dependent births from the Poisson noise sources. In this stochastic particle model, the formalism allows for two kinds of randomness: the external fluctuation coming from the Poisson driving sources and the internal fluctuation from the reaction and drift-diffusion at the particle level. Independent standard Poisson processes defined on another probability space were used to construct the Markov chains by the random time changes method.
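
A toy one-dimensional version of such a particle-per-cell Markov chain might look like the following; the grid, rates, and linear birth/death reaction are invented for illustration and are far simpler than the model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def step(counts, diff_rate, birth, death, dt):
    """One time step: Poisson numbers of nearest-neighbour jumps (diffusion)
    and of births/deaths (reaction) in each cell; jumps that would leave the
    domain are suppressed, keeping all counts non-negative."""
    new = counts.copy()
    n_cells = len(counts)
    for i in range(n_cells):
        jumps = min(counts[i], rng.poisson(counts[i] * diff_rate * dt))
        for _ in range(jumps):
            j = i + rng.choice((-1, 1))
            if 0 <= j < n_cells:
                new[i] -= 1
                new[j] += 1
        births = rng.poisson(counts[i] * birth * dt)
        deaths = rng.poisson(counts[i] * death * dt)
        new[i] += births - min(deaths, new[i])
    return new

counts = np.zeros(20, dtype=int)
counts[10] = 200                      # point release of contaminant
for _ in range(50):
    counts = step(counts, diff_rate=0.5, birth=0.1, death=0.1, dt=0.1)
```

Because birth and death rates balance here, the total particle count is conserved on average while the initial point mass spreads across the grid.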

In a second work on the stochastic model of water pollution, which mathematically can be written as a stochastic partial differential equation driven by Poisson measure noise, Kouritzin, Long and Sun (2002) establish a more general annealed law of large numbers. It shows convergence in probability of our Markov chains to the solution of the stochastic reaction-diffusion equation while considering the Poisson source as a random medium for the Markov chains. Our proof method for the main result is substantially different from that of the previous work of Kouritzin and Long (2002), which used the weak convergence method. Here, we directly apply the Cauchy criterion (convergence in probability) to our Markov chains and utilize the nice regularity of the Green's function with a delicate iteration technique. (The usual Gronwall's Lemma does not work in our case.)

STOCHASTIC MODELS OF THE SPREAD OF POLLUTION

The results of Kouritzin and Long (2002) and Kouritzin, Long and Sun (2002) have enabled the creation of prototype stochastic models of pollution. Methods for efficient simulation of the models have been implemented in a computer code.

Kouritzin, Long, Ballantyne, and H. Chan have created a simulation of a stochastic reaction-diffusion equation which can represent the transport of a pollutant or bacteria through a river or the leaching of a pollutant through a ground water system, including adsorption effects, all in a manner amenable to filtering.

The stochastic models of the spread of pollution developed at MITACS-PINTS have the following general features:

The models employ Markov chain approximations to nonlinear SPDEs representing stochastic reaction-diffusion equations.

The equations include convective forces, as would be found in a flowing water system, and Poisson generating sources to model contamination from sites such as factories, storage ponds and agricultural facilities.

The Markov chain approximations used in the models converge to the exact solution of the stochastic reaction-diffusion equation in both the annealed and the quenched senses.

The approximations provide the basis for further work on applying filtering techniques to track the sources of contaminants given only imperfect, noise corrupted samples at a few locations.

These proofs of concept provide an effective foundation for incorporating novel filtering techniques into different models. These techniques can also be used to model other reactive flows, such as heat diffusion through a substance of varying heat capacity, or heat-activated internal reactions.

ROBUSTNESS AND REGULARITY OF FILTERS

Lucic and Heunis (2002) study signal robustness in the extreme case of singular perturbations with the goal of characterizing the limiting nonlinear filter (if any) as the perturbation parameter tends to zero. The signal arises from a singularly perturbed stochastic differential equation with a small parameter, in the case where the dynamics of the signal are conditioned by the observation process. We show that the nonlinear filter is a solution of a particular measure-valued martingale problem, and then show that the limiting nonlinear filter exists and characterize it completely. The approach is to use solvability of Poisson-type operator equations to construct a limiting measure-valued martingale problem, and use the uniqueness in law results of Lucic and Heunis (2001) to show that this limiting martingale problem is well posed and that its solution corresponds to the limiting nonlinear filter.

Kouritzin and Xiong (2002) study observation robustness to demonstrate the asymptotic correctness of the classical, non-instrumentable continuous-time observations via instrumentable coloured-noise approximations. In particular, we consider the effect of the observations: , where is an Ornstein-Uhlenbeck process, as . In this case, the integrated observation noise converges to Brownian motion and we show that the filter also converges to the classical observation filter. The non-integrated observations can be instrumented, so this result demonstrates that the classical observations are a natural idealized or limiting object. Our result generalizes, in some manner, previous results by Kunita (1993), Mandal and Mandrekar (2000), Gawarecki and Mandrekar (2000), and Bhatt and Karandikar (2001). Our method is to use the Kurtz-Xiong particle approach to derive a FKK-like filtering equation and uniqueness for this equation based upon observations , then we prove tightness for filter distributions, and identify a unique limit using the uniqueness result of Bhatt and Karandikar (1995).

Douglas Blount and Michael Kouritzin have derived Hölder continuity for processes related to the Zakai equation of filtering theory. Blount and Kouritzin have obtained a criterion, which gives Hölder continuity results in Hilbert space for a class of solutions of stochastic evolution equations. The class includes the superstable processes with critical binary branching and Ornstein-Uhlenbeck type SPDEs with a suitable eigenfunction expansion for the drift operator. It should also give regularity results for some types of SPDEs arising from filtering theory. The resulting paper, "Hölder continuity for spatial and path processes via spectral analysis", appeared in Probability Theory and Related Fields and was the subject of an invited talk in a session on stochastic analysis of the AMS meeting held in January 2001 in New Orleans.

**Filtering for Signals in Random Environments**

Michael A. Kouritzin, Hongwei Long, and Wei Sun (2002) consider the nonlinear filtering problem in which the signal to be estimated is a (non-Markov) diffusion in a random environment that evolves in some Euclidean space.

Our motivation comes from the tracking problem of a dinghy lost at sea. When the weather conditions are adverse, the motion of the dinghy will change dramatically due to drifting by random ocean surface wave propagation, rendering the Search and Rescue problem model inappropriate. The dinghy itself would try to undergo the motion described by some SDE but the random wave formation would have the effect of adding random drift; hence, the signal process for the dinghy can be formally described as an Itô process whose drift term contains the gradient of W (W being a locally bounded random field in the Wiener space). Of course, this does not really make sense as a normal SDE but is well defined through Dirichlet form or martingale problem theory. We also allow correlation between the signal and the observation noise, which would be natural from dinghy occupant motion.

It is well known that the long-time behaviour of diffusion in a random environment is much different from that of an ordinary diffusion; note that the drift term is not well defined, since W is a non-differentiable continuous function. Diffusion in random medium is not a Markov process unless one fixes an instance of the random environment. Consequently, it is impossible to construct, within the usual Itô's theory, a stochastic evolution equation for the filtering process of the diffusion in random medium based on noisy observations. However, it can be constructed from the Dirichlet form theory in any finite dimension.

REPRESENTATIONS AND CHAOS METHODS

Although the signal itself is not Markov, it becomes Markov when a single occurrence of the random environment becomes fixed. This results in two semigroups: one produced by the Dirichlet form on a (randomly) weighted L^{2}-space and another on the Banach space of bounded measurable functions. Assuming that the initial signal distribution satisfies a certain condition and we have access to the random environment, we show that there is a unique weak solution to the Zakai equation that remains in the L^{2}-space and is expressible in terms of either semigroup.

In the following, we remove the unrealistic assumption that we can "see" the random environment. We obtain a chaos decomposition for our filtering process, namely we can represent the filtering process as a series in terms of multiple Wiener-Itô integrals, even in the case of correlated noise (i.e. when the observation noise is not independent of the signal). We first obtain the expansion for each fixed random environment. Then, we use classical filtering theory to obtain a Zakai equation for this fixed environment predictive model, from which we can express the unnormalized filtering process as the mild solution to the Zakai equation. Further, we formulate the multiple Wiener integral representation for the unnormalized filtering process. Finally, we show that we can integrate this expansion term by term with respect to the distribution of the random environment. We arrive at the desired unique multiple Wiener integral representation for the filtering process associated with the diffusion in the random environment. No Zakai or Kushner-Stratonovich equation is possible. Our representation formula in Kouritzin, Long and Sun (2002) provides the capability to simulate this filtering process on a computer.
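In one dimension the chaos idea can be made concrete. As a generic illustration (not the representation derived in the paper), the exponential functional exp(W_1 - 1/2) of a Brownian motion W has the chaos expansion Σ_n He_n(W_1)/n!, where He_n are the probabilists' Hermite polynomials, since the n-fold Wiener integral of the constant kernel equals He_n(W_1). A quick numerical check of the truncated series:

```python
import math

def hermite_terms(x, n_terms):
    """Probabilists' Hermite polynomials He_n via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    terms = [1.0]            # He_0
    if n_terms > 1:
        terms.append(x)      # He_1
    h_prev, h = 1.0, x
    for n in range(1, n_terms - 1):
        h_prev, h = h, x * h - n * h_prev
        terms.append(h)
    return terms

def chaos_partial_sum(x, n_terms):
    # Truncated chaos expansion of exp(x - 1/2): sum of He_n(x)/n!
    return sum(t / math.factorial(n) for n, t in enumerate(hermite_terms(x, n_terms)))

x = 1.0  # a sample value of W_1
print(chaos_partial_sum(x, 20), math.exp(x - 0.5))  # the two agree closely
```

The truncated series converges rapidly because the n-th coefficient is deweighted by 1/n!, the same mechanism that makes truncated chaos expansions computationally usable.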

PARTICLE APPROXIMATION

Our practical simulation experience suggests that particle and space discretization methods of implementing filters often work better than chaos methods. Therefore, it is important to come up with a particle system approximation for the conditional distribution of the signal in a random environment given only the observations. The difficulty that arises is that we do not want to introduce an extremely large number of particles to account for the random environment. Instead, we try to "learn" the environment. In particular, we construct a functional or white noise expansion for the random environment in the Dirichlet form, placing all randomness in the coefficients (and not the basis functions). Then, we both truncate the expansion (after a very large number of terms) and employ combined filtering-parameter estimation algorithms to "identify" the random environment coefficients while filtering, or perform functional estimation to estimate all coefficients simultaneously. Inasmuch as we have already discussed the former approach in (c), we only discuss the latter here.

Kouritzin, Sun, and Xiong (2002) investigated combined functional environment estimation and filtering. To justify the anticipation of a solution, we simplify the problem to one dimension and constrain the original signal to [0,2π] by reflecting at the boundaries. These models still make sense through Dirichlet form theory and can be thought of as a formal Skorohod stochastic differential equation. Then, we replace the gradient of W with its trigonometric series, thereby replacing the drift by a trigonometric expansion. (Clearly, since the gradient of a Brownian motion is not a square-integrable function, we cannot expect the trigonometric series for it to be square summable, but an infinite series expansion is still valid by deweighting the coefficients and using Sobolev spaces.) Then, motivated by the method of moments, we assume h is one-to-one, take the observation noise to be independent of the signal, and form a functional estimator which, for appropriately chosen m_n, can be shown to contain consistent (as n → ∞) estimators for all the trigonometric coefficients of the gradient of W. Then, for each n we define an approximate signal and particle filter, show that the approximate signal converges to the actual signal as n → ∞, and use the technique of Budhiraja and Kushner (2002) to prove that as n → ∞ the long-term error of this method converges to zero.
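The exact form of the estimator is not reproduced above, so the sketch below is only a heavily simplified, hypothetical illustration of the underlying idea: recovering a trigonometric drift coefficient by moment matching on discretized increments, assuming (unlike the real problem) that the signal itself is observed directly and the drift has a single known basis function:

```python
import math, random

random.seed(0)
dt, n = 0.01, 200_000
c_true = 1.0                     # assumed true coefficient of sin(x) in the drift
x, num, den = 0.0, 0.0, 0.0
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    dx = c_true * math.sin(x) * dt + dw       # Euler step of dX = c sin(X) dt + dW
    # moment-matching accumulators for the coefficient of sin(x):
    num += math.sin(x) * dx
    den += math.sin(x) ** 2 * dt
    x = (x + dx) % (2 * math.pi)              # keep the state on [0, 2*pi) (toy periodization)
c_hat = num / den
print(c_hat)  # consistent estimate of c_true as n grows
```

The same moment-matching idea extends coefficient by coefficient to a full trigonometric expansion; the paper's estimator additionally has to work through the observation function h rather than the signal itself.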

The drawbacks of this approach are: 1) in practice, you must fix n a priori; and 2) the amount of work between observations also increases to infinity as n → ∞. We are also trying to derive an algorithm that will sample all frequencies infinitely often with the same number of calculations between observations, so that over time we have a good estimator of all frequencies and n need not be fixed. The idea is to expand W itself (rather than its gradient) and estimate the lowest frequencies first twice, then estimate the next lowest frequencies once, repeat both once, and then sample the third lowest frequency group, and so on. Suppose we label the frequency groups of approximately the same size 1, 2, 3, … Then the algorithm would use our functional estimation procedure on the groups in the following order: 1, 1, 2, 1, 1, 2, 3, 1, 1, 2, 1, 1, 2, 3, 4, … The idea here is that the coefficients in the expansion of W converge to zero as the frequencies increase, so lower frequency coefficients are more important, but they cannot be estimated properly without also estimating the higher coefficients.
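The visiting order quoted above can be generated recursively, assuming the pattern continues the way the listed prefix suggests: each time a new frequency group k is introduced, the entire schedule so far is repeated twice and then followed by k. A short sketch:

```python
def sampling_schedule(max_group):
    """Order in which frequency groups are (re)estimated: each lower group
    is revisited geometrically more often than the group above it."""
    seq = [1]
    for g in range(2, max_group + 1):
        seq = seq + seq + [g]   # repeat everything so far, then introduce group g
    return seq

print(sampling_schedule(4))
# matches the order quoted in the text: 1,1,2,1,1,2,3,1,1,2,1,1,2,3,4
```

Under this schedule group 1 is visited 2^(k-1) times by the time group k first appears, so the most important (lowest-frequency) coefficients receive the bulk of the estimation effort while every group is still sampled infinitely often.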

**l. Basic Filtering Theory**

UNIQUENESS FOR THE FILTERING EQUATIONS

There is much interesting mathematics being researched at MITACS-PINTS on the basic mathematical equations of classical nonlinear filtering theory. Such work is important for determining the expected properties of the often-difficult real world problems. The first classical theoretical problem is that of uniqueness. Uniqueness of the filtering equations provides an important tool in establishing convergence and rates of convergence for computer workable approximations.

In the simplest classical setting, where the signal and observation noise are independent, Szpirglas (1978) established pathwise uniqueness for the Kushner-Stratonovich and the Duncan-Mortensen-Zakai equations, which, together with the Yamada-Watanabe argument, also provides distributional uniqueness. His results do not apply in the correlated or feedback case. Lucic and Heunis (2001) consider uniqueness properties of the nonlinear filter equations, both normalized and unnormalized, for a nonlinear filtering problem in which the signal is conditioned by the observation process. This situation is typically encountered in stochastic feedback control systems, in which the output is fed back to the input. Such feedback systems arise very commonly in the applications of nonlinear filtering to guidance and control problems in aerospace engineering.

Our particular interest is in the uniqueness in law property for the filter equations because of its applicability to the singular perturbation robustness problem discussed above. Although uniqueness in law for the nonlinear filter equations is well known in the case where the signal and observation noise are independent, it was not previously established for the case where the signal is conditioned by the observations, so a basically new approach was required. We show uniqueness in law under rather mild restrictions for both the normalized and unnormalized filter equations. As a by-product, we also get pathwise uniqueness for the unnormalized filter equation. Our main tools were the uniqueness for measure-valued evolution results obtained by Bhatt and Karandikar as well as the Yamada-Watanabe argument. It should be mentioned that the nice uniqueness results of Kurtz and Ocone (1988) could not be used to deduce distributional uniqueness since Kurtz and Ocone's results only apply on probability spaces rich enough to contain the signal, observations and filter.

APPLICABILITY OF THE FILTERING EQUATIONS

The uniqueness principle established by Kurtz and Ocone (1988) says that under very general conditions, any solution to the filtering equations on a rich enough probability space must be the conditional distribution of the signal given the observations (i.e. the desired distribution). However, the conditions under which there are known solutions were far more severe. It has been known for more than thirty years that there are solutions to the basic filtering equations under the mean-square finite energy condition and right continuity. This is a much more stringent condition than those required by Kurtz and Ocone (1988) for uniqueness. Hence, there was an open problem, which we settled in Kouritzin and Long (2002), to determine existence (i.e. whether the conditional distributions satisfy the Duncan-Mortensen-Zakai and the Kushner-Stratonovich equations) under conditions as general as those used by Kurtz and Ocone for uniqueness. Indeed, our conditions are milder than those used by Kurtz and Ocone, and demonstrate, for example, that the filtering equations continue to hold when the sensor function is linear and the signal has heavy tails like a Lévy process.

LARGE DEVIATION PRINCIPLE FOR OPTIMAL FILTERING

When the signal-to-noise ratio goes to zero, Xiong (2002) derived a large deviation principle for the optimal filter, where the signal and the observation processes take values in conuclear spaces. The approach follows the framework established in Xiong (1996). The key is the verification of exponential tightness for the optimal filtering process and exponential continuity of the coefficients in the Zakai equation.

back to top

**Sensor Vibration Neutralization**

One of Lockheed Martin Naval Electronics and Surveillance Systems' and Lockheed Martin Canada's most vital concerns is to improve the onboard performance of the Canadian Wescam sensor. This sensor has been installed in several U.S. military aircraft and would be installed in many more if its performance were more satisfactory. The most practical methods of improving performance are via software-implemented mathematical algorithms. There are three separate problem areas in which nonlinear filtering can improve the performance of the Wescam filter. In simplification, they are: initialization, remote pod neutralization, and low observable tracking of vessels. We have essentially provided our sponsors with their most promising methods of improving the low observable tracking performance of the Wescam sensor through our work on SERP and REST filters. Our convolutional filter appears ideal for problems like remote pod movement neutralization. This Wescam filter enhancement is part of the realistic and quite technical motivation for our previously mentioned Search and Rescue problem.

To explain the Pod Tracking problem further, we mention that the sensor must be mounted externally to be effective, and this external sensor pod is subject to random vibrations and forces. Our goal is to neutralize the effect of the pod's movement on the first sensor's readings. This is done by a filtering algorithm and a second sensor in the craft, whose only task is to estimate the pod's motion. Depending upon the choice of this second sensor, different observation models are appropriate. Due to the distances involved, small errors in the pod location estimate have a dramatically amplified effect on the first sensor's readings. Therefore, we wanted to avoid the extra approximations of particle or space-discretization methods and use IDEX. Currently, we believe we have a workable explicit solution for the stochastic movement of the pod in terms of a small number of multiple Wiener integrals, in the sense that the corresponding Stratonovich equation should adequately model the pod movement and the explicit solution is computer workable. Most likely, we will have to modify our solution later to account for the pod behaviour more fully. However, the explicit solution method has the highly desirable properties that the solution is forced to stay on the manifold even in the presence of numeric error, and the evolution is in two dimensions (the dimension of the manifold), not three as direct particle or space discretization methods would require.

The mathematics behind this solution is reasonably involved but general enough to accommodate more general manifolds. Basically, we prove results such as: there are explicit solutions involving one- and two-parameter stochastic integrals if and only if the vector fields corresponding to the columns of the dispersion coefficient in the Stratonovich equation are two-step nilpotent and the "drift" vector field satisfies some more general condition. Then, we study the class of such explicit solutions and show that it is large and contains many suitable models for this problem. In particular, we show that there are solutions that stay on the desired manifold and have the desired drift and diffusion properties on this manifold. The work is being done by Kouritzin, Remillard, and Van Weelden. Kouritzin and Wiersma are implementing the solution.
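The "stays on the manifold even in the presence of numeric error" property can be illustrated with a toy numerical experiment (this is not the IDEX solution or the patented method, just a generic sketch): if each Brownian increment enters the state through an exact rotation, i.e. the exponential of a skew-symmetric increment, the state cannot leave the unit sphere no matter how many steps accumulate, whereas a naive Euler step in the ambient space drifts off it.

```python
import math, random

def rot_x(p, a):
    # rotation about the x-axis by angle a (an isometry of R^3)
    x, y, z = p
    return (x, y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a))

def rot_y(p, a):
    # rotation about the y-axis by angle a
    x, y, z = p
    return (x * math.cos(a) + z * math.sin(a), y, -x * math.sin(a) + z * math.cos(a))

def norm(p):
    return math.sqrt(sum(c * c for c in p))

random.seed(1)
dt = 0.01
p = (0.0, 0.0, 1.0)   # initial orientation on the unit sphere
for _ in range(10_000):
    # each Brownian increment is applied as an exact rotation,
    # so every step is norm-preserving up to floating-point round-off
    p = rot_x(p, random.gauss(0.0, math.sqrt(dt)))
    p = rot_y(p, random.gauss(0.0, math.sqrt(dt)))

print(abs(norm(p) - 1.0))  # deviation from the sphere stays at round-off level
```

The evolution here is effectively two-dimensional (the sphere's dimension), echoing the dimension-reduction advantage claimed for the explicit solution over direct three-dimensional discretization.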

We are also talking with Lockheed about the possibility of combining the Pod Tracking problem with the Search and Rescue problem to better reflect reality and produce one large problem.

back to top

**Communication Networks Applications**

Our new corporate partner, Optovation of Ottawa, is paying us to investigate fibre optic signal properties. In particular, they are building a product that tests the quality of optic signals at various frequencies, using a lot of proprietary hardware and software. One of the most difficult problems left for them is the accurate simultaneous estimation of optical signal to noise ratios, peak powers, and bit rates for all the carrier frequencies on a fibre. In the Ordered Multi-Target Tracking problem, we are solving a simplified version of this problem for them. In early September 2002, Hailes and Kouritzin visited Optovation's lab to learn the characteristics of this problem and collect some data. Since then, we have built a first model for their problem including the targets, their interactions, the optical noise, and the electrical noise. The problem and model are naturally addressed by filtering theory. However, due to the huge size of this problem (up to eighty interacting targets), we will have to make significant advances in multi-target tracking and applying REST to such tracking in order to help them. Even with knowledge of problem specific simplifications that can be made, the problem seems too large for anything but possibly REST. There is also a necessary parameter estimation that must be done to characterize the optical noise in this problem. We may try to do this within REST or else using our recursive-combined algorithm.

Michael Kouritzin established a new class of models described by a nonlinear stochastic parabolic equation with arbitrary càdlàg noise and arbitrary order elliptic operators. The noise sources in these models include long-range dependent processes, heavy-tailed Lévy processes, and composite processes like iterated Brownian motion. It is believed that these models are a reasonable starting point for modeling such events as the flow of information through a complicated communication network. Filtering would then be used to determine more "internal" quantities.

back to top

**Mathematical Finance Applications **

We hope to find new corporate sponsors in the mathematical finance and communication industries. Here, such data as a collection of stock prices or the number of jobs at various nodes of a network usually form the observation process, and long-range dependent or heavy-tailed models for observation processes are used for fidelity to reality. Thus, the long-range dependent processes are not Markov. However, as shown in a recent paper by M.L. Kleptsyna, A. Le Breton and M.C. Roubaud, entitled "An elementary approach to filtering in systems with fractional Brownian observation noise", there are still methods of constructing filters in certain long-range dependent observation settings.

Application of filtering and particle systems to mathematical finance is a growing area. Hence, it seems natural to apply our algorithms to this area in the hope of attracting an industry partner. In the volatility tracking problem, we model and track stochastic volatility together with five unknown parameters using IDEX. In fact, due to the nature of the model, we can compute the maximum likelihood parameters from the filter. Despite the large number of parameters to estimate, the model still performed very well when applied to real stock data. We published some of the results in Kouritzin, Remillard and C. Chan (2001) and believe that this technique could benefit an industry partner.

When pricing options using natural random interest rate and stock models, the future price must be brought back to the current time using a random weighting. Several books suggest constructing option prices using straight Monte Carlo methods. However, this random weighting means that a resampled particle system like SERP should outperform straight Monte Carlo pricing. We have applied SERP to a simple (single stock, bank account) model where the bank account interest rate depends on the stock price. A minor modification of SERP is then used to price all pathspace and terminal options simultaneously by the creation of a "pricing measure". The results were impressive, requiring far fewer particles than straight Monte Carlo. We look forward to testing SERP on the multi-stock scenario. In November and December, Jarett Hailes and Mike Kouritzin will meet with contacts from the finance sector in an attempt to secure a corporate partner and justify further work in this area.
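For context, the straight Monte Carlo baseline that SERP is compared against can be sketched as follows. This is a minimal single-stock model with an assumed, hypothetical stock-dependent short rate; the random weighting is exactly the path-dependent discount factor, and SERP's resampling and pricing measure are not reproduced here.

```python
import math, random

random.seed(0)
s0, strike, sigma = 100.0, 100.0, 0.2
t_mat, n_steps, n_paths = 1.0, 50, 20_000
dt = t_mat / n_steps

def short_rate(s):
    # assumed toy feedback: the bank-account rate depends on the stock price
    return 0.02 + 0.03 * min(s / s0, 2.0)

payoff_sum = 0.0
for _ in range(n_paths):
    s, integral_r = s0, 0.0
    for _ in range(n_steps):
        r = short_rate(s)
        integral_r += r * dt                  # accumulate the random discount exponent
        z = random.gauss(0.0, 1.0)
        s *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
    # the "random weighting": discount each path's payoff by its own rate path
    payoff_sum += math.exp(-integral_r) * max(s - strike, 0.0)

price = payoff_sum / n_paths
print(price)  # straight Monte Carlo estimate of the call price
```

Because each path carries its own discount weight, many simulated paths contribute little to the average; this is the inefficiency that a resampled particle system such as SERP is designed to reduce.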

back to top

**Robotic Lighting Control Application**

Acoustic Positioning Research (APR) was so impressed with our algorithms and the possibility of improving their product that they joined us as a corporate sponsor in November 2000. Will Bauer, president of APR, suggested that incorporating our algorithm into APR's product may increase the market for their product by fifty percent. He has also agreed to pay a small royalty to our Centre on each sale of their enhanced product to reflect the fact that they will benefit from the previous work of the Centre. The first problem is to direct lights and sound effects automatically based upon inaudible high frequency sound time measurements. In the past, they have suffered from noisy measurements and poor prediction methods. We now have simulations solving this initial one-performer problem using MIBR, SERP, and REST. The results are impressive. The next large steps are to allow multiple performers and for parameters to be identified in the models.

The first APR problem was to predict the future position of a performer using timing measurements as described in the Performer Prediction problem. This problem has now been solved effectively using SERP and APR is incorporating it into their full system. Our solution can improve performance of the many systems that have been sold and are in the field.

Due to the current low cost of video equipment, APR is now also interested in tracking and predicting performers using digital cameras located on the perimeter of the stage. A small pause in our collaboration was agreed upon due to Kouritzin's sabbatical and the time required to incorporate the first solution. However, both APR and PINTS are eager to restart collaboration on the second problem, striving to predict multiple performers using the smallest number of cameras, by the end of 2003.

back to top

**Model Approximation and Robustness**

**MARKOV CHAIN APPROXIMATIONS FOR SPATIAL SIGNALS**

Continuous and discrete Markov chain approximations can be important in predictive modeling and computationally efficient filtering for a number of reasons. To dramatically increase the class of signals that can be filtered by our various methods, we have concerned ourselves with the Markov chain approximation of the extremely complex signals arising in multiple target tracking and other spatial filtering problems such as pollution or bacteria tracking within a sheet of water. Then we can substitute the computer-tractable approximation for the real signal and rely on robustness results of, for example, Kushner or Bhatt and Karandikar (1999, 2002) to justify our use of a simplified model.

With regard to constructing a general, powerful method of approximating complicated signals with a Markov chain, Kouritzin and Long have made crucial innovations to advance the existing approximations of Ludwig Arnold, Peter Kotelenez, and Douglas Blount. These innovations allow for more general driving noise sources, more computationally efficient implementations of novel Markov chain approximations, and a larger selection of spatial processes that can be so approximated. The innovations include enlarging the class of elliptic operators and the class of reaction functions that can be used in Markov chain approximations within stochastic reaction-diffusion equations, as well as improvements that allow for the use of much slower rates within these approximations to reduce the computational requirements. Kouritzin and Long's alterations dramatically slow Markov chain state change rates, often yielding a one hundred-fold increase in simulation speed over the previous version of the method.

In an attempt to attract Stantec or other potential industry partners with interests in environmental monitoring, the aforementioned innovations were first incorporated into a stochastic reaction-diffusion equation motivated by pollution distribution. (See the 1995 book by Kallianpur and Xiong for background information on this problem.) Kouritzin and Long (2002) have established the convergence of Markov chain approximations to stochastic reaction-diffusion equations in both the quenched and annealed senses. Their first convergence result, a quenched law of large numbers, establishes convergence in probability of the Markov chains to the pathwise unique mild solution of the stochastic reaction-diffusion equation for each fixed path of the driving Poisson measure source. Their second result, an annealed law of large numbers, establishes convergence in probability of the Markov chains to the mild solution of the stochastic reaction-diffusion equation while considering the Poisson source as a random medium for the Markov chains. These results are vital for application of filtering theory to the pollution dispersion-tracking problem, as they can be combined with the robustness results of Kushner or Bhatt and Karandikar and the aforementioned particle filtering methods to create a computer workable algorithm.

In more detail, Kouritzin and Long considered the stochastic model of ground water pollution, which mathematically can be written with a stochastic reaction diffusion equation. In the context of simulating the transport of a chemical or bacterial contaminant through a sheet of water, they extended a well-established method of approximating reaction-diffusion equations with Markov chains by allowing convection, certain Poisson measure driving sources and a larger class of reaction functions. This work applies to Lockheed Martin's interest in detecting and classifying oil slicks and vessel traces or wakes.

A weighted L^{2} Hilbert space was chosen to symmetrize the elliptic operator in the stochastic reaction-diffusion equation, and the existence of and convergence to pathwise unique mild solutions of the stochastic reaction-diffusion equation was considered. The region [0,L_{1}] × [0,L_{2}] was divided into L_{1}N × L_{2}N cells, and the Markov chain approximation on these cells was analyzed as N → ∞.

The particles in cells evolve in time according to births and deaths from reaction, random walks from diffusion and drift, and some area-dependent births from the Poisson noise sources. In this stochastic particle model, the formalism allows for two kinds of randomness: the external fluctuation coming from the Poisson driving sources and the internal fluctuation from the reaction and drift-diffusion on the particle level. Independent standard Poisson processes defined on another probability space were used to construct the Markov chains by the random time changes method.
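The cell-based particle mechanism can be sketched with a simple event-driven (Gillespie-style) simulation. This is a one-dimensional toy with assumed rates, not the implemented two-dimensional code: particles hop between neighbouring cells at a diffusion rate, die at a quadratic reaction rate, and are injected into one cell by a Poisson source.

```python
import random

random.seed(0)
n_cells = 20
cells = [50] * n_cells                           # initial particle counts per cell
d_rate, kill_rate, src_rate = 1.0, 0.002, 5.0    # assumed toy rates
t, t_end = 0.0, 5.0

while t < t_end:
    hop = [d_rate * c for c in cells]                # random-walk (diffusion) rates
    die = [kill_rate * c * (c - 1) for c in cells]   # quadratic reaction death rates
    total = sum(hop) + sum(die) + src_rate
    t += random.expovariate(total)                   # exponential waiting time to next event
    u = random.uniform(0.0, total)
    for i in range(n_cells):
        if u < hop[i]:                               # a particle hops to a neighbour
            cells[i] -= 1
            j = i + random.choice((-1, 1))
            if 0 <= j < n_cells:                     # hops off the grid are absorbed
                cells[j] += 1
            break
        u -= hop[i]
        if u < die[i]:                               # pairwise reaction removes a particle
            cells[i] -= 1
            break
        u -= die[i]
    else:
        cells[0] += 1                                # Poisson source injects into cell 0

print(sum(cells), min(cells))  # total mass and a check that counts stay non-negative
```

Rescaling the counts by the cell size and letting the grid refine is, in spirit, how the Markov chains above converge to the mild solution of the stochastic reaction-diffusion equation.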

In a second work on the stochastic model of water pollution, which mathematically can be written as a stochastic partial differential equation driven by Poisson measure noise, Kouritzin, Long and Sun (2002) establish a more general annealed law of large numbers. It shows convergence in probability of our Markov chains to the solution of the stochastic reaction-diffusion equation while considering the Poisson source as a random medium for the Markov chains. Our proof of the main result differs substantially from that of Kouritzin and Long (2002), which used the weak convergence method. Here, we directly apply the Cauchy criterion (convergence in probability) to our Markov chains and utilize the nice regularity of the Green's function with a delicate iteration technique. (The usual Gronwall's Lemma does not work in our case.)

**STOCHASTIC MODELS OF THE SPREAD OF POLLUTION**

The results of Kouritzin and Long (2002) and Kouritzin, Long and Sun (2002) have enabled the creation of prototype stochastic models of pollution. Methods for efficient simulation of the models have been implemented in a computer code.

Kouritzin, Long, Ballantyne, and H. Chan have created a simulation of a stochastic reaction-diffusion equation which can represent the transport of a pollutant or bacteria through a river or the leaching of a pollutant through a ground water system, including adsorption effects, all in a manner amenable to filtering.

The stochastic models of the spread of pollution developed at MITACS-PINTS have the following general features:

The models employ Markov chain approximations to nonlinear SPDEs representing stochastic reaction-diffusion equations.

The equations include convective forces, as would be found in a flowing water system, and Poisson generating sources to model contamination from sites such as factories, storage ponds and agricultural facilities.

The Markov chain approximations used in the models converge to the exact solution of the stochastic reaction-diffusion equation in both the annealed and the quenched senses.

The approximations provide the basis for further work on applying filtering techniques to track the sources of contaminants given only imperfect, noise corrupted samples at a few locations.

These proofs of concept provide an effective foundation for incorporating novel filtering techniques into different models. These techniques are also used to model other reactive flows, such as heat diffusion through a substance of varying heat capacity, or heat-activated internal reaction.

**ROBUSTNESS AND REGULARITY OF FILTERS**

Lucic and Heunis (2002) study signal robustness in the extreme case of singular perturbations with the goal of characterizing the limiting nonlinear filter (if any) as the perturbation parameter tends to zero. The signal arises from a singularly perturbed stochastic differential equation with a small parameter, in the case where the dynamics of the signal are conditioned by the observation process. We show that the nonlinear filter is a solution of a particular measure-valued martingale problem, and then show that the limiting nonlinear filter exists and characterize it completely. The approach is to use solvability of Poisson-type operator equations to construct a limiting measure-valued martingale problem, and use the uniqueness in law results of Lucic and Heunis (2001) to show that this limiting martingale problem is well posed and that its solution corresponds to the limiting nonlinear filter.

Kouritzin and Xiong (2002) study observation robustness to demonstrate the asymptotic correctness of the classical, non-instrumentable continuous-time observations via instrumentable coloured-noise approximations. In particular, we consider observations whose noise is an Ornstein-Uhlenbeck process and study the limit as the approximation parameter tends to zero. In this limit, the integrated observation noise converges to Brownian motion, and we show that the filter also converges to the classical-observation filter. The non-integrated observations can be instrumented, so this result demonstrates that the classical observations are a natural idealized or limiting object. Our result generalizes, in some manner, previous results by Kunita (1993), Mandal and Mandrekar (2000), Gawarecki and Mandrekar (2000), and Bhatt and Karandikar (2001). Our method is to use the Kurtz-Xiong particle approach to derive an FKK-like filtering equation and uniqueness for this equation based upon the coloured-noise observations; we then prove tightness for the filter distributions and identify a unique limit using the uniqueness result of Bhatt and Karandikar (1995).

Douglas Blount and Michael Kouritzin have derived Hölder continuity for processes related to the Zakai equation of filtering theory. Blount and Kouritzin have obtained a criterion which gives Hölder continuity results in Hilbert space for a class of solutions of stochastic evolution equations. The class includes the superstable processes with critical binary branching and Ornstein-Uhlenbeck type SPDEs with a suitable eigenfunction expansion for the drift operator. It should also give regularity results for some types of SPDEs arising from filtering theory. The resulting paper, "Hölder continuity for spatial and path processes via spectral analysis", appeared in Probability Theory and Related Fields and was the subject of an invited talk in a session on stochastic analysis at the AMS meeting held in January 2001 in New Orleans.

back to top

**Filtering for Signals in Random Environments**

Michael A. Kouritzin, Hongwei Long, and Wei Sun (2002) consider the nonlinear filtering problem in which the signal to be estimated is a (non-Markov) diffusion in a random environment that evolves in some Euclidean space.

Our motivation comes from the tracking problem of a dinghy lost at sea. When the weather conditions are adverse, the motion of the dinghy will change dramatically due to drifting caused by random ocean surface wave propagation, rendering the Search and Rescue problem model inappropriate. The dinghy itself would try to undergo the motion described by some SDE, but the random wave formation has the effect of adding a random drift; hence, the signal process for the dinghy can be formally described as an Itô process whose drift term contains the gradient of W (W being a locally bounded random field on Wiener space). Of course, this does not really make sense as a normal SDE, but it is well defined through Dirichlet form or martingale problem theory. We also allow correlation between the signal and the observation noise, which would be natural from dinghy occupant motion.

It is well known that the long-time behaviour of diffusion in a random environment is much different from that of an ordinary diffusion; note that the drift term is not well defined, since W is a non-differentiable continuous function. Diffusion in random medium is not a Markov process unless one fixes an instance of the random environment. Consequently, it is impossible to construct, within the usual Itô's theory, a stochastic evolution equation for the filtering process of the diffusion in random medium based on noisy observations. However, it can be constructed from the Dirichlet form theory in any finite dimension.
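Although the drift cannot be defined classically, a mollified environment gives a numerical caricature of such a diffusion. The sketch below freezes one realization of the environment, smooths a Brownian path W on a grid so that a finite-difference "gradient" exists, and runs Euler-Maruyama with reflection; every numerical choice here (grid, mollifier, step sizes) is an invented illustration, not the Dirichlet-form construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# One realization of the random environment: a Brownian path W on a grid,
# mollified by a moving average so a finite-difference gradient exists.
L, h = 20.0, 0.01
grid = np.arange(0, L + h, h)
W = np.cumsum(rng.normal(0.0, np.sqrt(h), size=grid.size))
kernel = np.ones(25) / 25.0
W_smooth = np.convolve(W, kernel, mode="same")
dW = np.gradient(W_smooth, h)          # stand-in for the ill-defined drift

def drift(x):
    # Look up -(1/2) * dW at the nearest grid point (Brox-diffusion style).
    i = np.clip((x / h).astype(int), 0, grid.size - 1)
    return -0.5 * dW[i]

# Euler-Maruyama for the diffusion in this (frozen) environment,
# reflected at the ends of the grid.
dt, n_steps, n_paths = 0.001, 5000, 200
x = np.full(n_paths, L / 2)
for _ in range(n_steps):
    x = x + drift(x) * dt + np.sqrt(dt) * rng.normal(size=n_paths)
    x = np.abs(x)                      # reflect at 0
    x = np.where(x > L, 2 * L - x, x)  # reflect at L
```

Re-running with a different environment seed produces qualitatively different trapping behaviour, which is the point: the signal law depends on the realization of W, so the filter cannot treat the signal as an ordinary Markov diffusion.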

**REPRESENTATIONS AND CHAOS METHODS**

Although the signal itself is not Markov, it becomes Markov when a single occurrence of the random environment becomes fixed. This results in two semigroups: one produced by the Dirichlet form on a (randomly) weighted L^{2}-space and another on the Banach space of bounded measurable functions. Assuming that the initial signal distribution satisfies a certain condition and we have access to the random environment, we show that there is a unique weak solution to the Zakai equation that remains in the L^{2}-space and is expressible in terms of either semigroup.

In the following, we remove the unrealistic assumption that we can "see" the random environment. We obtain a chaos decomposition for our filtering process; namely, we can represent the filtering process as a series of multiple Wiener-Itô integrals, even in the case of correlated noise (i.e. when the observation noise is not independent of the signal). We first obtain the expansion for each fixed random environment: we use classical filtering theory to obtain a Zakai equation for this fixed-environment predictive model, express the unnormalized filtering process as the mild solution to this Zakai equation, and then formulate the multiple Wiener integral representation for the unnormalized filtering process. Finally, we show that we can integrate this expansion term by term with respect to the distribution of the random environment, arriving at the desired unique multiple Wiener integral representation for the filtering process associated with the diffusion in the random environment. No Zakai or Kushner-Stratonovich equation is possible. Our representation formula in Kouritzin, Long and Sun (2002) provides the capability to simulate this filtering process on a computer.

**PARTICLE APPROXIMATION**

Our practical simulation experience suggests that particle and space-discretization methods of implementing filters often work better than chaos methods. Therefore, it is important to come up with a particle system approximation for the conditional distribution of the signal in a random environment given only the observations. The difficulty that arises is that we do not want to introduce an extremely large number of particles to account for the random environment. Instead, we try to "learn" the environment. In particular, we construct a functional or white noise expansion for the random environment in the Dirichlet form, placing all randomness in the coefficients (and not the basis functions). Then, we truncate the expansion (after a very large number of terms) and either employ combined filtering-parameter estimation algorithms to "identify" the random environment coefficients while filtering or perform functional estimation to estimate all coefficients simultaneously. Inasmuch as we have already discussed the former approach in (c), we only discuss the latter here.

Kouritzin, Sun, and Xiong (2002) investigated combined functional environment estimation and filtering. To make a solution tractable, we simplify the problem to one dimension and constrain the original signal to [0,2π] by reflecting at the boundaries. These models still make sense through Dirichlet form theory and can be thought of as formal Skorohod stochastic differential equations. Then, we replace the gradient of W with its trigonometric series; in this manner, the drift is also replaced by a trigonometric expansion. (Clearly, since the gradient of a Brownian motion is not a square-integrable function, we cannot expect the trigonometric series for it to be square summable, but an infinite series expansion is still valid by deweighting the coefficients and using Sobolev spaces.) Then, motivated by the method of moments, we assume h is one-to-one, take the observation noise to be independent of the signal, and form a functional estimator of the form:

which, for appropriately chosen m_n, can be shown to contain consistent (as n → ∞) estimators for all the coefficients of our trigonometric expansion for the gradient of W. Then, for each n we define an approximate signal and particle filter, show that the approximate signal converges to the actual signal as n → ∞, and use the technique of Budhiraja and Kushner (2002) to prove that the long-term error of this method converges to zero as n → ∞.
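The paper's moment estimator is not reproduced above, so the sketch below substitutes a generic analogue: it recovers the leading trigonometric coefficients of a known, smooth stand-in drift on [0, 2π] by least-squares projection of the normalized increments of a simulated reflected diffusion onto the trigonometric basis. The drift, its coefficients, and all tuning values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# "True" drift with a short trigonometric expansion, a smooth stand-in
# for the mollified gradient of W:  b(x) = 1.0*cos(x) - 0.5*sin(2x).
def b(x):
    return np.cos(x) - 0.5 * np.sin(2 * x)

# Simulate the reflected diffusion dX = b(X) dt + dB on [0, 2*pi].
dt, n_steps, n_paths = 0.01, 2000, 100
L = 2 * np.pi
x = rng.uniform(0, L, size=n_paths)
xs, dxs = [], []
for _ in range(n_steps):
    x_new = x + b(x) * dt + np.sqrt(dt) * rng.normal(size=n_paths)
    xs.append(x)
    dxs.append(x_new - x)              # pre-reflection increment: b(x)dt + noise
    x = np.where(x_new < 0, -x_new, x_new)
    x = np.where(x > L, 2 * L - x, x)  # reflect at the boundaries

X = np.concatenate(xs)
Y = np.concatenate(dxs) / dt           # noisy pointwise drift values

# Least-squares projection onto the first trigonometric basis functions;
# expected coefficients are roughly [1.0, 0.0, 0.0, -0.5].
design = np.column_stack([np.cos(X), np.sin(X), np.cos(2 * X), np.sin(2 * X)])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
```

The per-sample noise in Y has standard deviation 1/sqrt(dt), so individual drift readings are nearly useless; consistency comes entirely from averaging over many samples, which mirrors why the text's estimators are only consistent as n → ∞.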

The drawbacks of this approach are: 1) in practice, you must fix n a priori; and 2) the amount of work between observations also increases to infinity as n → ∞. We are also trying to derive an algorithm that will sample all frequencies infinitely often with the same number of calculations between observations, so that over time we have a good estimator of all frequencies and n need not be fixed. The idea is to expand W (not its gradient) and estimate the lowest frequencies first twice, then estimate the next lowest frequencies once, repeat both once, then sample the third lowest frequency group, etc. Suppose we label the frequency groups of approximately the same size 1, 2, 3, … Then, the algorithm would use our functional estimation procedure on the groups in the following order: 1,1,2,1,1,2,3,1,1,2,1,1,2,3,4,… The idea here is that the coefficients for W converge to zero as the frequencies increase, so lower frequency coefficients are more important, but they cannot be estimated properly without also estimating the higher coefficients.
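This sampling order has a simple recursive description: the schedule over the first k groups is two copies of the schedule over the first k-1 groups, followed by a single visit to group k. A minimal sketch:

```python
def schedule(k):
    """Order in which frequency groups 1..k are re-estimated:
    S(1) = [1],  S(k) = S(k-1) + S(k-1) + [k]."""
    if k == 1:
        return [1]
    prev = schedule(k - 1)
    return prev + prev + [k]

# The first 15 visits reproduce the order quoted in the text.
order = schedule(4)
```

In schedule(k), group j is visited 2^(k-j) times, so the lower-frequency (more important) coefficient groups are re-estimated geometrically more often while every group is still visited infinitely often as the schedule is extended.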

back to top

**Basic Filtering Theory**

**UNIQUENESS FOR THE FILTERING EQUATIONS**

There is much interesting mathematics being researched at MITACS-PINTS on the basic mathematical equations of classical nonlinear filtering theory. Such work is important for determining the expected properties of the often-difficult real-world problems. The first classical theoretical problem is that of uniqueness. Uniqueness of the filtering equations provides an important tool in establishing convergence, and rates of convergence, for computer-workable approximations.

In the simplest classical setting, where the signal and observation noise are independent, Szpirglas (1978) established pathwise uniqueness for the Kushner-Stratonovich and the Duncan-Mortensen-Zakai equations, which, together with the Yamada-Watanabe argument, also provides distributional uniqueness. His results do not apply in the correlated or feedback case. Lucic and Heunis (2001) consider uniqueness properties of the nonlinear filter equations, both normalized and unnormalized, for a nonlinear filtering problem in which the signal is conditioned by the observation process. This situation is typically encountered in stochastic feedback control systems, in which the output is fed back to the input. Such feedback systems arise very commonly in the applications of nonlinear filtering to guidance and control problems in aerospace engineering.

Our particular interest is in the uniqueness in law property for the filter equations because of its applicability to the singular perturbation robustness problem discussed above. Although uniqueness in law for the nonlinear filter equations is well known in the case where the signal and observation noise are independent, it was not previously established for the case where the signal is conditioned by the observations, so a basically new approach was required. We show uniqueness in law under rather mild restrictions for both the normalized and unnormalized filter equations. As a by-product, we also get pathwise uniqueness for the unnormalized filter equation. Our main tools were the uniqueness results for measure-valued evolutions obtained by Bhatt and Karandikar as well as the Yamada-Watanabe argument. It should be mentioned that the nice uniqueness results of Kurtz and Ocone (1988) could not be used to deduce distributional uniqueness, since Kurtz and Ocone's results only apply on probability spaces rich enough to contain the signal, observations, and filter.

**APPLICABILITY OF THE FILTERING EQUATIONS**

The uniqueness principle established by Kurtz and Ocone (1988) says that, under very general conditions, any solution to the filtering equations on a rich enough probability space must be the conditional distribution of the signal given the observations (i.e. the desired distribution). However, the conditions under which solutions were known to exist were far more severe. It has been known for more than thirty years that there are solutions to the basic filtering equations under the mean-square finite-energy condition and right continuity. This is a much more stringent condition than those required by Kurtz and Ocone (1988) for uniqueness. Hence, there was an open problem, which we just settled in Kouritzin and Long (2002), to determine existence (i.e. whether the conditional distributions satisfy the Duncan-Mortensen-Zakai and Kushner-Stratonovich equations) under conditions as general as those used by Kurtz and Ocone for uniqueness. Indeed, our conditions are milder than those used by Kurtz and Ocone and demonstrate, for example, that the filtering equations continue to hold when the sensor function is linear and the signal has heavy tails like a Lévy process.

**LARGE DEVIATION PRINCIPLE FOR OPTIMAL FILTERING**

Xiong (2002) derived a large deviation principle for the optimal filter, as the signal-to-noise ratio goes to zero, in the setting where the signal and the observation processes take values in conuclear spaces. The approach follows the framework established in Xiong (1996). The key is the verification of exponential tightness for the optimal filtering process and exponential continuity of the coefficients in the Zakai equation.

back to top

**Combined Parameter-State Estimation Algorithms**

In many cases, we do not know the model of the signal exactly; the model may have unknown parameters which cannot be determined a priori. These parameters can be incorporated into the filtering problem by increasing the dimension of the signal, and this can be an effective strategy when used with REST (and its automated refining-grid property). However, in conjunction with other methods, it is cumbersome and slow. In the sequel, we refer to this technique as parameter inclusion filtering.
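A minimal sketch of parameter inclusion filtering, using a bootstrap particle filter on an invented linear-Gaussian toy model (this is not REST): each particle carries a state together with its own copy of the unknown parameter, and a small artificial jitter is added to the parameter copies after resampling, a standard workaround for the sample impoverishment that makes the plain approach cumbersome:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy signal/observation model with unknown drift parameter theta_true:
#   x_{k+1} = theta * x_k + w_k,   y_k = x_k + v_k.
theta_true, sig_w, sig_v, T = 0.8, 0.5, 0.5, 300
x_true, ys = 0.0, []
for _ in range(T):
    x_true = theta_true * x_true + sig_w * rng.normal()
    ys.append(x_true + sig_v * rng.normal())

# Parameter inclusion: each particle is a pair (x, theta); theta is static
# apart from a small jitter that fights resampling degeneracy.
N = 2000
px = rng.normal(0.0, 1.0, size=N)
ptheta = rng.uniform(0.0, 1.0, size=N)
for y in ys:
    px = ptheta * px + sig_w * rng.normal(size=N)      # propagate states
    w = np.exp(-0.5 * ((y - px) / sig_v) ** 2)         # observation likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                   # resample both components
    px = px[idx]
    ptheta = ptheta[idx] + 0.01 * rng.normal(size=N)   # jitter the parameter

theta_est = ptheta.mean()                              # posterior mean of theta
```

Without the jitter, resampling quickly collapses the parameter cloud onto a handful of initial draws; the jitter keeps the cloud alive at the price of slightly inflating the posterior spread, which illustrates why the enlarged-signal approach is delicate outside REST's refining-grid setting.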

A known strategy for estimating parameters within filtering problems is to apply the Expectation-Maximization (EM) algorithm between observations to estimate the parameters, as in Dembo and Zeitouni (1986). However, this relies on an iterative convergence between observations with no fixed upper bound on the computations required, it is only known to apply to a specific class of models, and a general consistency result for the parameter estimates has not been proven. Another interesting parameter estimation strategy involving the EM algorithm is the Hidden Markov Model (HMM) approach (see e.g. Elliott (2002)). Here, in a very limited setting, it is possible to come up with equations and an expanded filter that provide the solution to the combined maximum-likelihood parameter estimation and filtering problem. The parameter estimates are proven to converge to at least a local maximum of the likelihood. Still, neither of these methods on its own yields a satisfactory solution to our problems, but we are investigating the possibility of approximating the signal with a finite-dimensional Markov chain and applying the HMM approach. Of course, this involves more approximations and is done on a case-by-case basis to ensure that the desired parameters appear properly in the approximation. The final known strategy that we are investigating is the Generalized Method of Moments (GMM) studied by Hansen (1982). This approach is intriguing since the parameter estimates can be constructed without the filtering algorithm and there is no cross dependence between the parameter estimation and the filtering, simplifying convergence proofs. Still, this method is also not completely general (for example, we require invertibility of the sensor function h as a condition) and requires many problem-dependent calculations or the convergence of an iteration scheme between observations. Hence, alternative strategies are still actively sought.

Hubert Chan, Michael Kouritzin, and Hongwei Long have developed a combined least-squares parameter estimation and filtering algorithm that works very well in practice for a variety of signals evolving over a compact space. The signal has to be enlarged to include derivatives of the original signal, but this is almost expected by analogy to the HMM and GMM methods. The filter for this enlarged signal contains enough information to produce proper least-squares estimates. The algorithm is designed to minimize (in the unknown parameters) the weighted mean-square error between the new observation and the predicted observation based on the model and the past observations. Hence, our method can be thought of as an extension of L. Ljung's scheme to partially observable systems. Currently, we allow parameters in both the drift and diffusion terms of the signal but assume that the observation noise is additive and independent of the signal. The implementation therefore involves predicting the signal one step in advance, which is already calculated in filtering algorithms just prior to the Bayes' rule update. However, to make the whole algorithm recursive we must substitute, in place of the desired filtering distribution, the distribution that we have calculated in terms of all prior parameter estimates, each computed assuming that the then-current estimate was the true parameter value. This substitution makes the mathematical proof of combined long-time convergence of the parameter-filter estimator pair extremely challenging. Still, if the parameter estimates are close to the true value and not changing dramatically, and the filter forgets earlier estimates, then one can argue that the near-future estimates should remain close to the true value. Hence, there is reason to be optimistic, but new mathematical methods definitely need to be employed to construct a proof. We discuss our ongoing work below.

back to top
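The recursive-substitution idea in the last section can be caricatured on an invented linear-Gaussian model: a Kalman filter is run with the current parameter estimate plugged in, while a recursive least-squares step driven by the observation prediction error updates that estimate, in the spirit of Ljung's recursive prediction-error scheme. This is a simplified stand-in, not the Chan-Kouritzin-Long algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy partially observed linear model (invented for illustration):
#   x_{k+1} = theta * x_k + w_k,   y_k = x_k + v_k.
theta_true, qw, rv, T = 0.8, 0.25, 0.25, 5000
x, ys = 0.0, []
for _ in range(T):
    x = theta_true * x + np.sqrt(qw) * rng.normal()
    ys.append(x + np.sqrt(rv) * rng.normal())

# Kalman filter run with the *current* parameter estimate substituted in,
# plus a recursive least-squares step on the observation prediction error.
theta_hat, xf, P, R = 0.5, 0.0, 1.0, 10.0
for y in ys:
    x_pred = theta_hat * xf                 # predict the signal one step ahead
    P_pred = theta_hat ** 2 * P + qw
    e = y - x_pred                          # prediction error on the observation
    psi = xf                                # crude sensitivity d(x_pred)/d(theta)
    R += psi ** 2                           # RLS normalisation term
    theta_hat += psi * e / R                # parameter update
    theta_hat = float(np.clip(theta_hat, -0.99, 0.99))  # keep the filter stable
    K = P_pred / (P_pred + rv)              # Bayes'-rule (Kalman) update
    xf = x_pred + K * e
    P = (1.0 - K) * P_pred
```

The circularity discussed above is visible in the loop: the filtered state xf that feeds the parameter update was itself computed with earlier, possibly wrong, parameter estimates, so the convergence of the pair has to be argued jointly rather than one component at a time.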