State Estimation Algorithms for Markov Chains Observed in Arbitrary Noise

Authors

Malcolm, William

Publisher

Institute of Electrical and Electronics Engineers (IEEE Inc)

Abstract

In this article we compute state estimation schemes for discrete-time Markov chains observed in arbitrary observation noise, where we assume the observation noise distribution is known in advance. Appealing to a fundamental L1 convergence result in [1], we propose to represent any practical observation noise model by a convex combination of Gaussian densities, that is, a mixture function that is itself a valid probability density function. To compute our state estimation schemes we use the techniques of reference probability (see [2]); here, however, our Gaussian mixtures appear as sums in a product representation of Radon-Nikodym derivatives. The state estimation schemes we compute are: an information-state recursion (filter), a general smoothing theorem, and an M-ary detection scheme. A computer simulation is provided to indicate the performance of our recursive filter in a non-Gaussian observation noise scenario.
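The filtering idea in the abstract can be illustrated with a minimal sketch. The code below is not the paper's reference-probability derivation; it is a standard unnormalized HMM filter in which the observation likelihood is a convex combination of Gaussian densities, under the illustrative assumptions that observations are scalar and of the form y_k = c(X_k) + v_k with v_k drawn from a known Gaussian mixture. All function and variable names here are hypothetical, not taken from the paper.

```python
import numpy as np

def gauss(x, mu, sigma):
    # Univariate Gaussian density N(x; mu, sigma^2)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_density(x, weights, means, sigmas):
    # Convex combination of Gaussian densities; a valid pdf when the
    # weights are nonnegative and sum to one.
    return sum(w * gauss(x, m, s) for w, m, s in zip(weights, means, sigmas))

def hmm_mixture_filter(y, A, c, weights, means, sigmas, q0):
    """Recursive state filter for a discrete-time Markov chain X_k with
    transition matrix A (A[i, j] = P(X_{k+1}=j | X_k=i)), observed as
    y_k = c[X_k] + v_k, where v_k has a known Gaussian-mixture density.
    Returns the conditional state distribution at each time step."""
    q = np.asarray(q0, dtype=float).copy()
    estimates = []
    for yk in y:
        # Likelihood of yk under each chain state: the noise density
        # evaluated at the residual yk - c[i].
        likes = np.array([mixture_density(yk - ci, weights, means, sigmas)
                          for ci in c])
        q = likes * (A.T @ q)   # predict with A, then correct with likelihoods
        q = q / q.sum()         # normalize the information state
        estimates.append(q.copy())
    return np.array(estimates)
```

With a bimodal (hence non-Gaussian) noise density, for example equal-weight Gaussian components at ±0.5, the same recursion applies unchanged; only the likelihood evaluation reflects the mixture.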

Source

Proceedings of IEEE Conference on Decision and Control 2008

Restricted until

2037-12-31