A Tutorial on Bayesian Estimation and Tracking Applicable to Nonlinear and Non-Gaussian Processes

Approved for Public Release; Distribution Unlimited. Case #05-0211.

MTR 05W0000004
MITRE TECHNICAL REPORT

A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes

January 2005
A.J. Haug

Sponsor: Dept. No.: MITRE MSR W400
Contract No.: W15P7T-04-D199
Project No.: 01MSR0115RT
MITRE Department Approval: Dr. Frank Driscoll
MITRE Project Approval: Dr. Garry Jacyna

The views, opinions and/or findings contained in this report are those of the MITRE Corporation and should not be construed as an official Government position, policy, or decision, unless designated by other documentation.

(c) 2005 The MITRE Corporation
Corporate Headquarters, McLean, Virginia

Abstract

Nonlinear filtering is the process of estimating and tracking the state of a nonlinear stochastic system from non-Gaussian noisy observation data. In this technical memorandum, we present an overview of techniques for nonlinear filtering for a wide variety of conditions on the nonlinearities and on the noise. We begin with the development of a general Bayesian approach to filtering, which is applicable to all linear or nonlinear stochastic systems. We show how Bayesian filtering requires integration over probability density functions that cannot be accomplished in closed form for the general nonlinear, non-Gaussian multivariate system, so approximations are required. Next, we address the special case where both the dynamic and observation models are nonlinear but the noises are additive and Gaussian. The extended Kalman filter (EKF) has been the standard technique usually applied here, but for severe nonlinearities the EKF can be very unstable and performs poorly. We show how to use the analytical expression for Gaussian densities to generate integral expressions for the mean and covariance matrices needed for the Kalman filter which include the nonlinearities directly inside the integrals. Several numerical techniques are presented that give approximate solutions for these integrals, including Gauss-Hermite quadrature, the unscented filter, and Monte Carlo approximations. We then show how these numerically generated integral solutions can be used in a Kalman filter so as to avoid the direct evaluation of the Jacobian matrix associated with the extended Kalman filter. For all filters, step-by-step block diagrams are used to illustrate the recursive implementation of each filter. To solve the fully nonlinear case, when the noise may be non-additive or non-Gaussian, we present several versions of particle filters that use importance sampling. Particle filters can be subdivided into two categories: those that re-use particles and require resampling to prevent divergence, and those that do not re-use particles and therefore require no resampling. For the first category, we show how the use of importance sampling, combined with particle re-use at each iteration, leads to the sequential importance sampling (SIS) particle filter and its special case, the bootstrap particle filter. The requirement for resampling is outlined and an efficient resampling scheme is presented. For the second class, we discuss a generic importance sampling particle filter and then add specific implementations, including the Gaussian particle filter and combination particle filters that bring together the Gaussian particle filter and either the Gauss-Hermite, unscented, or Monte Carlo Kalman filters developed above to specify a Gaussian importance density. When either the dynamic or observation model is linear, we show how the Rao-Blackwell simplifications can be applied to any of the filters presented to reduce computational costs. We then present results for two nonlinear tracking examples, one with additive Gaussian noise and one with non-Gaussian embedded noise. For each example, we apply the appropriate nonlinear filters and compare performance results.

Acknowledgement

The author would like to thank Drs. Roy Bethel, Chuck Burmaster, Carol Christou, and Garry Jacyna for their review and many helpful comments and suggestions that have contributed to the clarity of this report. Special thanks to Roy Bethel for his help with Appendix A and to Garry Jacyna for his extensive work on the likelihood function development for DIFAR sensors found in Appendix B.

Introduction

Nonlinear filtering problems abound in many diverse fields including economics, biostatistics, and numerous statistical signal and array processing engineering problems such as time series analysis, communications, radar and sonar target tracking, and satellite navigation. The filtering problem consists of recursively estimating, based on a set of noisy observations, at least the first two moments of the state vector governed by a dynamic nonlinear non-Gaussian state space model (DSS). A discrete-time DSS consists of a stochastic propagation (prediction or dynamic) equation, which links the current state vector to the prior state vector, and a stochastic observation equation, which links the observation data to the current state vector. In a Bayesian formulation, the DSS specifies the conditional density of the state given the previous state and that of the observation given the current state. When the dynamic and observation equations are linear and the associated noises are Gaussian, the optimal recursive filtering solution is the Kalman filter [1]. The most widely used filter for nonlinear systems with Gaussian additive noise is the well-known extended Kalman filter (EKF), which requires the computation of the Jacobian matrix of the state vector [2]. However, if the nonlinearities are significant, or the noise is non-Gaussian, the EKF gives poor performance (see [3] and [4], and the references contained therein).
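For reference, the linear-Gaussian Kalman recursion mentioned above can be sketched as follows. This is a generic textbook illustration, not code from this report; the matrix names F, H, Q, R and the scalar example values are hypothetical:

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter.

    x, P : posterior state estimate and covariance at time n-1
    y    : observation at time n
    F, H : linear dynamic and observation matrices (illustrative names)
    Q, R : dynamic and observation noise covariances
    """
    # Predict: propagate the state and covariance through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new observation.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new

# Toy scalar example: constant state, direct noisy observation.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])
x, P = kalman_step(x, P, np.array([0.5]), F, H, Q, R)
```

Note that the update pulls the estimate toward the observation and shrinks the covariance; the nonlinear filters developed later in the report replace the matrix products above with integrals over Gaussian densities.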
Other early approaches to the study of nonlinear filtering can be found in [2] and [5]. Recently, several new approaches to recursive nonlinear filtering have appeared in the literature. These include grid-based methods [3], Monte Carlo methods, Gauss quadrature methods [6]-[8] and the related unscented filter [4], and particle filter methods [3], [7], [9]-[13]. Most of these filtering methods have their basis in computationally intensive numerical integration techniques that have been around for a long time but have become popular again due to the exponential increase in computer power over the last decade.

In this paper, we will review some of the recently developed filtering techniques applicable to a wide variety of nonlinear stochastic systems in the presence of both additive Gaussian and non-Gaussian noise. We begin in Section 2 with the development of a general Bayesian approach to filtering, which is applicable to both linear and nonlinear stochastic systems, and requires the evaluation of integrals over probability and probability-like density functions. The integrals inherent in such a development cannot be solved in closed form for the general multivariate case, so integration approximations are required. In Section 3, the noise for both the dynamic and observation equations is assumed to be additive and Gaussian, which leads to efficient numerical integration approximations. It is shown in Appendix A that the Kalman filter is applicable for cases where both the dynamic and measurement noise are additive and Gaussian, without any assumptions on the linearity of the dynamic and measurement equations. We show how to use analytical expressions for Gaussian densities to generate integral expressions for the mean and covariance matrices needed for the Kalman filter, which include the nonlinearities directly inside the integrals. The most widely used numerical approximations used to evaluate these integrals include Gauss-Hermite quadrature, the unscented filter, and Monte Carlo integration. In all three approximations, the integrals are replaced by discrete finite sums, leading to a nonlinear approximation to the Kalman filter which avoids the direct evaluation of the Jacobian matrix associated with the extended Kalman filter. The three numerical integration techniques, combined with a Kalman filter, result in three numerical nonlinear filters: the Gauss-Hermite Kalman filter (GHKF), the unscented Kalman filter (UKF), and the Monte Carlo Kalman filter (MCKF).

Section 4 returns to the general case and shows how it can be reformulated using recursive particle filter concepts to offer an approximate solution to nonlinear/non-Gaussian filtering problems. To solve the fully nonlinear case, when the noise may be non-additive and/or non-Gaussian, we present several versions of particle filters that use importance sampling. Particle filters can be subdivided into two categories: those that re-use particles and require resampling to prevent divergence, and those that do not re-use particles and therefore require no resampling. For the particle filters that require resampling, we show how the use of importance sampling, combined with particle re-use at each iteration, leads to the sequential importance sampling particle filter (SIS PF) and its special case, the bootstrap particle filter (BPF). The requirement for resampling is outlined and an efficient resampling scheme is presented. For particle filters requiring no resampling, we discuss a generic importance sampling particle filter and then add specific implementations, including the Gaussian particle filter and combination particle filters that bring together the Gaussian particle filter and either the Gauss-Hermite, unscented, or Monte Carlo Kalman filters developed above to specify a Gaussian importance density from which samples are drawn. When either the dynamic or observation model is linear, we show how the Rao-Blackwell simplifications can be applied to any of the filters presented to reduce computational costs [14]. A roadmap of the nonlinear filters presented in Sections 2 through 4 is shown in Fig. 1.

In Section 5 we present an example in which the noise is assumed additive and Gaussian. In the past, the problem of tracking the geographic position of a target based on noisy passive array sensor data mounted on a maneuvering observer has been solved by breaking the problem into two complementary parts: tracking the relative bearing using noisy narrowband array sensor data [15], [16] and tracking the geographic position of a target from noisy bearings-only measurements [10], [17], [18]. In this example, we formulate a new approach to single-target tracking in which we use the sensor outputs of a passive ring array mounted on a maneuvering platform as our observations, and recursively estimate the position and velocity of a constant-velocity target in a fixed geographic coordinate system. First, the sensor observation model is extended from narrowband to broadband. Then, the complex sensor data are used in a Kalman filter that estimates the geo-track updates directly, without first updating the relative target bearing. This solution is made possible by utilizing an observation model that includes the highly nonlinear geographic-to-array coordinate transformation and a second complex-to-real transformation. For this example we compare the performance results of the Gauss-Hermite quadrature, the unscented, and the Monte Carlo Kalman filters developed in Section 3.

A second example is presented in Section 6, in which a constant-velocity vehicle is tracked through a field of DIFAR (Directional Frequency Analysis and Recording) sensors. For this problem, the observation noise is non-Gaussian and embedded in the nonlinear observation equation, so it is an ideal application of a particle filter. All of the particle filters presented in Section 4 are applied to this problem and their results are compared. All particle filter applications require an analytical expression for the likelihood function, so Appendix B presents the development of the likelihood function for a DIFAR sensor for target signals with bandwidth-time products much greater than one. Our summary and conclusions are found in Section 7.

(Figure 1: Roadmap to Techniques Developed in Sections 2 Through 4)

In what follows, we treat bold small letters (x) and bold large letters (Q) as vectors and matrices, respectively, with [.]^H representing the complex conjugate transpose of a vector or matrix, [.]^T representing just the transpose, and <.> or E(.) used as the expectation operator. It should be noted that this tutorial assumes that the reader is well versed in the use of Kalman and extended Kalman filters.

General Bayesian Filter

A nonlinear stochastic system can be defined by a stochastic discrete-time state space transition (dynamic) equation

    x_n = f_n(x_{n-1}, w_{n-1}),    (1)

and the stochastic observation (measurement) process

    y_n = h_n(x_n, v_n),    (2)

where at time t_n, x_n is the (usually hidden or not observable) system state vector, w_n is the dynamic noise vector, y_n is the real (in comparison to complex) observation vector, and v_n is the observation noise vector. The deterministic functions f_n and h_n link the prior state to the current state and the current state to the observation vector, respectively. For complex observation vectors, we can always make them real by doubling the observation vector dimension using the in-phase and quadrature parts (see Appendix A.)
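As a concrete illustration of Eqs. 1 and 2, the following sketch simulates a scalar discrete-time state-space model. The particular f and h are hypothetical choices, a common benchmark in the nonlinear-filtering literature, not the models used in this report:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, w):
    # Illustrative nonlinear dynamic function: x_n = f(x_{n-1}, w_{n-1}).
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + w

def h(x, v):
    # Illustrative nonlinear observation function: y_n = h(x_n, v_n).
    return x**2 / 20.0 + v

# Generate a hidden state trajectory and its noisy observations.
N = 50
x = np.zeros(N)
y = np.zeros(N)
x[0] = 0.1
y[0] = h(x[0], rng.normal(scale=1.0))
for n in range(1, N):
    x[n] = f(x[n - 1], rng.normal(scale=np.sqrt(10.0)))  # dynamic noise w
    y[n] = h(x[n], rng.normal(scale=1.0))                # observation noise v
```

A filter only ever sees the sequence y and must recover the first two moments of the hidden sequence x, which is exactly the recursive estimation problem posed above.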
In a Bayesian context, the problem is to quantify the posterior density p(x_n | y_{1:n}), where the observations are specified by y_{1:n} := {y_1, y_2, ..., y_n}. The above nonlinear non-Gaussian state-space model, Eq. 1, specifies the predictive conditional transition density, p(x_n | x_{n-1}, y_{1:n-1}), of the current state given the previous state and all previous observations. Also, the observation process equation, Eq. 2, specifies the likelihood function of the current observation given the current state, p(y_n | x_n). The prior probability, p(x_n | y_{1:n-1}), is defined by Bayes' rule as

    p(x_n | y_{1:n-1}) = ∫ p(x_n | x_{n-1}, y_{1:n-1}) p(x_{n-1} | y_{1:n-1}) dx_{n-1}.    (3)

Here, the previous posterior density is identified as p(x_{n-1} | y_{1:n-1}). The correction step generates the posterior probability density function from

    p(x_n | y_{1:n}) = c p(y_n | x_n) p(x_n | y_{1:n-1}),    (4)

where c is a normalization constant. The filtering problem is to estimate, in a recursive manner, the first two moments of x_n given y_{1:n}. For a general distribution, p(x), this consists of the recursive estimation of the expected value of any function of x, say <g(x)>_{p(x)}, using Eqs. 3 and 4, and requires calculation of integrals of the form

    <g(x)>_{p(x)} = ∫ g(x) p(x) dx.    (5)

But for a general multivariate distribution these integrals cannot be evaluated in closed form, so some form of integration approximation must be made. This memorandum is primarily concerned with a variety of numerical approximations for solving integrals of the form given by Eq. 5.

The Gaussian Approximation

Consider the case where the noise is additive and Gaussian, so that Eqs. 1 and 2 can be written as

    x_n = f_n(x_{n-1}) + w_{n-1},    (6)

and

    y_n = h_n(x_n) + v_n,    (7)

where w_n and v_n are modeled as independent Gaussian random variables with mean 0 and covariances Q_n and R_n, respectively. The initial state x_0 is also modeled as a stochastic variable, which is independent of the noise, with mean x̂_0 and covariance P^{xx}_0. Now, assuming that the deterministic functions f and h, as well as the
covariance matrices Q and R, are not dependent on time, from Eq. 6 we can identify the predictive conditional density as

    p(x_n | x_{n-1}, y_{1:n-1}) = N(x_n; f(x_{n-1}), Q),    (8)

where the general form of the multivariate Gaussian distribution N(t; s, Σ) is defined by

    N(t; s, Σ) := [1/√((2π)^n |Σ|)] exp{ -(1/2) [t - s]^T Σ^{-1} [t - s] }.    (9)

We can now write Eq. 3 as

    p(x_n | y_{1:n-1}) = ∫ N(x_n; f(x_{n-1}), Q) p(x_{n-1} | y_{1:n-1}) dx_{n-1}.    (10)

Much of the Gaussian integral formulation shown below is a recasting of the material found in Ito, et al. [6]. For the Gaussian distribution N(t; f(s), Σ), we can write the expected value of t as

    <t> := ∫ t N(t; f(s), Σ) dt = f(s).    (11)

Using Eq. 10, it immediately follows that

    <x_n | y_{1:n-1}> := E{x_n | y_{1:n-1}}
        = ∫ x_n p(x_n | y_{1:n-1}) dx_n
        = ∫ x_n [ ∫ N(x_n; f(x_{n-1}), Q) p(x_{n-1} | y_{1:n-1}) dx_{n-1} ] dx_n
        = ∫ [ ∫ x_n N(x_n; f(x_{n-1}), Q) dx_n ] p(x_{n-1} | y_{1:n-1}) dx_{n-1}
        = ∫ f(x_{n-1}) p(x_{n-1} | y_{1:n-1}) dx_{n-1},    (12)

where Eq. 11 was used to evaluate the inner integral above. Now, assume that

    p(x_{n-1} | y_{1:n-1}) = N(x_{n-1}; x̂_{n-1|n-1}, P^{xx}_{n-1|n-1}),    (13)

where x̂_{n-1|n-1} and P^{xx}_{n-1|n-1} are estimates of the mean and covariance of x_{n-1}, given y_{1:n-1}, respectively. Estimates of the mean and covariance of x_n, given y_{1:n-1}, x̂_{n|n-1} and P^{xx}_{n|n-1}, respectively, can now be obtained from Eq. 12 as follows:

    x̂_{n|n-1} = ∫ f(x_{n-1}) N(x_{n-1}; x̂_{n-1|n-1}, P^{xx}_{n-1|n-1}) dx_{n-1},    (14)

and

    P^{xx}_{n|n-1} = Q + ∫ f(x_{n-1}) f^T(x_{n-1}) N(x_{n-1}; x̂_{n-1|n-1}, P^{xx}_{n-1|n-1}) dx_{n-1} - x̂_{n|n-1} x̂^T_{n|n-1}.    (15)

The expected value of y_n, given y_{1:n-1}, can be obtained from

    <y_n | y_{1:n-1}> := E{y_n | y_{1:n-1}} = ∫ h(x_n) p(x_n | y_{1:n-1}) dx_n.    (16)

Now, if we use a Gaussian approximation of p(x_n | y_{1:n-1}) given by

    p(x_n | y_{1:n-1}) = N(x_n; x̂_{n|n-1}, P^{xx}_{n|n-1}),    (17)

we can obtain an estimate, ŷ_{n|n-1}, of <y_n | y_{1:n-1}> from

    ŷ_{n|n-1} = ∫ h(x_n) N(x_n; x̂_{n|n-1}, P^{xx}_{n|n-1}) dx_n.    (18)

Appendix A: A Kalman Filter for Nonlinear and Complex
Observation Processes

The posterior (conditional) density is the PDF of x(t_n) := x(n) given the observations y_{1:n} := {y_1, y_2, ..., y_n}. It can be written in terms of the joint density of x(n) and y_{1:n} as

    p(x(n) | y_{1:n}) = p(x(n), y_{1:n}) / p(y_{1:n}).    (A-1)

Since the left-hand side is a density defined in terms of the real variable x(n), the right-hand side must also be written in terms of real variables. For the case where p(x(n) | y_{1:n}) is a normalized Gaussian PDF, if we define a normalized jointly Gaussian PDF p(x(n), y_{1:n}) such that the normalization constant p(y_{1:n}) can be ignored, then

    p(x(n) | y_{1:n}) = p(x(n), y_{1:n}).    (A-2)

We will approximate the joint density p(x(n), y_{1:n}) by the predictive density, i.e.,

    p(x(n), y_{1:n}) ≈ p(x(n), y(n) | x(n-1), y_{1:n-1}).    (A-3)

Now, let

    x̂ := x̂_{n|n} = E{x(n) | y_{1:n}},    (A-4)
    P^{xx} := P^{xx}_{n|n} = E{[x(n) - x̂][x(n) - x̂]^T | y_{1:n}}.    (A-5)

The Gaussian posterior density can then be written as

    p(x(n) | y_{1:n}) = N(x̂, P^{xx}) = [1/((2π)^{N_x/2} |P^{xx}|^{1/2})] exp{ -(1/2) A },    (A-6)

where N_x is the dimension of x, and

    A := [x(n) - x̂]^T (P^{xx})^{-1} [x(n) - x̂]
       = x^T(n) (P^{xx})^{-1} x(n) - x^T(n) (P^{xx})^{-1} x̂ - x̂^T (P^{xx})^{-1} x(n) + x̂^T (P^{xx})^{-1} x̂.    (A-7)

Returning to the joint density p(x(n), y(n)), we want to address the case where y(n) is complex. Since the joint PDF must be written in terms of real variables, let

    y(n) := [y^I(n); y^Q(n)],    (A-8)

where y^I(n) and y^Q(n) are the in-phase and quadrature parts of y(n), respectively. Now define the joint vector

    z(n) := [x(n); y(n)].    (A-9)

Assuming that the joint PDF is Gaussian, p(z(n)) ~ N(z̄(n), P^{zz}), where

    z̄(n) = E{ [x(n); y(n)] | x(n-1), y_{1:n-1} } = [x̂_{n|n-1}; ŷ_{n|n-1}],    (A-10)

and

    P^{zz} = E{ [z(n) - z̄(n)][z(n) - z̄(n)]^T | x(n-1), y_{1:n-1} }
           = [P^{xx}, P^{xy}; P^{yx}, P^{yy}]
           = [P^{xx}_{n|n-1}, P^{xy}_{n|n-1}; P^{yx}_{n|n-1}, P^{yy}_{n|n-1}].    (A-11, A-12)

The inverse of P^{zz} is given by

    (P^{zz})^{-1} = [C_11, C_12; C_21, C_22],    (A-13)

with

    C_11 := (P^{xx} - P^{xy} (P^{yy})^{-1} P^{yx})^{-1},    (A-14a)
    C_12 := -C_11 P^{xy} (P^{yy})^{-1},    (A-14b)
    C_21 := -C_22 P^{yx} (P^{xx})^{-1},    (A-14c)
    C_22 := (P^{yy} - P^{yx} (P^{xx})^{-1} P^{xy})^{-1}.    (A-14d)

Thus, the joint PDF is

    p(z(n)) = [1/((2π)^{(N_x + 2N_y)/2} |P^{zz}|^{1/2})] exp{ -(1/2) B },    (A-15)

with

    B := [x(n) - x̄(n); y(n) - ȳ(n)]^T (P^{zz})^{-1} [x(n) - x̄(n); y(n) - ȳ(n)]
       = [x(n) - x̄(n)]^T C_11 [x(n) - x̄(n)] + [x(n) - x̄(n)]^T C_12 [y(n) - ȳ(n)]
         + [y(n) - ȳ(n)]^T C_21 [x(n) - x̄(n)] + [y(n) - ȳ(n)]^T C_22 [y(n) - ȳ(n)]
       = x^T(n) C_11 x(n) + x^T(n) [-C_11 x̄(n) + C_12 (y(n) - ȳ(n))] + ...    (A-16)

Comparing the first term of Eq. A-7 with the first term of Eq. A-16 yields

    (P^{xx})^{-1} = C_11 = (P^{xx}_{n|n-1} - P^{xy} (P^{yy})^{-1} P^{yx})^{-1}.    (A-17)

Thus,

    P^{xx} = P^{xx}_{n|n-1} - P^{xy} (P^{yy})^{-1} P^{yx}.    (A-18)

Comparing the second term of Eq. A-7 with the second term of Eq. A-16 results in

    (P^{xx})^{-1} x̂ = C_11 x̄(n) - C_12 (y(n) - ȳ(n))
                    = (P^{xx}_{n|n-1} - P^{xy} (P^{yy})^{-1} P^{yx})^{-1} x̄(n)
                      + (P^{xx}_{n|n-1} - P^{xy} (P^{yy})^{-1} P^{yx})^{-1} P^{xy} (P^{yy})^{-1} (y(n) - ȳ(n)).    (A-19)

Solving for x̂ and using Eq. A-18 yields

    x̂ = x̄(n) + P^{xy} (P^{yy})^{-1} [y(n) - ȳ(n)].    (A-20)

Now, defining the Kalman gain as

    K_n := P^{xy} (P^{yy})^{-1},    (A-21)

and using Eqs. A-18, A-20, and A-21, the Kalman filter equations for complex observations are given by

    x̂_{n|n} = x̂_{n|n-1} + K_n [y(n) - ŷ_{n|n-1}],    (A-22)
    P^{xx}_{n|n} = P^{xx}_{n|n-1} - K_n P^{yy}_{n|n-1} K_n^T,    (A-23)

where

    P^{xx}_{n|n-1} = E{ [x_n - x̂_{n|n-1}][x_n - x̂_{n|n-1}]^T },    (A-24)
    P^{xy}_{n|n-1} = E{ [x_n - x̂_{n|n-1}][y_n - ŷ_{n|n-1}]^T },    (A-25)
    P^{yy}_{n|n-1} = E{ [y_n - ŷ_{n|n-1}][y_n - ŷ_{n|n-1}]^T }.    (A-26)

Appendix B: Derivation of the Likelihood Function for a DIFAR Bearing Observation

We start by considering θ(t) = tan^{-1}(C_2(t)/C_1(t)), with C_1(t) ~ N(η_1, σ_1²) and C_2(t) ~ N(η_2, σ_2²). C_1(t) and C_2(t) are defined by the time-domain equivalent of Eq. 103b. We will first determine the probability density p_z(z), where z = C_2(t)/C_1(t), and then the likelihood function p(θ(t_n) | x_{n|n}), where θ(t_n) is the bearing output of the DIFAR sensor at time t_n and x_{n|n} is the estimated vehicle position at time t_n based
on all observations up to and including time t_n. Consider the case where C_2(t)/C_1(t) ≤ z. Then, it follows that:

    C_2(t) ≤ z C_1(t)  if  C_1(t) ≥ 0,
    C_2(t) ≥ z C_1(t)  if  C_1(t) ≤ 0.    (B-1)

The distribution function F_z(z) can therefore be written as

    F_z(z) = ∫_0^∞ ∫_{-∞}^{C_1 z} p(C_1, C_2) dC_2 dC_1 + ∫_{-∞}^0 ∫_{C_1 z}^∞ p(C_1, C_2) dC_2 dC_1.    (B-2)

Then, the probability density function can be obtained from

    p_z(z) := (d/dz) F_z(z) = ∫_0^∞ C_1 p(C_1, C_1 z) dC_1 - ∫_{-∞}^0 C_1 p(C_1, C_1 z) dC_1
            = ∫_0^∞ C_1 p(C_1, C_1 z) dC_1 + ∫_0^∞ C_1 p(-C_1, -C_1 z) dC_1.    (B-3)

Since C_1 and C_2 are jointly Gaussian,

    p(C_1, C_2) = [1/(2π σ_1 σ_2 √(1 - r²))] exp{ -[1/(2(1 - r²))] [ (C_1 - η_1)²/σ_1² - 2r(C_1 - η_1)(C_2 - η_2)/(σ_1 σ_2) + (C_2 - η_2)²/σ_2² ] },    (B-4)

where r is the normalized correlation coefficient. If we note that C_2 = C_1 z and expand the exponential and complete the square, p(C_1, C_2) can be written as

    p(C_1, C_1 z) = B exp{ -[α_1/(2(1 - r²))] (C_1 - α_2/α_1)² } exp{ -[1/(2(1 - r²))] (α_3 - α_2²/α_1) },    (B-5)

where

    B := 1/(2π σ_1 σ_2 √(1 - r²)),    (B-6a)
    α_1 := 1/σ_1² - 2rz/(σ_1 σ_2) + z²/σ_2²,    (B-6b)
    α_2 := η_1/σ_1² - r(η_1 z + η_2)/(σ_1 σ_2) + η_2 z/σ_2²,    (B-6c)
    α_3 := η_1²/σ_1² - 2r η_1 η_2/(σ_1 σ_2) + η_2²/σ_2².    (B-6d)

Now, the first half of Eq. B-3 becomes

    ∫_0^∞ C_1 p(C_1, C_1 z) dC_1 = B exp{ -[1/(2(1 - r²))] (α_3 - α_2²/α_1) } I,    (B-7)

where

    I := ∫_0^∞ C_1 exp{ -[α_1/(2(1 - r²))] (C_1 - α_2/α_1)² } dC_1.    (B-8)

Now, we modify I in the following way:

    I = ∫_0^∞ (C_1 - α_2/α_1) exp{ -[α_1/(2(1 - r²))] (C_1 - α_2/α_1)² } dC_1
      + (α_2/α_1) ∫_0^∞ exp{ -[α_1/(2(1 - r²))] (C_1 - α_2/α_1)² } dC_1.    (B-9)

Defining u := C_1 - α_2/α_1, this becomes

    I = ∫_{-α_2/α_1}^∞ exp{ -[α_1/(2(1 - r²))] u² } u du + (α_2/α_1) ∫_{-α_2/α_1}^∞ exp{ -[α_1/(2(1 - r²))] u² } du.    (B-10)

The first integral can be evaluated directly to yield

    ∫_{-α_2/α_1}^∞ exp{ -[α_1/(2(1 - r²))] u² } u du = [(1 - r²)/α_1] exp{ -(α_2²/α_1)/(2(1 - r²)) }.    (B-11)

If we let v := √(α_1/(1 - r²)) u, then the second integral in Eq. B-10 becomes

    (α_2/α_1) ∫_{-α_2/α_1}^∞ exp{ -[α_1/(2(1 - r²))] u² } du = δ(α_2) √(2π α_2² (1 - r²)/α_1³) [1/√(2π)] ∫_{-δ(α_2)√(α_2²/(α_1 (1 - r²)))}^∞ e^{-v²/2} dv,    (B-12)

where δ(x) := sign of x. Defining a function Φ(x) (related to the error function) as

    Φ(x) := [1/√(2π)] ∫_x^∞ e^{-u²/2} du,    (B-13)

we can write Eq. B-12 as

    (α_2/α_1) ∫_{-α_2/α_1}^∞ exp{ -[α_1/(2(1 - r²))] u² } du = δ(α_2) √(2π α_2² (1 - r²)/α_1³) Φ( -δ(α_2) [α_2²/(α_1 (1 - r²))]^{1/2} ).    (B-14)

Substituting Eqs. B-11 and B-14 into Eq. B-10 yields

    I = [(1 - r²)/α_1] exp{ -(α_2²/α_1)/(2(1 - r²)) } + δ(α_2) √(2π α_2² (1 - r²)/α_1³) Φ( -δ(α_2) [α_2²/(α_1 (1 - r²))]^{1/2} ),    (B-15)

and Eq. B-7 becomes

    ∫_0^∞ C_1 p(C_1, C_1 z) dC_1 = B [(1 - r²)/α_1] exp{ -α_3/(2(1 - r²)) }
      + δ(α_2) B √(2π α_2² (1 - r²)/α_1³) exp{ -[1/(2(1 - r²))] (α_3 - α_2²/α_1) } Φ( -δ(α_2) [α_2²/(α_1 (1 - r²))]^{1/2} ).    (B-16)

Define

    α(σ_1, σ_2, η_1, η_2, r) := α_3/(2(1 - r²)) = (σ_2² η_1² - 2r σ_1 σ_2 η_1 η_2 + σ_1² η_2²)/(2 σ_1² σ_2² (1 - r²)),    (B-17)
    β(z; σ_1, σ_2, η_1, η_2, r) := α_2²/α_1 = [(σ_2² η_1 - r σ_1 σ_2 η_2) + (σ_1² η_2 - r σ_1 σ_2 η_1) z]² / [σ_1² σ_2² (σ_2² - 2r σ_1 σ_2 z + σ_1² z²)],    (B-18)
    γ(z; σ_1, σ_2, η_1, η_2, r) := [1/(2(1 - r²))] (α_3 - α_2²/α_1) = (η_2 - η_1 z)² / [2(σ_2² - 2r σ_1 σ_2 z + σ_1² z²)],    (B-19)
    δ(z; σ_1, σ_2, η_1, η_2, r) := sign[ (σ_2² η_1 - r σ_1 σ_2 η_2) + (σ_1² η_2 - r σ_1 σ_2 η_1) z ].    (B-20)

After extensive algebra, using the above definitions, we obtain

    ∫_0^∞ C_1 p(C_1, C_1 z) dC_1 = [σ_1 σ_2 (1 - r²)^{1/2} / (2π (σ_2² - 2r σ_1 σ_2 z + σ_1² z²))] e^{-α(σ_1, σ_2, η_1, η_2, r)}
      + [δ σ_1 σ_2 β(z; σ_1, σ_2, η_1, η_2, r)^{1/2} / (√(2π) (σ_2² - 2r σ_1 σ_2 z + σ_1² z²))] e^{-γ(z; σ_1, σ_2, η_1, η_2, r)}
        Φ( -δ(z; σ_1, σ_2, η_1, η_2, r) β(z; σ_1, σ_2, η_1, η_2, r)^{1/2} / (1 - r²)^{1/2} ).    (B-21)

Returning to Eq. B-3, we can note that ∫_0^∞ C_1 p(-C_1, -C_1 z) dC_1 is identical in form to ∫_0^∞ C_1 p(C_1, C_1 z) dC_1 if we replace η_1 by -η_1 and η_2 by -η_2. Thus, we can finally write

    p_z(z) = [σ_1 σ_2 / (σ_2² - 2r σ_1 σ_2 z + σ_1² z²)] { [(1 - r²)^{1/2}/(2π)] [ e^{-α(σ_1, σ_2, η_1, η_2, r)} + e^{-α(σ_1, σ_2, -η_1, -η_2, r)} ]
      + (δ/√(2π)) β(z; σ_1, σ_2, η_1, η_2, r)^{1/2} e^{-γ(z; σ_1, σ_2, η_1, η_2, r)} Φ( -δ(z; σ_1, σ_2, η_1, η_2, r) β(z; σ_1, σ_2, η_1, η_2, r)^{1/2} / (1 - r²)^{1/2} )
      - (δ/√(2π)) β(z; σ_1, σ_2, -η_1, -η_2, r)^{1/2} e^{-γ(z; σ_1, σ_2, -η_1, -η_2, r)} Φ( δ(z; σ_1, σ_2, η_1, η_2, r) β(z; σ_1, σ_2, -η_1, -η_2, r)^{1/2} / (1 - r²)^{1/2} ) },    (B-22)

where we have used the fact that δ(z; σ_1, σ_2, -η_1, -η_2, r) = -δ(z; σ_1, σ_2, η_1, η_2, r). The likelihood function of the form p_θ(θ(t) | θ_{n|n}) is given by

    p_θ(θ(t) | θ_{n|n}) = p_z(z(t) | θ_{n|n}) / |dθ(t)/dz(t)|,    (B-23)

where θ(t) is the output of the DIFAR sensor at time t, θ_{n|n} is
the estimate of θ obtained from x_{n|n} using Eq. 107, and z = tan θ(t). In what follows, let θ(t) → θ and θ_{n|n} → θ_0. Note that η_i, σ_i, and r are functions of θ_0. It follows immediately that

    p_θ(θ | θ_0) = p_z(z | θ_0) (1 + z²),    (B-24)

where 0 ≤ θ ≤ 2π. We must now determine η_1, η_2, σ_1, σ_2, and r. Examine the correlation output defined by

    C_i = (1/T) ∫_{-T/2}^{T/2} y_i(t) y_0(t) dt,  i = 1, 2,    (B-25)

where

    y_0(t) := s(t) + n_0(t),
    y_1(t) := s(t) cos θ_0 + n_1(t),
    y_2(t) := s(t) sin θ_0 + n_2(t).    (B-26)

Assume that s(t), n_0(t), n_1(t), and n_2(t) are independent zero-mean Gaussian processes. Now

    η_1 = E{C_1} = E{s²(t)} cos θ_0 = cos θ_0 ∫_{-∞}^∞ S_s(f) df,    (B-27)

where S_s(f) is the source spectrum. Now examine

    E{C_i²} = (1/T²) ∫_{-T/2}^{T/2} ∫_{-T/2}^{T/2} E{ y_i(t) y_i(s) y_0(t) y_0(s) } dt ds.    (B-28)

But, for i = 1,

    E{ y_1(t) y_1(s) y_0(t) y_0(s) } = cos² θ_0 E{s²(t) s²(s)} + E{s(t) s(s)} E{n_1(t) n_1(s)}
      + cos² θ_0 E{s(t) s(s)} E{n_0(t) n_0(s)} + E{n_0(t) n_0(s)} E{n_1(t) n_1(s)}
      = [R_s²(0) + 2R_s²(t - s)] cos² θ_0 + R_s(t - s) R_{n1}(t - s)
        + R_s(t - s) R_{n0}(t - s) cos² θ_0 + R_{n0}(t - s) R_{n1}(t - s),    (B-29, B-30)

where R_s(τ) and R_{ni}(τ) are the signal and noise autocorrelation functions under a wide-sense stationary assumption. Examine

    J = (1/T²) ∫_{-T/2}^{T/2} ∫_{-T/2}^{T/2} R_1(t - s) R_2(t - s) dt ds = (1/T²) ∫_{-T/2}^{T/2} ∫_{-T/2-s}^{T/2-s} R_1(u) R_2(u) du ds.    (B-31)

Interchanging the order of integration, it follows that

    J = (1/T) ∫_{-T}^{T} (1 - |u|/T) R_1(u) R_2(u) du.    (B-32)

If the correlation time τ_i ≪ T, implying that BT ≫ 1, where B is the bandwidth, then

    J ≈ (1/T) ∫_{-∞}^∞ R_1(u) R_2(u) du = (1/T) ∫_{-∞}^∞ S_1(f) S_2(f) df,    (B-33)

where S_1(f) and S_2(f) are the corresponding spectra. Using the above results, we conclude that

    E{C_1²} = R_s²(0) cos² θ_0 + (2 cos² θ_0/T) ∫_{-∞}^∞ S_s²(f) df + (cos² θ_0/T) ∫_{-∞}^∞ S_s(f) S_{n0}(f) df
      + (1/T) ∫_{-∞}^∞ S_s(f) S_{n1}(f) df + (1/T) ∫_{-∞}^∞ S_{n0}(f) S_{n1}(f) df.    (B-34)

Thus

    σ_1² = (1/T) ∫_{-∞}^∞ [ 2S_s²(f) cos² θ_0 + S_s(f) S_{n0}(f) cos² θ_0 + S_s(f) S_{n1}(f) + S_{n0}(f) S_{n1}(f) ] df.    (B-35)

Similarly,

    σ_2² = (1/T) ∫_{-∞}^∞ [ 2S_s²(f) sin² θ_0 + S_s(f) S_{n0}(f) sin² θ_0 + S_s(f) S_{n2}(f) + S_{n0}(f) S_{n2}(f) ] df.    (B-36)

In addition, it follows that

    E{C_1 C_2} = (1/T²) ∫_{-T/2}^{T/2} ∫_{-T/2}^{T/2} E{ y_1(t) y_0(t) y_2(s) y_0(s) } dt ds,

with

    E{ y_1(t) y_0(t) y_2(s) y_0(s) } = [ R_s²(0) + 2R_s²(t - s) + R_s(t - s) R_{n0}(t - s) ] cos θ_0 sin θ_0.    (B-37)

Thus

    E{ (C_1 - η_1)(C_2 - η_2) } = (1/T) { ∫_{-∞}^∞ [ 2S_s²(f) + S_s(f) S_{n0}(f) ] df } cos θ_0 sin θ_0,    (B-38)

and now we can identify

    r = E{ (C_1 - η_1)(C_2 - η_2) } / (σ_1 σ_2) = [1/(T σ_1 σ_2)] { ∫_{-∞}^∞ [ 2S_s²(f) + S_s(f) S_{n0}(f) ] df } cos θ_0 sin θ_0.    (B-39)

Note that, provided BT ≫ 1, both C_1 and C_2 are approximately jointly Gaussian random variables. For computational purposes, we make some simplifying assumptions. Let

    S_{n0}(f) = σ_N²/(2B),    (B-40a)
    S_{ni}(f) = ρ S_{n0}(f),    (B-40b)
    S_s(f) = σ_S²/(2B),    (B-40c)

which assumes that the noise has a bandlimited white spectrum. Note that this can be modified for any given bandlimited spectra, requiring integration over the frequency band. ρ is the dipole channel noise gain, which is 1/2 or 1/3 for 2-D or 3-D isotropic noise, respectively [27]. Now we obtain

    η_1 = σ_S² cos θ_0,    (B-41a)
    η_2 = σ_S² sin θ_0,    (B-41b)
    σ_1² = [σ_N⁴/(4BT)] [ SNR (2 SNR + 1) cos² θ_0 + ρ (SNR + 1) ],    (B-41c)
    σ_2² = [σ_N⁴/(4BT)] [ SNR (2 SNR + 1) sin² θ_0 + ρ (SNR + 1) ],    (B-41d)

where SNR = σ_S²/σ_N². In addition,

    E{ (C_1 - η_1)(C_2 - η_2) } = [σ_N⁴/(4BT)] SNR (2 SNR + 1) cos θ_0 sin θ_0,

and, thus,

    r = SNR (2 SNR + 1) cos θ_0 sin θ_0 / { [ SNR (2 SNR + 1) cos² θ_0 + ρ (SNR + 1) ]^{1/2} [ SNR (2 SNR + 1) sin² θ_0 + ρ (SNR + 1) ]^{1/2} }.    (B-42)

Given the form of Eqs. B-17 - B-20, we can normalize η_1, η_2, σ_1, σ_2 such that Eq. B-41 becomes

    η̃_1 = √(2BT) SNR cos θ_0,    (B-43a)
    η̃_2 = √(2BT) SNR sin θ_0,    (B-43b)
    σ̃_1² = SNR (2 SNR + 1) cos² θ_0 + ρ (SNR + 1),    (B-43c)
    σ̃_2² = SNR (2 SNR + 1) sin² θ_0 + ρ (SNR + 1).    (B-43d)

To summarize, for z = tan θ:

    p_θ(θ | θ_0) = (1 + z²) p_z(z | θ_0),    (B-44)

    p_z(z | θ_0) = [σ̃_1 σ̃_2 / (σ̃_2² - 2r σ̃_1 σ̃_2 z + σ̃_1² z²)] { [(1 - r²)^{1/2}/π] e^{-α(σ̃_1, σ̃_2, η̃_1, η̃_2, r)}
      + (δ/√(2π)) β(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r)^{1/2} e^{-γ(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r)}
        [ Φ( -δ(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r) β(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r)^{1/2} / (1 - r²)^{1/2} )
          - Φ( δ(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r) β(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r)^{1/2} / (1 - r²)^{1/2} ) ] },    (B-45)

with

    α(σ̃_1, σ̃_2, η̃_1, η̃_2, r) := (σ̃_2² η̃_1² - 2r σ̃_1 σ̃_2 η̃_1 η̃_2 + σ̃_1² η̃_2²) / (2 σ̃_1² σ̃_2² (1 - r²)),    (B-46a)
    β(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r) := [ (σ̃_2² η̃_1 - r σ̃_1 σ̃_2 η̃_2) + (σ̃_1² η̃_2 - r σ̃_1 σ̃_2 η̃_1) z ]² / [ σ̃_1² σ̃_2² (σ̃_2² - 2r σ̃_1 σ̃_2 z + σ̃_1² z²) ],    (B-46b)
    γ(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r) := (η̃_2 - η̃_1 z)² / [ 2(σ̃_2² - 2r σ̃_1 σ̃_2 z + σ̃_1² z²) ],    (B-46c)
    δ(z; σ̃_1, σ̃_2, η̃_1, η̃_2, r) := sign[ (σ̃_2² η̃_1 - r σ̃_1 σ̃_2 η̃_2) + (σ̃_1² η̃_2 - r σ̃_1 σ̃_2 η̃_1) z ],    (B-46d)

and

    Φ(x) := [1/√(2π)] ∫_x^∞ e^{-u²/2} du.    (B-47)

A plot of p_θ(θ | θ_0) as a function of SNR and bearing for a target vehicle at 20° is shown in Figure B.1.

(Figure B.1: Likelihood Function for a Signal at 20 Degrees for a Variety of SNRs)

References

[1] A. Gelb, Ed., Applied Optimal Estimation, The MIT Press, Cambridge, MA, 1974.

[2] A. H. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, San Diego, CA, 1970.

[3] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Trans. on Signal Processing, 50(2), February 2002, pp. 174-188.

[4] S. J. Julier and J. K. Uhlmann, "Unscented Filtering and Nonlinear Estimation," Proceedings of the IEEE, 92(3), March 2004, pp. 401-422.

[5] H. J. Kushner, "Approximations to Optimal Nonlinear Filters," IEEE Trans. on Automatic Control, AC-12(5), October 1967, pp. 546-556.

[6] K. Ito and K. Xiong, "Gaussian Filters for Nonlinear Filtering Problems," IEEE Trans. Automatic Control, 45(5), May 2000, pp. 910-927.

[7] E. Bolviken and G. Storvik, "Deterministic and Stochastic Particle Filters in
State-Space Models," in Sequential Monte Carlo Methods in Practice, A. Doucet, J. F. G. de Freitas, and N. J. Gordon, Eds., New York: Springer-Verlag, 2001.

[8] A. Honkela, "Approximating Nonlinear Transformations of Probability Distributions for Nonlinear Independent Component Analysis," Proceedings of the IEEE Int. Conf. on Neural Networks (IJCNN 2004), Budapest, Hungary, 2004, pp. 2169-2174.

[9] A. Doucet, J. F. G. de Freitas, and N. J. Gordon, "An Introduction to Sequential Monte Carlo Methods," in Sequential Monte Carlo Methods in Practice, A. Doucet, J. F. G. de Freitas, and N. J. Gordon, Eds., New York: Springer-Verlag, 2001.

[10] N. Gordon, D. Salmond, and A. F. M. Smith, "Novel Approach to Nonlinear and Non-Gaussian State Estimation," Proc. Inst. Elect. Eng. F, Vol. 140, 1993, pp. 107-113.

[11] S. Maskell and N. Gordon, "A Tutorial on Particle Filters for On-line Nonlinear/Non-Gaussian Bayesian Tracking," IEE Workshop on Target Tracking: Algorithms and Applications, 16 October 2001, pp. 2-1 to 2-15.

[12] J. H. Kotecha and P. M. Djurić, "Gaussian Particle Filtering," IEEE Trans. on Signal Processing, 51(10), October 2003, pp. 2592-2601.

[13] R. van der Merwe, A. Doucet, N. de Freitas, and E. Wan, "The Unscented Particle Filter," Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department, August 2000.

[14] K. Murphy and S. Russell, "Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks," in Sequential Monte Carlo Methods in Practice, A. Doucet, J. F. G. de Freitas, and N. J. Gordon, Eds., New York: Springer-Verlag, 2001.

[15] R. E. Zarnich, K. L. Bell, and H. L. Van Trees, "A Unified Method for Measurement and Tracking of Contacts from an Array of Sensors," IEEE Trans. Signal Processing, 49(12), December 2001, pp. 2950-2961.

[16] M. Orton and W. Fitzgerald, "A Bayesian Approach to Tracking Multiple Targets Using Sensor Arrays and Particle Filters," IEEE Trans. Signal Processing, 50(2), February 2002, pp. 216-223.

[17] S. C. Nardone, A. G. Lindgren, and K. F. Gong, "Fundamental Properties and Performance of
Conventional Bearings-Only Target Motion Analysis," IEEE Trans. Automatic Control, AC-29(9), September 1984, pp. 775-787.

[18] R. L. Streit and M. J. Walsh, "A Linear Least Squares Algorithm for Bearing-Only Target Motion Analysis," in Proceedings of the 1999 IEEE Aerospace Conf., March 6-13, 1999, Vol. 4, pp. 6-13.

[19] J. M. Bernardo and A. F. Smith, Bayesian Theory, John Wiley & Sons, New York, NY, 1994.

[20] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, 2nd Edition, Cambridge University Press, New York, NY, 1992, pp. 147-161.

[21] G. H. Golub, "Some Modified Matrix Eigenvalue Problems," SIAM Review, 15(2), April 1973, pp. 318-334.

[22] J. S. Ball, "Orthogonal Polynomials, Gaussian Quadratures, and PDEs," Computing in Science and Engineering, November/December 1999, pp. 92-95.

[23] S. M. Ross, Introduction to Probability Models, Fourth Edition, Academic Press, San Diego, CA, 1989.

[24] P. M. Djurić, Y. Huang, and T. Ghirmai, "Perfect Sampling: A Review and Application to Signal Processing," IEEE Trans. on Signal Processing, 50(2), February 2002, pp. 345-356.

[25] B. D. Ripley, Stochastic Simulation, John Wiley & Sons, New York, NY, 1987.

[26] M. Briers, S. R. Maskell, and R. Wright, "A Rao-Blackwellised Unscented Kalman Filter," in Proceedings of the 6th Int. Conf. on Information Fusion, Vol. 1, July 8-11, 2003.

[27] G. Casella and C. P. Robert, "Rao-Blackwellisation of Sampling Schemes," Biometrika, 83(1), 1996, pp. 81-94.

[28] C. Hue, J. Le Cadre, and P. Perez, "Sequential Monte Carlo Methods for Multiple Target Tracking and Data Fusion," IEEE Trans. Signal Processing, 50(2), February 2002, pp. 309-325.

[29] S. W. Davies, "Bearing Accuracies for Arctan Processing of Crossed Dipole Arrays," OCEANS 1987, 19 September 1987, pp. 351-356.

[30] H. Cox and R. M. Zeskind, "Adaptive Cardioid Processing," in 26th Asilomar Conference on Signals, Systems and Computers, 26-28 October 1992, pp. 1058-1062.

[31] B. H. Maranda, "The Statistical Accuracy of an Arctangent
Bearing Estimator," in Proceedings of OCEANS 2003, 22-26 September 2003, pp. 2127-2132.

[32] A. D. Mars, "Asynchronous Multi-Sensor Tracking in Clutter with Uncertain Sensor Locations using Bayesian Sequential Monte Carlo Methods," in 2001 IEEE Aerospace Conference Proceedings, Vol. 5, 10-17 March 2001, pp. 2171-2178.

[33] "Special Issue on Monte Carlo Methods for Statistical Signal Processing," IEEE Transactions on Signal Processing, 50(2), February 2002.

[34] "Special Issue on Sequential State Estimation," Proceedings of the IEEE, 92(3), March 2004.

[35] P. Fearnhead, "Sequential Monte Carlo Methods in Filter Theory," Merton College, University of Oxford, 1998.

[36] R. Karlsson, "Simulation Based Methods for Target Tracking," Division of Automatic Control & Communication Systems, Department of Electrical Engineering, Linköpings universitet, Linköping, Sweden, 2002.

[37] S. M. Herman, "A Particle Filtering Approach to Joint Passive Radar Tracking and Target Classification," University of Illinois at Urbana-Champaign, 2002.

[38] T. Schön, "On Computational Methods for Nonlinear Estimation," Division of Automatic Control & Communication Systems, Department of Electrical Engineering, Linköpings universitet, Linköping, Sweden, 2003.

[39] C. Andrieu, M. Davy, and A. Doucet, "Efficient Particle Filtering for Jump Markov Systems. Application to Time-Varying Autoregressions," IEEE Trans. Signal Processing, 51(7), July 2003, pp. 1762-1770.

[40] C. Andrieu and A. Doucet, "Joint Bayesian Model Selection and Estimation of Noisy Sinusoids via Reversible Jump MCMC," IEEE Trans. Signal Processing, 47(10), October 1999, pp. 2667-2676.

[41] J. R. Larocque and J. P. Reilly, "Reversible Jump MCMC for Joint Detection and Estimation of Sources in Colored Noise," IEEE Trans. Signal Processing, 50(2), February 2002, pp. 231-240.

[42] J. J. Rajan and P. J. W. Rayner, "Parameter Estimation of Time-Varying Autoregressive Models using the Gibbs Sampler," Electronics Letters, 31(13), June 1995, pp. 1035-1036.

[43] J. J. Rajan and A.
Kawana, "Bayesian Model Order Selection using the Gibbs Sampler," Electronics Letters, 32(13), June 1996, pp. 1156-1157.

[44] D. Sornette and K. Ide, "The Kalman-Levy Filter," Physica D, Vol. 151, 2001, pp. 142-174.

[45] N. Gordon, J. Percival, and M. Robinson, "The Kalman-Levy Filter and Heavy-Tailed Models for Tracking Maneuvering Targets," in Proc. 6th Int. Conf. on Information Fusion, 7-10 July 2003, pp. 1024-1031.

Distribution List

A. Shah, E. Giannopoulos, T. Luginbuhl, S. Grenadier, ARL/UT, PMS 401, R. Graman, H. Megonigal, ASTO, W400, R. Zarnich, T. Oliver, F. Driscoll, A. Haug, G. Jacyna, S. Polk, V. Wrick, GD-AIS, W407, M. Cho, C. Burmaster, C. Christou, K. McAdow, J. Messerschmidt, Anteon, BA&H, JHU/APL, J. Stapleton, W901, Metron, R. Bethel, D. Clites, D. Colella, J. Creekmore, S. Pawlukiewicz, L. Stone, NUWC
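The bearing likelihood of Eqs. (B-44)-(B-47) is straightforward to evaluate numerically. The sketch below is a minimal Python implementation of those expressions; the channel parameters used in the usage example (unit channel standard deviations, correlation r = 0.1, and mean channel outputs modeled as a cos 20° and a sin 20°, with the amplitude a acting as an SNR knob) are illustrative assumptions, not values taken from the report.

```python
import math


def Phi(x):
    """Upper-tail Gaussian integral of Eq. (B-47):
    (1/sqrt(2*pi)) * integral_x^inf exp(-u^2/2) du."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))


def p_z(z, s1, s2, e1, e2, r):
    """Density of the channel ratio z, Eq. (B-45).

    s1, s2 play the role of sigma-tilde_1, sigma-tilde_2; e1, e2 of
    eta-tilde_1, eta-tilde_2; r is the channel correlation.
    """
    # Common quadratic appearing in Eqs. (B-45), (B-46b), and (B-46c)
    D = s2 * s2 - 2.0 * r * s1 * s2 * z + (s1 * z) ** 2
    # Eq. (B-46a)
    alpha = ((s2 * e1) ** 2 - 2.0 * r * s1 * s2 * e1 * e2 + (s1 * e2) ** 2) \
        / (2.0 * (s1 * s2) ** 2 * (1.0 - r * r))
    # Shared numerator of Eqs. (B-46b) and (B-46d)
    num = (s2 * s2 * e1 - r * s1 * s2 * e2) + (s1 * s1 * e2 - r * s1 * s2 * e1) * z
    beta = num * num / ((s1 * s2) ** 2 * D)        # Eq. (B-46b)
    gamma = (e2 - e1 * z) ** 2 / (2.0 * D)         # Eq. (B-46c)
    delta = math.copysign(1.0, num)                # Eq. (B-46d)
    t = delta * math.sqrt(beta / (1.0 - r * r))
    term1 = math.sqrt(1.0 - r * r) / math.pi * math.exp(-alpha)
    term2 = delta * math.sqrt(beta) / math.sqrt(2.0 * math.pi) \
        * math.exp(-gamma) * (Phi(-t) - Phi(t))
    return s1 * s2 / D * (term1 + term2)


def p_theta(theta, s1, s2, e1, e2, r):
    """Bearing likelihood of Eq. (B-44): (1 + z^2) * p_z with z = tan(theta)."""
    z = math.tan(theta)
    return (1.0 + z * z) * p_z(z, s1, s2, e1, e2, r)


if __name__ == "__main__":
    # Illustrative target at 20 degrees; amplitude a sets the effective SNR.
    th0 = math.radians(20.0)
    for a in (0.5, 2.0, 5.0):
        v = p_theta(th0, 1.0, 1.0, a * math.cos(th0), a * math.sin(th0), 0.1)
        print(f"a = {a}: p_theta(20 deg) = {v:.4f}")
```

Sweeping the amplitude and plotting over bearing reproduces the qualitative behavior shown in Figure B.1: the density is nearly flat at low SNR and concentrates sharply around 20° as SNR grows. As a sanity check, integrating p_theta over (-pi/2, pi/2), which corresponds to integrating p_z over the whole real line, returns one.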
