7 Kalman Filter Basics

7.1 INTRODUCTION

7.1.1 What is a Kalman Filter?

It is an extremely effective and versatile procedure for combining noisy sensor outputs to estimate the state of a system with uncertain dynamics. For our purposes in this book:

- The noisy sensors may include GPS receivers and inertial sensors (typically accelerometers and gyroscopes), but may also include speed sensors (e.g., wheel speeds of land vehicles, water speed sensors for ships, air speed sensors for aircraft, or Doppler radar) and time sensors (clocks).

- The system state in question may include the position, velocity, acceleration, attitude, and attitude rate of a vehicle on land, at sea, in the air, or in space, but it may also include ancillary "nuisance variables" for modeling correlated noise sources (e.g., GPS Selective Availability timing errors) and time-varying parameters of the sensors, such as scale factor, output bias, or (for clocks) frequency. Selective Availability has been suspended as of May 1, 2000.

- Uncertain dynamics includes unpredictable disturbances of the host vehicle, whether caused by a human operator or by the medium (e.g., winds, surface currents, turns in the road, or terrain changes), but it may also include unpredictable changes in the sensor parameters.

Global Positioning Systems, Inertial Navigation, and Integration. Mohinder S. Grewal, Lawrence R. Weill, Angus P. Andrews. Copyright © 2001 John Wiley & Sons, Inc. Print ISBN 0-471-35032-X; Electronic ISBN 0-471-20071-9.

More abstract treatments of the Kalman filter are presented in [18, 19, 40, 46, 67, 69, 71, 72], and a more basic introduction can be found in [31].

7.1.2 How It Works

The Kalman filter maintains two types of variables:

1. Estimated State Vector. The components of the estimated state vector include the following:

   (a) The variables of interest (i.e., what we want or need to know, such as position and velocity).
   (b) "Nuisance variables" that are of no intrinsic interest but may be necessary to the estimation process. These may include, for example, the Selective Availability errors of the GPS satellites. We generally do not wish to know their values but may be obliged to calculate them to improve the receiver estimate of position.

   (c) The Kalman filter state variables for a specific application must include all those system dynamic variables that are measurable by the sensors used in that application. For example, a Kalman filter for a system containing accelerometers and rate gyroscopes must contain acceleration and rotation rate components to which these instruments respond. The acceleration and angular rate components do not have to be those along the sensor input axes, however. The Kalman filter state variables could be the components along locally level earth-fixed coordinates, even though the sensors measure components in vehicle-body-fixed coordinates. In similar fashion, the Kalman filter state variables for GPS-only navigation must contain the position coordinates of the receiver antenna, but these could be geodetic latitude, longitude, and altitude with respect to a reference ellipsoid; geocentric latitude, longitude, and altitude with respect to a reference sphere; ECEF Cartesian coordinates; ECI coordinates; or any equivalent coordinates.

2. A Covariance Matrix: a Measure of Estimation Uncertainty. The equations used to propagate the covariance matrix (collectively called the Riccati equation) model and manage uncertainty, taking into account how sensor noise and dynamic uncertainty contribute to uncertainty about the estimated system state.
By maintaining an estimate of its own estimation uncertainty and the relative uncertainty in the various sensor outputs, the Kalman filter is able to combine all sensor information "optimally," in the sense that the resulting estimate minimizes any quadratic loss function of estimation error, including the mean-squared value of any linear combination of state estimation errors. The Kalman gain is the optimal weighting matrix for combining new sensor data with a prior estimate to obtain a new estimate.

7.2 STATE AND COVARIANCE CORRECTION

The Kalman filter is a two-step process, the steps of which we call "prediction" and "correction." The filter can start with either step, but we will begin by describing the correction step first. The correction step makes corrections to an estimate, based on new information obtained from sensor measurements.

The Kalman gain matrix K is the crown jewel of Kalman filtering. All the effort of solving the matrix Riccati equation is for the sole purpose of computing the "optimal" value of the gain matrix K used for correcting an estimate \hat{x}:

    \hat{x}(+) = \hat{x}(-) + K \left[ z - H \hat{x}(-) \right],    (7.1)

where \hat{x}(+) is the corrected estimate, \hat{x}(-) the predicted estimate, K the gain, z the measurement, and H \hat{x}(-) the predicted measurement. The correction is based on a measurement

    z = H x + \text{noise}    (7.2)

that is a linear function of the vector variable x to be estimated plus additive noise with known statistical properties.

We will derive a formula for the Kalman gain based on an analogous filter called the Gaussian maximum-likelihood estimator. It uses the analogies shown in Table 7.1 between concepts in Kalman filtering, Gaussian probability distributions, and likelihood functions. The derivation begins with background on properties of Gaussian probability distributions and Gaussian likelihood functions, then develops models for noisy sensor outputs and derives the associated maximum-likelihood estimate (MLE) for combining prior estimates with noisy sensor measurements.
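The correction step of Eq. 7.1 can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the book; the gain expression K = P(-) Hᵀ (H P(-) Hᵀ + R)⁻¹ used here is the standard textbook form of the Kalman gain:

```python
import numpy as np

def kalman_correct(x_prior, P_prior, z, H, R):
    """One Kalman correction (measurement update) step, as in Eq. 7.1.

    x_prior : predicted state estimate, shape (n,)
    P_prior : predicted covariance, shape (n, n)
    z       : measurement vector, shape (l,)
    H       : measurement sensitivity matrix, shape (l, n)
    R       : measurement noise covariance, shape (l, l)
    """
    S = H @ P_prior @ H.T + R              # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# Scalar example: prior estimate 0 with unit variance, measurement 1
# with unit noise variance.
x, P = kalman_correct(np.array([0.0]), np.eye(1), np.array([1.0]),
                      np.eye(1), np.eye(1))
```

With equal prior and measurement uncertainty, the corrected estimate comes out halfway between the two, consistent with the weighted-average interpretation developed later in this section.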
7.2.1 Gaussian Probability Density Functions

Probability density functions (PDFs) are nonnegative integrable functions whose integral equals unity (i.e., 1). The density functions of Gaussian probability distributions all have the form

    p(x) = \frac{1}{\sqrt{(2\pi)^n \det P}} \exp\left[ -\tfrac{1}{2} (x - m)^T P^{-1} (x - m) \right],    (7.3)

where n is the dimension of P (i.e., P is an n x n matrix) and the parameters

    m \overset{\text{def}}{=} E_{x \in \mathcal{N}(m,P)} \langle x \rangle    (7.4)

      \overset{\text{def}}{=} \int_{-\infty}^{+\infty} dx_1 \cdots \int_{-\infty}^{+\infty} dx_n \, p(x) \, x,    (7.5)

    P \overset{\text{def}}{=} E_{x \in \mathcal{N}(m,P)} \langle (x - m)(x - m)^T \rangle    (7.6)

      \overset{\text{def}}{=} \int_{-\infty}^{+\infty} dx_1 \cdots \int_{-\infty}^{+\infty} dx_n \, p(x) \, (x - m)(x - m)^T.    (7.7)

The parameter m is the mean of the distribution. It will be a column vector with the same dimensions as the variate x. The parameter P is the covariance matrix of the distribution. By its definition, it will always be an n x n symmetric and nonnegative definite matrix. However, because its determinant appears in the denominator of the square root and its inverse appears in the exponential function argument, it must be positive definite as well. That is, its eigenvalues must be real and positive for the definition to work.

The constant factor 1 / \sqrt{(2\pi)^n \det P} in Eq. 7.3 is there to make the integral of the probability density function equal to unity, a necessary condition for all probability density functions. The operator E\langle \cdot \rangle is the expectancy operator, also called the expected-value operator. The notation x \in \mathcal{N}(m, P) denotes that the variate (i.e., random variable) x is drawn from the Gaussian distribution with mean m and covariance P. Gaussian distributions are also called normal or Laplace distributions.
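Eq. 7.3 translates directly into code. A minimal sketch (the function name is my own, not from the book), which solves against P rather than inverting it explicitly:

```python
import numpy as np

def gaussian_pdf(x, m, P):
    """Multivariate Gaussian density of Eq. 7.3; P must be positive definite."""
    n = len(m)
    d = x - m
    norm = np.sqrt((2.0 * np.pi) ** n * np.linalg.det(P))
    # solve(P, d) computes P^{-1} d without forming the explicit inverse
    return np.exp(-0.5 * d @ np.linalg.solve(P, d)) / norm

# At the mean of a 1-D standard normal, the density is 1/sqrt(2*pi)
p0 = gaussian_pdf(np.zeros(1), np.zeros(1), np.eye(1))
```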
TABLE 7.1 Analogous Concepts in Three Different Contexts

    Kalman filtering    Gaussian probability distributions    Maximum-likelihood estimation
    ----------------    ----------------------------------    -----------------------------
    --                  Probability distribution              Likelihood function v
    Estimate            Mean                                  argmax(v)^a
    Covariance          Covariance                            Information

^a argmax(f) returns the argument x of the function f where f(x) achieves its maximum value. For example, argmax(sin) = \pi/2 and argmax(cos) = 0.

7.2.2 Likelihood Functions

Likelihood functions are similar to probability density functions, except that their integrals are not constrained to equal unity, or even required to be finite. They are useful for comparing relative likelihoods and for finding the value of the unknown independent variable x at which the likelihood function achieves its maximum, as illustrated in Fig. 7.1.

7.2.2.1 Gaussian Likelihood Functions. Gaussian likelihood functions have the form

    v(x; m, Y) = \exp\left[ -\tfrac{1}{2} (x - m)^T Y (x - m) \right],    (7.8)

where the parameter Y is called the information matrix of the likelihood function. It replaces P^{-1} in the Gaussian probability density function. If the information matrix Y is nonsingular, then its inverse Y^{-1} = P is a covariance matrix. However, an information matrix is not required to be nonsingular. This property of information matrices is important for representing the information from a set of measurements (sensor outputs) with incomplete information for determining the unknown vector x. That is, the measurements may provide no information about some linear combinations of the components of x, as illustrated in Fig. 7.2.

7.2.2.2 Scaling of Likelihood Functions. Maximum-likelihood estimation is based on the argmax of the likelihood function, but for any positive scalar c > 0,

    \operatorname{argmax}(c\,v) = \operatorname{argmax}(v).    (7.9)

Fig. 7.1 Maximum-likelihood estimate.

That is, positive scaling of likelihood functions will have no effect on the maximum-likelihood estimate.
As a consequence, likelihood functions can have arbitrary positive scaling.

7.2.2.3 Independent Likelihood Functions. The joint probability of independent events A and B is the product P(A & B) = P(A) x P(B). The analogous effect on independent likelihood functions v_A and v_B is the pointwise product. That is, at each "point" x,

    v_{AB}(x) = v_A(x) \times v_B(x).    (7.10)

7.2.2.4 Pointwise Products. One of the remarkable attributes of Gaussian likelihood functions is that their pointwise products are also Gaussian likelihood functions, as illustrated in Fig. 7.3. Given two Gaussian likelihood functions with parameter sets \{m_A, Y_A\} and \{m_B, Y_B\}, their pointwise product is a scaled Gaussian likelihood function with parameters \{m_{AB}, Y_{AB}\} such that, for all x,

    \exp\left[ -\tfrac{1}{2}(x - m_{AB})^T Y_{AB} (x - m_{AB}) \right] = c \, \exp\left[ -\tfrac{1}{2}(x - m_A)^T Y_A (x - m_A) \right] \exp\left[ -\tfrac{1}{2}(x - m_B)^T Y_B (x - m_B) \right]    (7.11)

for some constant c > 0.

Fig. 7.2 Likelihood without maximum.

One can solve Eq. 7.11 for the parameters \{m_{AB}, Y_{AB}\} as functions of the parameters \{m_A, Y_A, m_B, Y_B\}. Taking logarithms of both sides of Eq. 7.11 produces the equation

    -\tfrac{1}{2}(x - m_{AB})^T Y_{AB} (x - m_{AB}) = \log(c) - \tfrac{1}{2}(x - m_A)^T Y_A (x - m_A) - \tfrac{1}{2}(x - m_B)^T Y_B (x - m_B).    (7.12)

Next, taking the first and second derivatives with respect to the independent variable x produces the equations

    Y_{AB}(x - m_{AB}) = Y_A(x - m_A) + Y_B(x - m_B),    (7.13)

    Y_{AB} = Y_A + Y_B,    (7.14)

respectively.

Information is Additive. The information matrix of the combined likelihood function (Y_{AB} in Eq. 7.14) equals the sum of the individual information matrices of the component likelihood functions (Y_A and Y_B in Eq. 7.14).

Combined Maximum-Likelihood Estimate is a Weighted Average. Equation 7.13 evaluated at x = 0 is

    Y_{AB} m_{AB} = Y_A m_A + Y_B m_B,    (7.15)

Fig. 7.3 Pointwise products of Gaussian likelihood functions.
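Eqs. 7.14 and 7.15 say that information matrices add and that the combined mean solves Y_AB m_AB = Y_A m_A + Y_B m_B. A small NumPy sketch (the helper name is hypothetical, not from the book), using the pseudoinverse so that singular information matrices are tolerated:

```python
import numpy as np

def combine_likelihoods(m_a, Y_a, m_b, Y_b):
    """Combine two Gaussian likelihoods per Eqs. 7.14 and 7.15."""
    Y_ab = Y_a + Y_b                      # information is additive (7.14)
    # Moore-Penrose pseudoinverse handles a singular combined information matrix
    m_ab = np.linalg.pinv(Y_ab) @ (Y_a @ m_a + Y_b @ m_b)
    return m_ab, Y_ab

# 1-D check: equal information -> combined mean is the midpoint of the means
m_ab, Y_ab = combine_likelihoods(np.array([0.0]), np.eye(1),
                                 np.array([2.0]), np.eye(1))
```

Here two equally informative likelihoods centered at 0 and 2 combine to a mean of 1 with doubled information, i.e., the information-weighted average.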
which can be solved for

    m_{AB} = Y_{AB}^{\dagger} (Y_A m_A + Y_B m_B)    (7.16)

           = (Y_A + Y_B)^{\dagger} (Y_A m_A + Y_B m_B),    (7.17)

where \dagger denotes the Moore-Penrose inverse of a matrix (defined in Section B.1.4.7). Equations 7.14 and 7.17 are key results for deriving the formula for Kalman gain. All that remains is to define likelihood function parameters for noisy sensors.

7.2.3 Noisy Measurement Likelihoods

The term measurements refers to outputs of sensors that are to be used in estimating the argument vector x of a likelihood function. Measurement models represent how these measurements are related to x, including those errors called measurement errors or sensor noise. These models can be expressed in terms of likelihood functions with x as the argument.

7.2.3.1 Measurement Vector. The collective output values from a number \ell of sensors are the components of a vector

    z \overset{\text{def}}{=} [ z_1, z_2, z_3, \ldots, z_\ell ]^T,    (7.18)

called the measurement vector, a column vector with \ell rows.

7.2.3.2 Measurement Sensitivity Matrix. We suppose that the measured values z_i are linearly¹ related to the unknown vector x we wish to estimate,

    z = H x,    (7.19)

where H is the measurement sensitivity matrix.

7.2.3.3 Measurement Noise. Measurement noise is the electronic noise at the output of the sensors. It is assumed to be additive,

    z = H x + v,    (7.20)

where the measurement noise vector v is assumed to be zero-mean Gaussian noise with known covariance R,

    E\langle v \rangle \overset{\text{def}}{=} 0,    (7.21)

    R \overset{\text{def}}{=} E\langle v v^T \rangle.    (7.22)

¹ The Kalman filter is defined in terms of the measurement sensitivity matrix H, but the extended Kalman filter (described in Section 7.6.4) can be defined in terms of a suitably differentiable vector-valued function h(x).

7.2.3.4 Measurement Likelihood. A measurement vector z and its associated covariance matrix of measurement noise R define a likelihood function for the "true" value of the measurement (i.e., without noise).
This likelihood function will have mean m_z = z and information matrix Y_z = R^{-1}, assuming R is nonsingular.

7.2.3.5 Unknown Vector Likelihoods. The same parameters defining measurement likelihoods also define an inferred likelihood function for the true value of the unknown vector, with mean

    m_x = H^{\dagger} m_z    (7.23)

        = H^{\dagger} z    (7.24)

and information matrix

    Y_x = H^T Y_z H    (7.25)

        = H^T R^{-1} H,    (7.26)

where the n x \ell matrix H^{\dagger} is the Moore-Penrose generalized inverse (defined in Appendix B) of the \ell x n matrix H. This information matrix will be singular if \ell < n (i.e., there are fewer sensor outputs than unknown variables), which is not unusual for GPS/INS integration.

7.2.4 Gaussian MLE

7.2.4.1 Variables. Gaussian MLE uses the following variables:

\hat{x}, the maximum-likelihood estimate of x. It will always equal the argmax (mean m) of an associated Gaussian likelihood function, but it can have different values:
    \hat{x}(-), representing the likelihood function prior to using the measurements.
    \hat{x}(+), representing the likelihood function after using the measurements.

P, the covariance matrix of estimation uncertainty. It will always equal the inverse of the information matrix Y of the associated likelihood function. It also can have two values:
    P(-), representing the likelihood function prior to using the measurements.
    P(+), representing the likelihood function after using the measurements.

z, the vector of measurements.
H, the measurement sensitivity matrix.
R, the covariance matrix of sensor noise.

7.2.4.2 Update Equations. The MLE formula for updating the variables \hat{x} and P to reflect the effect of measurements can be derived from Eqs. 7.14 and 7.17, with initial likelihood parameters

    m_A = \hat{x}(-),    (7.27)

the MLE before measurements, and

    Y_A = P(-)^{-1},    (7.28)

the inverse of the covariance matrix of MLE uncertainty before measurements.
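The information matrix of Eq. 7.26 and its singularity when \ell < n can be checked numerically. Here H and R are made-up values chosen only for illustration (one measurement, two unknowns):

```python
import numpy as np

# Information matrix of a measurement set (Eq. 7.26): Y_x = H^T R^{-1} H.
# With fewer measurements than unknowns (l < n), Y_x is singular: the
# measurements carry no information about some directions of x.
H = np.array([[1.0, 0.0]])     # one measurement of the first state (l=1, n=2)
R = np.array([[0.5]])          # measurement noise variance
Y_x = H.T @ np.linalg.inv(R) @ H

rank = np.linalg.matrix_rank(Y_x)   # rank 1 < n = 2, so Y_x is singular
```

The second state is completely unobserved here, so Y_x has a zero eigenvalue in that direction, which is exactly the "likelihood without maximum" situation of Fig. 7.2.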
The likelihood function of x inferred from the measurements alone (i.e., without taking into account the prior estimate) is represented by the likelihood function parameters

    Y_B = H^T R^{-1} H,    (7.29)

the information matrix of the measurements, and

    m_B = H^{\dagger} z,    (7.30)

where z is the measurement vector.

7.2.4.3 Covariance Update. The Gaussian likelihood function with parameters \{m_{AB}, Y_{AB}\} of Eqs. 7.14 and 7.17 then represents the state of knowledge about the unknown vector x combining both sources (i.e., the prior likelihood and the effect of the measurements). That is, the covariance of MLE uncertainty after using the measurements will be

    P(+) = Y_{AB}^{-1},    (7.31)

and the MLE of x after using the measurements will then be

    \hat{x}(+) = m_{AB}.    (7.32)

Equation 7.14 can be simplified by applying the following general matrix formula²:

    (A^{-1} + B C^{-1} D)^{-1} = A - A B (C + D A B)^{-1} D A,    (7.33)

² A formula with many discoverers. Henderson and Searle [53] list some earlier ones.

[...]

Systems of First-Order Linear Differential Equations. The so-called state space models for dynamic systems replace higher-order differential equations with systems of first-order differential equations. This can be done by defining the first n - 1 derivatives of an nth-order differential equation as state variables.

Example 7.2: State Space Model for Harmonic Oscillator. Equation 7.48 is a linear second-order [...]

[...] Equation 7.59 is the discrete-time dynamic system model corresponding to the continuous-time dynamic system model of Eq. 7.51. The matrix \Phi_{k-1} (defined in Eq. 7.60) in the discrete-time model (Eq. 7.59) is called a state transition matrix for the dynamic system defined by F. Note that \Phi depends only on F, and not on the dynamic disturbance function w(t). The noise vectors w_k are the discrete-time analog of the dynamic [...]
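Eq. 7.33 (the matrix inversion lemma) can be verified numerically. Applied with A = P(-), B = Hᵀ, C = R, D = H, it turns the information-form covariance update (P(-)⁻¹ + HᵀR⁻¹H)⁻¹ into the familiar covariance-form update. A NumPy check with random, well-conditioned matrices (an illustration, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
n, l = 3, 2

# Random symmetric positive definite prior covariance P and noise covariance R
Ap = rng.standard_normal((n, n))
P = Ap @ Ap.T + n * np.eye(n)
Bn = rng.standard_normal((l, l))
R = Bn @ Bn.T + l * np.eye(l)
H = rng.standard_normal((l, n))

# Information form of the covariance update: (P^-1 + H^T R^-1 H)^-1
info_form = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H)

# Eq. 7.33 with A = P, B = H^T, C = R, D = H:
smw_form = P - P @ H.T @ np.linalg.inv(R + H @ P @ H.T) @ H @ P

err = np.max(np.abs(info_form - smw_form))   # agree to machine precision
```

The covariance form on the right needs only an l x l inverse rather than an n x n one, which is why it is the form used in the Kalman filter.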
[...] of the state vector, and the covariance matrix represents the mean-squared uncertainty in the estimate. The time update equations of the Kalman filter propagate the estimated value and its associated mean-squared uncertainty forward in time.

7.3.2.1 Zero-Mean White Gaussian Noise Processes. A zero-mean white Gaussian noise process in discrete time is a sequence of [...]

[...] time period between samples and \tau is the exponential decay time constant of the process. The steady-state variance \sigma^2 of such a process is the solution to its steady-state variance equation,

    \sigma^2 = e^{-2\,\Delta t/\tau} \sigma^2 + Q,    (7.76)

    \sigma^2 = \frac{Q}{1 - e^{-2\,\Delta t/\tau}},    (7.77)

    Q = \sigma^2 \left( 1 - e^{-2\,\Delta t/\tau} \right),    (7.78)

where Q is the variance of the scalar zero-mean white-noise process \{w_k\}. The autocovariance sequence of an exponentially correlated random process in [...]

[...] \tau_t is the correlation time for the time-varying oscillator damping time constant, and \tau_\omega is the correlation time for the time-varying resonant frequency of the oscillator. If only the in-phase component of the oscillator output can be sensed, then the measurement sensitivity matrix will have the form

    H = [\, 1 \quad 0 \quad 0 \quad 0 \,].

Figure 7.9 is a sample output of the MATLAB m-file osc_ekf.m on the accompanying diskette, [...]

[...] steady-state covariance matrix. However, the covariance of uncertainty in the estimated system state vector can still converge to a finite steady-state value, even if the process itself is unstable. Methods for determining whether estimation uncertainties will diverge are presented in Chapter 8.

7.5.1.3 Exponentially Correlated Noise. Exponentially correlated random processes have finite, constant steady-state [...]
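The steady-state variance of Eq. 7.77 can be checked empirically by simulating a scalar exponentially correlated process x_k = e^{-Δt/τ} x_{k-1} + w_{k-1}. The parameter values below are arbitrary, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau, Q = 0.1, 2.0, 0.05
phi = np.exp(-dt / tau)                     # per-step decay factor

# Predicted steady-state variance from Eq. 7.77
sigma2_pred = Q / (1.0 - np.exp(-2.0 * dt / tau))

# Simulate the process and compare its sample variance to the prediction
x, samples = 0.0, []
for k in range(200_000):
    x = phi * x + rng.normal(scale=np.sqrt(Q))
    if k > 1000:                            # discard the start-up transient
        samples.append(x)

rel_err = abs(np.var(samples) / sigma2_pred - 1.0)   # a few percent
```

Because successive samples are correlated over roughly \tau/\Delta t steps, the sample variance converges more slowly than for white noise, but it settles near the predicted value.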
[...] of Example 7.4 has no time-dependent terms in its coefficient matrix (Eq. 7.53), making it a time-invariant model with state transition matrix

    \Phi = \exp(\Delta t \, F)

        = e^{-\Delta t/\tau} \begin{bmatrix} \cos(\omega\,\Delta t) + \dfrac{\sin(\omega\,\Delta t)}{\omega\tau} & \dfrac{\sin(\omega\,\Delta t)}{\omega} \\[1ex] -\dfrac{(1 + \omega^2\tau^2)\,\sin(\omega\,\Delta t)}{\omega\tau^2} & \cos(\omega\,\Delta t) - \dfrac{\sin(\omega\,\Delta t)}{\omega\tau} \end{bmatrix},    (7.62)

where \omega = \omega_{\text{resonant}}, the resonant frequency; \tau = \tau_{\text{damping}}, the damping time constant; and \Delta t = the discrete-time step. The eigenvalues [...]

[...] individual GPS satellite time references. Its theoretical autocorrelation function is plotted in Fig. 7.6 along with an exponential correlation curve. The two are scaled to coincide at the autocorrelation coefficient value of 1/e \approx 0.36787944..., the argument at which correlation time is defined. Unlike exponentially correlated noise, this source has greater short-term correlation and less long-term correlation [...]

[...] varying error sources in GPS/INS integration. Slow variables may also include many of the calibration parameters of the inertial sensors, which can be responding to temperature variations or other unknown but slowly changing influences. Like SA errors, the slow variations of these variables can often be tracked and compensated by combining the INS navigation estimates with the GPS-derived estimates [...]

[...] + G_{wx,k-1} \, w_{x,k-1} + D_{xx,k-1} \, \xi_{k-1},    (7.80)

where
    w_{k-1} = zero-mean white (i.e., uncorrelated) disturbance noise,
    G_{wx,k-1} = white-noise distribution matrix,
    \xi_{k-1} = zero-mean correlated disturbance noise,
    D_{xx,k-1} = correlated noise distribution matrix.

Fig. 7.7 Spectral properties of some common noise types.

If the correlated dynamic disturbance noise can be modeled as yet another [...]
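The closed form of Eq. 7.62 can be cross-checked against a numerically computed matrix exponential of \Delta t F, using the damped-oscillator coefficient matrix F whose eigenvalues are -1/\tau \pm i\omega. The parameter values are illustrative, and `expm_taylor` is a quick stand-in for a library matrix exponential:

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via Taylor series (adequate for small-norm matrices)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Illustrative damped-oscillator parameters (not from the book)
omega, tau, dt = 2.0 * np.pi, 5.0, 0.1

# Continuous-time coefficient matrix with eigenvalues -1/tau +/- i*omega
F = np.array([[0.0, 1.0],
              [-(omega**2 + 1.0 / tau**2), -2.0 / tau]])

# Closed-form state transition matrix of Eq. 7.62
s, c = np.sin(omega * dt), np.cos(omega * dt)
Phi_closed = np.exp(-dt / tau) * np.array(
    [[c + s / (omega * tau), s / omega],
     [-(1.0 + omega**2 * tau**2) * s / (omega * tau**2),
      c - s / (omega * tau)]])

Phi_numeric = expm_taylor(dt * F)
err = np.max(np.abs(Phi_numeric - Phi_closed))   # near machine precision
```

In practice one would use a library routine such as SciPy's `expm` instead of the Taylor sketch; the point is only that the closed form and exp(\Delta t F) agree.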
