Discrete Time Systems, Part 3

Distributed Fusion Prediction for Mixed Continuous-Discrete Linear Systems

Conclusions

In this chapter, two fusion predictors (FLP and PFF) for mixed continuous-discrete linear systems in a multisensor environment are proposed. Both of these predictors are derived by using the optimal local Kalman estimators (filters and predictors) and the fusion formula. The fusion predictors represent the optimal linear combination of an arbitrary number of local Kalman estimators, and each is fused by the MSE criterion. Equivalence between the two fusion predictors is established. However, the PFF algorithm is found to reduce the computational complexity more significantly, because the PFF's weights $b_{t_k}^{(i)}$ do not depend on the lead $\Delta > 0$, in contrast to the FLP's weights $a_{t+\Delta}^{(i)}$.

Appendix

Proof of Theorem 1.
(a), (c) Equation (12) and formula (14) immediately follow as a result of application of the general fusion formula [20] to the optimization problem (10), (11).
(b) In the absence of observations, the differential equation for the local prediction error $\tilde{x}_\tau^{(i)} = x_\tau - \hat{x}_\tau^{(i)}$ takes the form

$$\dot{\tilde{x}}_\tau^{(i)} = \dot{x}_\tau - \dot{\hat{x}}_\tau^{(i)} = F_\tau \tilde{x}_\tau^{(i)} + G_\tau v_\tau. \quad (A.1)$$

Then the prediction cross-covariance $P_\tau^{(ij)} = E\bigl(\tilde{x}_\tau^{(i)} \tilde{x}_\tau^{(j)T}\bigr)$ associated with $\tilde{x}_\tau^{(i)}$ and $\tilde{x}_\tau^{(j)}$ satisfies the time-update Lyapunov equation (see the first and third equations in (13)). At $t = t_k$ the local error $\tilde{x}_{t_k}^{(i)}$ can be written as

$$\tilde{x}_{t_k}^{(i)} = x_{t_k} - \hat{x}_{t_k}^{(i)} = x_{t_k} - \hat{x}_{t_k}^{(i)-} - L_{t_k}^{(i)}\bigl[y_{t_k}^{(i)} - H_{t_k}^{(i)} \hat{x}_{t_k}^{(i)-}\bigr] = \tilde{x}_{t_k}^{(i)-} - L_{t_k}^{(i)}\bigl[H_{t_k}^{(i)} x_{t_k} + w_{t_k}^{(i)} - H_{t_k}^{(i)} \hat{x}_{t_k}^{(i)-}\bigr] = \bigl(I_n - L_{t_k}^{(i)} H_{t_k}^{(i)}\bigr)\tilde{x}_{t_k}^{(i)-} - L_{t_k}^{(i)} w_{t_k}^{(i)}. \quad (A.2)$$

Given that the random vectors $w_{t_k}^{(i)}$ and $w_{t_k}^{(j)}$ are mutually uncorrelated for $i \neq j$, we obtain the observation update equation (13) for $P_{t_k}^{(ij)} = E\bigl(\tilde{x}_{t_k}^{(i)} \tilde{x}_{t_k}^{(j)T}\bigr)$. This completes the proof of Theorem 1.

Proof of Theorem 2.
It is well known that the local Kalman filtering estimates $\hat{x}_\tau^{(i)}$ are unbiased, i.e., $E(\hat{x}_\tau^{(i)}) = E(x_\tau)$, or $E(\tilde{x}_\tau^{(i)}) = E(x_\tau - \hat{x}_\tau^{(i)}) = 0$ for $0 \le \tau \le t_k$. With this result we can prove the unbiased property for $t_k < \tau \le t+\Delta$. Using (8) we obtain

$$\dot{\tilde{x}}_\tau^{(i)} = \dot{x}_\tau - \dot{\hat{x}}_\tau^{(i)} = F_\tau \tilde{x}_\tau^{(i)} + G_\tau v_\tau, \qquad \tilde{x}_\tau^{(i)}\big|_{\tau = t_k} = \tilde{x}_{t_k}^{(i)}, \qquad t_k \le \tau \le t+\Delta, \quad (A.3)$$

or

$$\frac{d}{d\tau} E\bigl(\tilde{x}_\tau^{(i)}\bigr) = F_\tau E\bigl(\tilde{x}_\tau^{(i)}\bigr), \qquad E\bigl(\tilde{x}_\tau^{(i)}\bigr)\big|_{\tau = t_k} = E\bigl(\tilde{x}_{t_k}^{(i)}\bigr) = 0, \qquad t_k \le \tau \le t+\Delta. \quad (A.4)$$

Differential equation (A.4) is homogeneous with zero initial condition; therefore it has the zero solution $E(\tilde{x}_\tau^{(i)}) \equiv 0$, or $E(\hat{x}_\tau^{(i)}) = E(x_\tau)$, $t_k \le \tau \le t+\Delta$. Since the local predictors $\hat{x}_{t+\Delta}^{(i)}$, $i = 1, \dots, N$, are unbiased, we have

$$E\bigl(\hat{x}_{t+\Delta}^{FLP}\bigr) = \sum_{i=1}^{N} a_{t+\Delta}^{(i)} E\bigl(\hat{x}_{t+\Delta}^{(i)}\bigr) = \Bigl[\sum_{i=1}^{N} a_{t+\Delta}^{(i)}\Bigr] E(x_{t+\Delta}) = E(x_{t+\Delta}). \quad (A.5)$$

This completes the proof of Theorem 2.
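The mechanism behind (A.5) is easy to check numerically: if every local estimate is unbiased and the matrix weights sum to the identity, the fused estimate is unbiased as well. The following sketch is purely illustrative; the dimensions, weights and noise levels are made-up values, not quantities from the chapter.

```python
import numpy as np

# Hypothetical illustration of (A.5): unbiased local estimates combined with matrix
# weights that sum to the identity give an unbiased fused estimate.
rng = np.random.default_rng(0)
n, N, trials = 2, 3, 100_000

x_mean = np.array([1.0, -2.0])                               # E(x), arbitrary
weights = [np.diag([0.5, 0.2]), np.diag([0.3, 0.3]), np.diag([0.2, 0.5])]
assert np.allclose(sum(weights), np.eye(n))                  # weights sum to I_n

err = np.zeros(n)
for _ in range(trials):
    x = x_mean + rng.normal(size=n)                          # true state
    locals_ = [x + rng.normal(scale=0.5, size=n) for _ in range(N)]  # unbiased local estimates
    fused = sum(W @ xi for W, xi in zip(weights, locals_))
    err += fused - x
print("mean fusion error:", err / trials)                    # close to zero, as (A.5) predicts
```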
Proof of Theorem 3.
(a), (c) Equations (18) and (19) immediately follow from the general fusion formula for the filtering problem (Shin et al., 2006).
(b) The derivation of the observation update equation (13) is given in Theorem 1.
(d) The unbiased property of the fusion estimate $\hat{x}_{t+\Delta}^{PFF}$ is proved by using the same method as in Theorem 2.
This completes the proof of Theorem 3.

Proof of Theorem 4.
By integrating (8) and (17), we get

$$\hat{x}_{t+\Delta}^{(i)} = \Phi(t+\Delta, t_k)\, \hat{x}_{t_k}^{(i)}, \quad i = 1, \dots, N, \qquad \hat{x}_{t+\Delta}^{PFF} = \Phi(t+\Delta, t_k)\, \hat{x}_{t_k}^{FF}, \quad (A.6)$$

where $\Phi(t,s)$ is the transition matrix of (8) or (17). From (10) and (16), we obtain

$$\hat{x}_{t+\Delta}^{FLP} = \sum_{i=1}^{N} a_{t+\Delta}^{(i)} \hat{x}_{t+\Delta}^{(i)} = \sum_{i=1}^{N} a_{t+\Delta}^{(i)} \Phi(t+\Delta, t_k)\, \hat{x}_{t_k}^{(i)} = \sum_{i=1}^{N} A_{t,t_k,\Delta}^{(i)}\, \hat{x}_{t_k}^{(i)},$$
$$\hat{x}_{t+\Delta}^{PFF} = \Phi(t+\Delta, t_k)\, \hat{x}_{t_k}^{FF} = \sum_{i=1}^{N} \Phi(t+\Delta, t_k)\, b_{t_k}^{(i)} \hat{x}_{t_k}^{(i)} = \sum_{i=1}^{N} B_{t,t_k,\Delta}^{(i)}\, \hat{x}_{t_k}^{(i)}, \quad (A.7)$$

where the new weights take the form

$$A_{t,t_k,\Delta}^{(i)} = a_{t+\Delta}^{(i)}\, \Phi(t+\Delta, t_k), \qquad B_{t,t_k,\Delta}^{(i)} = \Phi(t+\Delta, t_k)\, b_{t_k}^{(i)}. \quad (A.8)$$

Next, using (12) and (18), we derive equations for the new weights (A.8). Multiplying the first $N-1$ homogeneous equations in (18) on the left-hand side and right-hand side by the nonsingular matrices $\Phi(t+\Delta,t_k)$ and $\Phi(t+\Delta,t_k)^T$, respectively, and multiplying the last nonhomogeneous equation in (18) by $\Phi(t+\Delta,t_k)$, we obtain

$$\sum_{i=1}^{N} \Phi(t+\Delta,t_k)\, b_{t_k}^{(i)} \bigl[P_{t_k}^{(ij)} - P_{t_k}^{(iN)}\bigr] \Phi(t+\Delta,t_k)^T = 0, \quad j = 1,\dots,N-1; \qquad \sum_{i=1}^{N} \Phi(t+\Delta,t_k)\, b_{t_k}^{(i)} = \Phi(t+\Delta,t_k). \quad (A.9)$$

Using the notation $\delta P_s^{(ijN)} = P_s^{(ij)} - P_s^{(iN)}$ for the difference, we obtain the equations for $B_{t,t_k,\Delta}^{(i)}$, $i = 1,\dots,N$:

$$\sum_{i=1}^{N} B_{t,t_k,\Delta}^{(i)}\, \delta P_{t_k}^{(ijN)}\, \Phi(t+\Delta,t_k)^T = 0, \quad j = 1,\dots,N-1; \qquad \sum_{i=1}^{N} B_{t,t_k,\Delta}^{(i)} = \Phi(t+\Delta,t_k). \quad (A.10)$$

Analogously, after simple manipulations, equation (12) takes the form

$$\sum_{i=1}^{N} a_{t+\Delta}^{(i)} \Phi(t+\Delta,t_k)\, \Phi(t+\Delta,t_k)^{-1} \bigl[P_{t+\Delta}^{(ij)} - P_{t+\Delta}^{(iN)}\bigr] = \sum_{i=1}^{N} A_{t,t_k,\Delta}^{(i)}\, \Phi(t+\Delta,t_k)^{-1}\, \delta P_{t+\Delta}^{(ijN)} = 0,$$
$$\sum_{i=1}^{N} a_{t+\Delta}^{(i)} \Phi(t+\Delta,t_k) = \sum_{i=1}^{N} A_{t,t_k,\Delta}^{(i)} = \Phi(t+\Delta,t_k), \quad (A.11)$$

or

$$\sum_{i=1}^{N} A_{t,t_k,\Delta}^{(i)}\, \Phi(t+\Delta,t_k)^{-1}\, \delta P_{t+\Delta}^{(ijN)} = 0, \quad j = 1,\dots,N-1; \qquad \sum_{i=1}^{N} A_{t,t_k,\Delta}^{(i)} = \Phi(t+\Delta,t_k). \quad (A.12)$$

As we can see from (A.10) and (A.12), if the equality

$$\delta P_{t_k}^{(ijN)}\, \Phi(t+\Delta,t_k)^T = \Phi(t+\Delta,t_k)^{-1}\, \delta P_{t+\Delta}^{(ijN)} \quad (A.13)$$

holds, then the new weights $A_{t,t_k,\Delta}^{(i)}$ and $B_{t,t_k,\Delta}^{(i)}$ satisfy identical equations. To show this, consider the differential equation for the difference $\delta P_s^{(ijN)} = P_s^{(ij)} - P_s^{(iN)}$. Using (13) we obtain the homogeneous Lyapunov matrix differential equation

$$\delta\dot{P}_s^{(ijN)} = \dot{P}_s^{(ij)} - \dot{P}_s^{(iN)} = F_s\bigl(P_s^{(ij)} - P_s^{(iN)}\bigr) + \bigl(P_s^{(ij)} - P_s^{(iN)}\bigr) F_s^T = F_s\, \delta P_s^{(ijN)} + \delta P_s^{(ijN)} F_s^T, \quad t_k \le s \le t+\Delta, \quad (A.14)$$

which has the solution

$$\delta P_{t+\Delta}^{(ijN)} = \Phi(t+\Delta,t_k)\, \delta P_{t_k}^{(ijN)}\, \Phi(t+\Delta,t_k)^T. \quad (A.15)$$

By the nonsingularity of the transition matrix $\Phi(t+\Delta,t_k)$, the equality (A.13) holds; then $A_{t,t_k,\Delta}^{(i)} = B_{t,t_k,\Delta}^{(i)}$, and finally, using (A.7), we get

$$\hat{x}_{t+\Delta}^{FLP} = \sum_{i=1}^{N} A_{t,t_k,\Delta}^{(i)}\, \hat{x}_{t_k}^{(i)} = \sum_{i=1}^{N} B_{t,t_k,\Delta}^{(i)}\, \hat{x}_{t_k}^{(i)} = \hat{x}_{t+\Delta}^{PFF}. \quad (A.16)$$

This completes the proof of Theorem 4.
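The key step (A.14)-(A.15), that the homogeneous Lyapunov equation is solved by sandwiching the initial value between the transition matrix and its transpose, can be verified numerically. The sketch below is illustrative only; the matrix $F$, the initial difference and the lead are arbitrary assumed values (a time-invariant $F$ is used so that $\Phi$ is a matrix exponential).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative check of (A.14)-(A.15) for a constant F, so Phi(t+Delta, t_k) = expm(F*Delta).
F = np.array([[0.0, 1.0], [-2.0, -0.3]])     # assumed system matrix
dP0 = np.array([[0.4, 0.1], [0.1, 0.2]])     # assumed delta P at t_k
Delta, steps = 1.5, 20_000                   # lead and Euler steps
dt = Delta / steps

dP = dP0.copy()
for _ in range(steps):                        # Euler integration of dP' = F dP + dP F^T
    dP = dP + dt * (F @ dP + dP @ F.T)

Phi = expm(F * Delta)                         # transition matrix over the lead
print(np.allclose(dP, Phi @ dP0 @ Phi.T, atol=1e-3))   # True: matches (A.15)
```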
References

Alouani, A. T. & Gray, J. E. (2005). Theory of distributed estimation using multiple asynchronous sensors, IEEE Transactions on Aerospace and Electronic Systems, Vol. 41, No. 2, pp. 717-722.
Bar-Shalom, Y. & Campo, L. (1986). The effect of the common process noise on the two-sensor fused-track covariance, IEEE Transactions on Aerospace and Electronic Systems, Vol. 22, No. 6, pp. 803-805.
Bar-Shalom, Y. (1990). Multitarget-Multisensor Tracking: Advanced Applications, Artech House, Norwood, MA.
Bar-Shalom, Y. & Li, X. R. (1995). Multitarget-Multisensor Tracking: Principles and Techniques, YBS Publishing.
Bar-Shalom, Y. (2006). On hierarchical tracking for the real world, IEEE Transactions on Aerospace and Electronic Systems, Vol. 42, No. 3, pp. 846-850.
Berg, T. M. & Durrant-Whyte, H. F. (1994). General decentralized Kalman filter, Proceedings of the American Control Conference, pp. 2273-2274, Maryland.
Chang, K. C.; Saha, R. K. & Bar-Shalom, Y. (1997). On optimal track-to-track fusion, IEEE Transactions on Aerospace and Electronic Systems, Vol. 33, No. 4, pp. 1271-1275.
Chang, K. C.; Tian, Z. & Saha, R. K. (2002). Performance evaluation of track fusion with information matrix filter, IEEE Transactions on Aerospace and Electronic Systems, Vol. 38, No. 2, pp. 455-466.
Deng, Z. L.; Gao, Y.; Mao, L. & Hao, G. (2005). New approach to information fusion steady-state Kalman filtering, Automatica, Vol. 41, No. 10, pp. 1695-1707.
Gelb, A. (1974). Applied Optimal Estimation, MIT Press, Cambridge, MA.
Hall, D. L. (1992). Mathematical Techniques in Multisensor Data Fusion, Artech House, London.
Hashemipour, H. R.; Roy, S. & Laub, A. J. (1988). Decentralized structures for parallel Kalman filtering, IEEE Transactions on Automatic Control, Vol. 33, No. 1, pp. 88-94.
Jannerup, O. E. & Hendricks, E. (2006). Linear Control System Design, Technical University of Denmark.
Lee, S. H. & Shin, V. (2007). Fusion filters weighted by scalars and matrices for linear systems, World Academy of Science, Engineering and Technology, Vol. 34, pp. 88-93.
Lewis, F. L. (1986). Optimal Estimation with an Introduction to Stochastic Control Theory, John Wiley & Sons, New York.
Li, X. R.; Zhu, Y. M.; Wang, J. & Han, C. (2003). Optimal linear estimation fusion, Part I: Unified fusion rules, IEEE Transactions on Information Theory, Vol. 49, No. 9, pp. 2192-2208.
Ren, C. L. & Kay, M. G. (1989). Multisensor integration and fusion in intelligent systems, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 5, pp. 901-931.
Roecker, J. A. & McGillem, C. D. (1988). Comparison of two-sensor tracking methods based on state vector fusion and measurement fusion, IEEE Transactions on Aerospace and Electronic Systems, Vol. 24, No. 4, pp. 447-449.
Shin, V.; Lee, Y. & Choi, T. (2006). Generalized Millman's formula and its applications for estimation problems, Signal Processing, Vol. 86, No. 2, pp. 257-266.
Shin, V.; Shevlyakov, G. & Kim, K. S. (2007). A new fusion formula and its application to continuous-time linear systems with multisensor environment, Computational Statistics & Data Analysis, Vol. 52, No. 2, pp. 840-854.
Song, H. R.; Joen, M. G.; Choi, T. S. & Shin, V. (2009). Two fusion predictors for discrete-time linear systems with different types of observations, International Journal of Control, Automation, and Systems, Vol. 7, No. 4, pp. 651-658.
Sun, S. L. (2004). Multi-sensor optimal information fusion Kalman filters with applications, Aerospace Science and Technology, Vol. 8, No. 1, pp. 57-62.
Sun, S. L. & Deng, Z. L. (2005). Multi-sensor information fusion Kalman filter weighted by scalars for systems with colored measurement noises, Journal of Dynamic Systems, Measurement and Control, Vol. 127, No. 4, pp. 663-667.
Zhou, J.; Zhu, Y.; You, Z. & Song, E. (2006). An efficient algorithm for optimal linear estimation fusion in distributed multisensor systems, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 36, No. 5, pp. 1000-1009.
Zhu, Y. M. & Li, X. R. (1999). Best linear unbiased estimation fusion, Proceedings of the International Conference on Multisource-Multisensor Information Fusion, Sunnyvale, CA, pp. 1054-1061.
Zhu, Y. M.; You, Z.; Zhao, J.; Zhang, K. & Li, X. R. (2001). The optimality for the distributed Kalman filtering fusion with feedback, Automatica, Vol. 37, No. 9, pp. 1489-1493.
Zhu, Y. M. (2002). Multisensor Decision and Estimation Fusion, Kluwer Academic, Boston.


New Smoothers for Discrete-time Linear Stochastic Systems with Unknown Disturbances

Akio Tanikawa
Osaka Institute of Technology, Japan

1. Introduction

We consider discrete-time linear stochastic systems with unknown inputs (or disturbances) and propose recursive algorithms for estimating the states of these systems. If the mathematical models derived by engineers are very accurate representations of the real systems, we do not have to consider systems with unknown inputs. In practice, however, the models derived by engineers often contain modelling errors which greatly increase the state estimation errors, as if the models had unknown disturbances.
The most frequently discussed problem in state estimation is the optimal filtering problem, which investigates the optimal estimate of the state $x_t$ at time $t$ (or $x_{t+1}$ at time $t+1$) with minimum variance based on the observation $Y_t$ of the outputs $\{y_0, y_1, \dots, y_t\}$, i.e., $Y_t = \sigma\{y_s,\ s = 0, 1, \dots, t\}$, the smallest $\sigma$-field generated by $\{y_0, y_1, \dots, y_t\}$ (see e.g. Katayama (2000), Chapter 4). It is well known that the standard Kalman filter is the optimal linear filter in the sense that it minimizes the mean-square error in an appropriate class of linear filters (see e.g. Kailath (1974), Kailath (1976), Kalman (1960), Kalman (1963) and Katayama (2000)). But we note that the Kalman filter can work well only if we have an accurate mathematical model of the monitored system. In order to develop reliable filtering algorithms which are robust with respect to unknown disturbances and modelling errors, many research papers have been published based on the disturbance decoupling principle. Pioneering works were done by Darouach et al. (Darouach; Zasadzinski; Bassang & Nowakowski (1995) and Darouach; Zasadzinski & Keller (1992)), Chang and Hsu (Chang & Hsu (1993)), and Hou and Müller (Hou & Müller (1993)). They utilized transformations to turn the original systems with unknown inputs into singular systems without unknown inputs. The most important preceding study related to this paper was done by Chen and Patton (Chen & Patton (1996)). They proposed the simple and useful optimal filtering algorithm ODDO (Optimal Disturbance Decoupling Observer) and showed its excellent simulation results. See also the papers Caliskan; Mukai; Katz & Tanikawa (2003), Hou & Müller (1994), Hou & Patton (1998) and Sawada & Tanikawa (2002), and the book Chen & Patton (1999). Their algorithm has recently been modified by the author in Tanikawa (2006) (see also Tanikawa & Sawada (2003)).

We here consider smoothing problems, which allow time-lags for computing estimates of the states. Namely, we try to find the optimal estimate $\hat{x}_{t-L/t}$ of the state $x_{t-L}$ based on the observation $Y_t$ with $L > 0$. We often classify smoothing problems into the following three types. For the first problem, the fixed-point smoothing, we investigate the optimal estimate $\hat{x}_{k/t}$ of the state $x_k$ for a fixed $k$ based on the observations $\{Y_t,\ t = k+1, k+2, \dots\}$. Algorithms for computing $\hat{x}_{k/t}$, $t = k+1, k+2, \dots$, recursively are called fixed-point smoothers. For the second problem, the fixed-interval smoothing, we investigate the optimal estimate $\hat{x}_{t/N}$ of the state $x_t$ at all times $t = 0, 1, \dots, N$ based on the observation $Y_N$ of all the outputs $\{y_0, y_1, \dots, y_N\}$. Fixed-interval smoothers are algorithms for computing $\hat{x}_{t/N}$, $t = 0, 1, \dots, N$, recursively. The third problem, the fixed-lag smoothing, is to investigate the optimal estimate $\hat{x}_{t-L/t}$ of the state $x_{t-L}$ based on the observation $Y_t$ for a given $L \ge 1$. Fixed-lag smoothers are algorithms for computing $\hat{x}_{t-L/t}$, $t = L+1, L+2, \dots$, recursively. See references such as Anderson & Moore (1979), Bryson & Ho (1969), Kailath (1975) and Meditch (1973) for early research works on smoothers. More recent papers have been published based on different approaches such as stochastic realization theory (e.g., Badawi; Lindquist & Pavon (1979) and Faurre; Clerget & Germain (1979)), the complementary models (e.g., Ackner & Kailath (1989a), Ackner & Kailath (1989b), Bello; Willsky & Levy (1989), Bello; Willsky; Levy & Castanon (1986), Desai; Weinert & Yasypchuk (1983) and Weinert & Desai (1981)) and others. Nice surveys can be found in Kailath; Sayed & Hassibi (2000) and Katayama (2000).
When stochastic systems contain unknown inputs explicitly, Tanikawa (Tanikawa (2006)) obtained a fixed-point smoother for the first problem. The second and the third problems were discussed in Tanikawa (2008). In this chapter, all three problems are discussed in a comprehensive and self-contained manner as much as possible. Namely, after some preliminary results in Section 2, we derive in Section 3 the fixed-point smoothing algorithm given in Tanikawa (2006) for the system with unknown inputs explicitly, by applying the optimal filter with disturbance decoupling property obtained in Tanikawa & Sawada (2003). In Section 4, we construct the fixed-interval smoother given in Tanikawa (2008) from the fixed-point smoother obtained in Section 3. In Section 5, we construct the fixed-lag smoother given in Tanikawa (2008) from the optimal filter in Tanikawa & Sawada (2003).

Finally, the new features and advantages of the obtained results are summarized here. To the best of our knowledge, no attempt had been made to investigate optimal fixed-interval and fixed-lag smoothers for systems with unknown inputs explicitly (see the stochastic system given by (1)-(2)) before Tanikawa (2006) and Tanikawa (2008). Our smoothing algorithms have recursive forms similar to the standard optimal filter (i.e., the Kalman filter) and smoothers. Moreover, our algorithms reduce to the known smoothers derived from the Kalman filter (see e.g. Katayama (2000)) when the unknown inputs disappear. Thus, our algorithms are consistent with the known smoothing algorithms for systems without unknown inputs.

2. Preliminaries

Consider the following discrete-time linear stochastic system for $t = 0, 1, 2, \dots$:

$$x_{t+1} = A_t x_t + B_t u_t + E_t d_t + \zeta_t, \quad (1)$$
$$y_t = C_t x_t + \eta_t, \quad (2)$$

where $x_t \in \mathbb{R}^n$ is the state vector, $y_t \in \mathbb{R}^m$ the output vector, $u_t \in \mathbb{R}^r$ the known input vector, and $d_t \in \mathbb{R}^q$ the unknown input vector. Suppose that $\zeta_t$ and $\eta_t$ are independent zero-mean white noise sequences with covariance matrices $Q_t$ and $R_t$. Let $A_t$, $B_t$, $C_t$ and $E_t$ be known matrices with appropriate dimensions.

In Tanikawa & Sawada (2003), we considered the optimal estimate $\hat{x}_{t+1/t+1}$ of the state $x_{t+1}$ which was proposed by Chen and Patton (Chen & Patton (1996) and Chen & Patton (1999)) with the following structure:

$$z_{t+1} = F_{t+1} z_t + T_{t+1} B_t u_t + K_{t+1} y_t, \quad (3)$$
$$\hat{x}_{t+1/t+1} = z_{t+1} + H_{t+1} y_{t+1}, \quad (4)$$

for $t = 0, 1, 2, \dots$. Here, $\hat{x}_{0/0}$ is chosen to be $z_0$ for a fixed $z_0$. Denote the state estimation error and its covariance matrix respectively by $e_t$ and $P_t$; namely, $e_t = x_t - \hat{x}_{t/t}$ and $P_t = E\{e_t e_t^T\}$ for $t = 0, 1, 2, \dots$, where $E$ denotes expectation and $T$ denotes transposition of a matrix. We assume in this paper that the random variables $e_0$, $\{\eta_t\}$, $\{\zeta_t\}$ are independent. As in Chen & Patton (1996), Chen & Patton (1999) and Tanikawa & Sawada (2003), we consider state estimate (3)-(4) with the matrices $F_{t+1}$, $T_{t+1}$, $H_{t+1}$ and $K_{t+1}$ of the forms:

$$K_{t+1} = K^1_{t+1} + K^2_{t+1}, \quad (5)$$
$$E_t = H_{t+1} C_{t+1} E_t, \quad (6)$$
$$T_{t+1} = I - H_{t+1} C_{t+1}, \quad (7)$$
$$F_{t+1} = A_t - H_{t+1} C_{t+1} A_t - K^1_{t+1} C_t, \quad (8)$$
$$K^2_{t+1} = F_{t+1} H_t. \quad (9)$$

The next lemma on equality (6) was obtained and used by Chen and Patton (Chen & Patton (1996) and Chen & Patton (1999)). Before stating it, we assume that $E_k$ is a full column rank matrix. Notice that this assumption is not an essential restriction.

Lemma 2.1. Equality (6) holds if and only if

$$\operatorname{rank}(C_{t+1} E_t) = \operatorname{rank}(E_t). \quad (10)$$
When this condition holds true, a matrix $H_{t+1}$ which satisfies (6) must have the form

$$H_{t+1} = E_t \bigl[(C_{t+1} E_t)^T (C_{t+1} E_t)\bigr]^{-1} (C_{t+1} E_t)^T. \quad (11)$$

Hence, we have

$$C_{t+1} H_{t+1} = C_{t+1} E_t \bigl[(C_{t+1} E_t)^T (C_{t+1} E_t)\bigr]^{-1} (C_{t+1} E_t)^T, \quad (12)$$

which is a non-negative definite symmetric matrix.

When the matrix $K^1_{t+1}$ has the form

$$K^1_{t+1} = A^1_{t+1} \bigl(P_t C_t^T - H_t R_t\bigr) \bigl(C_t P_t C_t^T + R_t\bigr)^{-1}, \quad (13)$$
$$A^1_{t+1} = A_t - H_{t+1} C_{t+1} A_t, \quad (14)$$

we obtained the following result (Theorem 2.7 in Tanikawa & Sawada (2003)) on the optimal filtering algorithm.

Proposition 2.2. If $C_t H_t$ and $R_t$ are commutative, i.e.,

$$C_t H_t R_t = R_t C_t H_t, \quad (15)$$

then the optimal gain matrix $K^1_{t+1}$ which minimizes the variance of the state estimation error $e_{t+1}$ is determined by (13). Hence, we obtain the optimal filtering algorithm:

$$\hat{x}_{t+1/t+1} = A^1_{t+1} \bigl\{ \hat{x}_{t/t} + G_t \bigl(y_t - C_t \hat{x}_{t/t}\bigr) \bigr\} + H_{t+1} y_{t+1} + T_{t+1} B_t u_t, \quad (16)$$
$$P_{t+1} = A^1_{t+1} M_t \bigl(A^1_{t+1}\bigr)^T + T_{t+1} Q_t T_{t+1}^T + H_{t+1} R_{t+1} H_{t+1}^T, \quad (17)$$

where

$$G_t = \bigl(P_t C_t^T - H_t R_t\bigr) \bigl(C_t P_t C_t^T + R_t\bigr)^{-1}, \quad (18)$$

and

$$M_t = P_t - G_t \bigl(C_t P_t - R_t H_t^T\bigr). \quad (19)$$

Remark 2.3. If the matrix $R_t$ has the form $R_t = r_t I$ with some positive number $r_t$ for each $t = 1, 2, \dots$, then it is obvious that condition (15) holds.

Finally, we have the following proposition, which indicates that the standard Kalman filter is a special case of the optimal filter proposed in this section (see e.g. Theorem 5.2, page 90, in Katayama (2000)).

Proposition 2.4. Suppose that $E_t \equiv O$ holds for all $t$ (i.e., the unknown input term is zero). Then Lemma 2.1 cannot be applied directly. But we can choose $H_t \equiv O$ for all $t$ in this case, and the optimal filter given in Proposition 2.2 reduces to the standard Kalman filter.
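To make the recursion of Proposition 2.2 concrete, here is a minimal sketch of one filter step implementing (11), (14) and (16)-(19). It is not the author's code; the time-invariant matrices below are made-up example values (chosen so that (10) holds and, via a scalar $R$, condition (15) holds as in Remark 2.3), and the function name is ours.

```python
import numpy as np

# Illustrative one-step implementation of the filter of Proposition 2.2.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
E = np.array([[1.0], [0.0]])          # unknown-input direction; rank(C E) = rank(E)
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])                # scalar R, so (15) holds (Remark 2.3)

def filter_step(x_hat, P, u, y, y_next):
    """One step of (16)-(19): returns (x_hat_{t+1/t+1}, P_{t+1})."""
    CE = C @ E
    H = E @ np.linalg.inv(CE.T @ CE) @ CE.T        # (11), needs rank(CE) = rank(E)
    T = np.eye(2) - H @ C                          # (7)
    A1 = A - H @ C @ A                             # (14)
    S = C @ P @ C.T + R
    G = (P @ C.T - H @ R) @ np.linalg.inv(S)       # (18)
    M = P - G @ (C @ P - R @ H.T)                  # (19)
    x_new = A1 @ (x_hat + G @ (y - C @ x_hat)) + H @ y_next + T @ B @ u   # (16)
    P_new = A1 @ M @ A1.T + T @ Q @ T.T + H @ R @ H.T                     # (17)
    return x_new, P_new
```

With $E \equiv O$ one would set $H \equiv O$ here, and the same code collapses to the standard Kalman filter, in line with Proposition 2.4.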
3. The fixed-point smoothing

Let $k$ be a fixed time. We study an iterative algorithm to compute the optimal estimate $\hat{x}_{k/t}$ of the state $x_k$ based on the observation $Y_t$, $t = k+1, k+2, \dots$, with $Y_t = \sigma\{y_s,\ s = 0, 1, \dots, t\}$. We define state vectors $\theta_t$, $t = k, k+1, \dots$, by

$$\theta_{t+1} = \theta_t, \quad t = k, k+1, \dots; \qquad \theta_k = x_k. \quad (20)$$

It is easy to observe that the optimal estimate $\hat{\theta}_{t/t}$ of the state $\theta_t$ based on the observation $Y_t$ is identical to the optimal smoother $\hat{x}_{k/t}$, in view of the equalities $\theta_t = x_k$, $t = k, k+1, \dots$. In order to derive the optimal fixed-point smoother, we consider the following augmented system for $t = k, k+1, \dots$:

$$\begin{bmatrix} x_{t+1} \\ \theta_{t+1} \end{bmatrix} = \begin{bmatrix} A_t & O \\ O & I \end{bmatrix} \begin{bmatrix} x_t \\ \theta_t \end{bmatrix} + \begin{bmatrix} B_t \\ O \end{bmatrix} u_t + \begin{bmatrix} E_t \\ O \end{bmatrix} d_t + \begin{bmatrix} I \\ O \end{bmatrix} \zeta_t, \quad (21)$$

$$y_{t+1} = [\,C_{t+1} \;\; O\,] \begin{bmatrix} x_{t+1} \\ \theta_{t+1} \end{bmatrix} + \eta_{t+1}. \quad (22)$$

Denote these equations respectively by

$$\bar{x}_{t+1} = \bar{A}_t \bar{x}_t + \bar{B}_t u_t + \bar{E}_t d_t + J_t \zeta_t, \quad (23)$$
$$y_{t+1} = \bar{C}_{t+1} \bar{x}_{t+1} + \eta_{t+1}, \quad (24)$$

where

$$\bar{x}_t = \begin{bmatrix} x_t \\ \theta_t \end{bmatrix}, \quad \bar{A}_t = \begin{bmatrix} A_t & O \\ O & I \end{bmatrix}, \quad \bar{B}_t = \begin{bmatrix} B_t \\ O \end{bmatrix}, \quad \bar{E}_t = \begin{bmatrix} E_t \\ O \end{bmatrix}, \quad J_t = \begin{bmatrix} I \\ O \end{bmatrix}, \quad \bar{C}_{t+1} = [\,C_{t+1} \;\; O\,].$$

Here, $I$ and $O$ are the identity matrix and the zero matrix, respectively, with appropriate dimensions. By making use of the notations

$$\bar{H}_{t+1} = \begin{bmatrix} H_{t+1} \\ O \end{bmatrix}, \qquad \bar{T}_{t+1} = I - \bar{H}_{t+1} \bar{C}_{t+1},$$

we have the equalities

$$\bar{C}_{t+1} \bar{E}_t = C_{t+1} E_t, \qquad \bar{T}_{t+1} = \begin{bmatrix} T_{t+1} & O \\ O & I \end{bmatrix}, \qquad \bar{A}^1_{t+1} = \bar{T}_{t+1} \bar{A}_t = \begin{bmatrix} A^1_{t+1} & O \\ O & I \end{bmatrix}.$$

We introduce the covariance matrix $\bar{P}_t$ of the state estimation error of the augmented system (23)-(24):

$$\bar{P}_t = E\left\{ \begin{bmatrix} x_t - \hat{x}_{t/t} \\ \theta_t - \hat{\theta}_{t/t} \end{bmatrix} \begin{bmatrix} x_t - \hat{x}_{t/t} \\ \theta_t - \hat{\theta}_{t/t} \end{bmatrix}^T \right\} = \begin{bmatrix} P_t^{(1,1)} & P_t^{(1,2)} \\ P_t^{(2,1)} & P_t^{(2,2)} \end{bmatrix}. \quad (25)$$

Notice that $P_t^{(1,1)}$ is equal to $P_t$. Applying the optimal filter given in Proposition 2.2 to the augmented system (21)-(22), we obtain the following optimal fixed-point smoother.

Theorem 3.1. If $C_t H_t$ and $R_t$ are commutative, i.e.,

$$C_t H_t R_t = R_t C_t H_t, \quad (26)$$

then we have the optimal fixed-point smoother for (21)-(22) as follows:

(i) the fixed-point smoother

$$\hat{x}_{k/t+1} = \hat{x}_{k/t} + D_t(k)\,\bigl[\,y_t - C_t \hat{x}_{t/t}\,\bigr], \quad (27)$$

(ii) the gain matrix

$$D_t(k) = P_t^{(2,1)} C_t^T \bigl(C_t P_t C_t^T + R_t\bigr)^{-1}, \quad (28)$$

(iii) the covariance matrices of the mean-square error

$$P_{t+1}^{(2,1)} = \Bigl[ P_t^{(2,1)} - P_t^{(2,1)} C_t^T \bigl(C_t P_t C_t^T + R_t\bigr)^{-1} \bigl(C_t P_t - R_t H_t^T\bigr) \Bigr] \bigl(A^1_{t+1}\bigr)^T, \quad (29)$$

$$P_{t+1}^{(2,2)} = P_t^{(2,2)} - P_t^{(2,1)} C_t^T \bigl(C_t P_t C_t^T + R_t\bigr)^{-1} C_t \bigl(P_t^{(2,1)}\bigr)^T. \quad (30)$$

We notice that $\hat{x}_{t/t}$ is the optimal filter of the original system (1)-(2) given in Tanikawa & Sawada (2003). Here, we note that $P_k^{(2,1)} = P_k^{(2,2)} = P_k$.
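Before the proof, a minimal sketch of the recursion (27)-(30) is given below. It reuses the quantities produced by the filter step shown earlier; the function name and argument layout are ours, and the code is illustrative rather than the author's implementation.

```python
import numpy as np

# Illustrative fixed-point smoother update (27)-(30) for a fixed time k, assuming a
# time-invariant system; A1, H come from the filter step, x_hat = x_hat_{t/t}, P = P_t.
def fixed_point_smoother_step(x_k_smooth, P21, P22, x_hat, P, y, A1, C, R, H):
    S_inv = np.linalg.inv(C @ P @ C.T + R)
    D = P21 @ C.T @ S_inv                                  # (28)
    innov = y - C @ x_hat
    x_k_new = x_k_smooth + D @ innov                       # (27)
    P21_new = (P21 - P21 @ C.T @ S_inv @ (C @ P - R @ H.T)) @ A1.T   # (29)
    P22_new = P22 - P21 @ C.T @ S_inv @ C @ P21.T                    # (30)
    return x_k_new, P21_new, P22_new

# Usage: at t = k initialize P21 = P22 = P_k and x_k_smooth = x_hat_{k/k}; then
# alternate this update with the filter step, which supplies x_hat_{t/t} and P_t.
```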
Proof. Applying the optimal filter given by (16)-(17) in Proposition 2.2 to the augmented system (23)-(24), we have

$$\hat{\bar{x}}_{t+1/t+1} = \bar{A}^1_{t+1} \bigl[\hat{\bar{x}}_{t/t} + \bar{G}_t \bigl(y_t - \bar{C}_t \hat{\bar{x}}_{t/t}\bigr)\bigr] + \bar{H}_{t+1} y_{t+1} + \bar{T}_{t+1} \bar{B}_t u_t. \quad (31)$$

Written out in block form, this yields

$$\hat{x}_{t+1/t+1} = A^1_{t+1} \Bigl[ \hat{x}_{t/t} + \bigl(P_t^{(1,1)} C_t^T - H_t R_t\bigr) \bigl(C_t P_t C_t^T + R_t\bigr)^{-1} \bigl(y_t - C_t \hat{x}_{t/t}\bigr) \Bigr] + H_{t+1} y_{t+1} + T_{t+1} B_t u_t \quad (32)$$

and

$$\hat{\theta}_{t+1/t+1} = \hat{\theta}_{t/t} + P_t^{(2,1)} C_t^T \bigl(C_t P_t C_t^T + R_t\bigr)^{-1} \bigl(y_t - C_t \hat{x}_{t/t}\bigr). \quad (33)$$

Here, we used the equality

$$\bar{C}_t \bar{P}_t \bar{C}_t^T + R_t = [\,C_t \;\; O\,] \begin{bmatrix} P_t^{(1,1)} & P_t^{(1,2)} \\ P_t^{(2,1)} & P_t^{(2,2)} \end{bmatrix} \begin{bmatrix} C_t^T \\ O \end{bmatrix} + R_t = C_t P_t C_t^T + R_t. \quad (34)$$

[...]

5. The fixed-lag smoothing

For the fixed-lag smoothing problem, the state is augmented with its $L$ most recent past values; written analogously to (23)-(24), the resulting augmented system is denoted (71)-(72), where

$$\bar{x}_t = \begin{bmatrix} x_t \\ x_{t-1} \\ \vdots \\ x_{t-L} \end{bmatrix}, \quad \bar{A}_t = \begin{bmatrix} A_t & O & \cdots & O \\ I & O & \cdots & O \\ & \ddots & & \vdots \\ O & \cdots & I & O \end{bmatrix}, \quad \bar{B}_t = \begin{bmatrix} B_t \\ O \\ \vdots \\ O \end{bmatrix}, \quad \bar{E}_t = \begin{bmatrix} E_t \\ O \\ \vdots \\ O \end{bmatrix}, \quad J_t = \begin{bmatrix} I \\ O \\ \vdots \\ O \end{bmatrix}, \quad \bar{C}_{t+1} = [\,C_{t+1} \;\; O \;\cdots\; O\,].$$

Here, $I$ and $O$ are the identity matrix and the zero matrix, respectively, with appropriate dimensions. By making use of the notations

$$\bar{H}_{t+1} = \begin{bmatrix} H_{t+1} \\ O \\ \vdots \\ O \end{bmatrix}, \qquad \bar{T}_{t+1} = I - \bar{H}_{t+1} \bar{C}_{t+1},$$

we have the equalities

$$\bar{C}_{t+1} \bar{E}_t = C_{t+1} E_t, \qquad \bar{T}_{t+1} = \begin{bmatrix} T_{t+1} & O & \cdots & O \\ O & I & & \\ & & \ddots & \\ O & & & I \end{bmatrix}, \qquad \bar{A}^1_{t+1} = \bar{T}_{t+1} \bar{A}_t = \begin{bmatrix} A^1_{t+1} & O & \cdots & O \\ I & O & \cdots & O \\ & \ddots & & \vdots \\ O & \cdots & I & O \end{bmatrix}.$$

We introduce the covariance matrix $\bar{P}_t$ of the state estimation error of the augmented system (71)-(72):

$$\bar{P}_t = E\left\{ \begin{bmatrix} x_t - \hat{x}_{t/t} \\ x_{t-1} - \hat{x}_{t-1/t} \\ \vdots \\ x_{t-L} - \hat{x}_{t-L/t} \end{bmatrix} \begin{bmatrix} x_t - \hat{x}_{t/t} \\ x_{t-1} - \hat{x}_{t-1/t} \\ \vdots \\ x_{t-L} - \hat{x}_{t-L/t} \end{bmatrix}^T \right\}. \quad (73)$$

By using the notations

$$P_{t-i,t-j/t} = E\bigl\{ (x_{t-i} - \hat{x}_{t-i/t}) (x_{t-j} - \hat{x}_{t-j/t})^T \bigr\}, \qquad P_{t-i/t} = P_{t-i,t-i/t},$$

we can write

$$\bar{P}_t = \begin{bmatrix} P_{t/t} & P_{t,t-1/t} & \cdots & P_{t,t-L/t} \\ P_{t-1,t/t} & P_{t-1/t} & \cdots & P_{t-1,t-L/t} \\ \vdots & & & \vdots \\ P_{t-L,t/t} & P_{t-L,t-1/t} & \cdots & P_{t-L/t} \end{bmatrix}. \quad (74)$$

Here, it is easy to observe that $P_{t/t} = P_t$ holds. We also note that

$$\bar{C}_t \bar{P}_t \bar{C}_t^T + R_t = C_t P_{t/t} C_t^T + R_t. \quad (75)$$

From now on, we use the following notation for brevity:

$$\mathcal{C}_t := C_t P_t C_t^T + R_t. \quad (76)$$

Applying the optimal filter given in Proposition 2.2 to the augmented system (71)-(72), we have

$$\hat{\bar{x}}_{t+1/t+1} = \bar{A}^1_{t+1} \bigl[\hat{\bar{x}}_{t/t} + \bar{G}_t \bigl(y_t - \bar{C}_t \hat{\bar{x}}_{t/t}\bigr)\bigr] + \bar{H}_{t+1} y_{t+1} + \bar{T}_{t+1} \bar{B}_t u_t, \quad (77)$$

where

$$\bar{G}_t = \bigl(\bar{P}_t \bar{C}_t^T - \bar{H}_t R_t\bigr) \bigl(\bar{C}_t \bar{P}_t \bar{C}_t^T + R_t\bigr)^{-1} = \begin{bmatrix} P_{t/t} C_t^T - H_t R_t \\ P_{t-1,t/t} C_t^T \\ \vdots \\ P_{t-L,t/t} C_t^T \end{bmatrix} \mathcal{C}_t^{-1}. \quad (78)$$

Identifying the component matrices of (77)-(78), we have the following optimal fixed-lag smoother.

Theorem 5.1. If $C_t H_t$ and $R_t$ are commutative, i.e.,

$$C_t H_t R_t = R_t C_t H_t, \quad (79)$$

then we have the optimal fixed-lag smoother for (1)-(2) as follows:

(i) the fixed-lag smoother

$$\hat{x}_{t-j/t+1} = \hat{x}_{t-j/t} + S_t(j)\, \bigl(y_t - C_t \hat{x}_{t/t}\bigr), \quad j = 0, 1, \dots, L-1, \quad (80)$$

(ii) the optimal filter

$$\hat{x}_{t+1/t+1} = A^1_{t+1} \bigl\{ \hat{x}_{t/t} + G_t \bigl(y_t - C_t \hat{x}_{t/t}\bigr) \bigr\} + H_{t+1} y_{t+1} + T_{t+1} B_t u_t, \quad (81)$$

with $G_t$ defined by (18) in Proposition 2.2,

(iii) the gain matrices

$$S_t(j) = \bigl(P_{t-j,t/t}\, C_t^T - \delta_{0,j} H_t R_t\bigr)\, \mathcal{C}_t^{-1}, \quad j = 0, 1, \dots, L-1, \quad (82)$$

where $\delta_{i,j}$ stands for the Kronecker delta, i.e.,

$$\delta_{i,j} = \begin{cases} 1, & i = j, \\ 0, & i \neq j, \end{cases} \quad (83)$$

(iv) the covariance matrices of the mean-square error

$$P_{t+1/t+1} = A^1_{t+1} M_t^{(0,0)} \bigl(A^1_{t+1}\bigr)^T + T_{t+1} Q_t T_{t+1}^T + H_{t+1} R_{t+1} H_{t+1}^T, \quad (84)$$
$$P_{t+1,t-j/t+1} = A^1_{t+1} M_t^{(0,j)}, \quad j = 0, 1, \dots, L-1, \quad (85)$$
$$P_{t-j,t+1/t+1} = \bigl(P_{t+1,t-j/t+1}\bigr)^T, \quad j = 0, 1, \dots, L-1, \quad (86)$$
$$P_{t-i,t-j/t+1} = M_t^{(i,j)}, \quad i, j = 0, 1, \dots, L-1, \quad (87)$$

and

$$M_t^{(i,j)} = P_{t-i,t-j/t} - \bigl(P_{t-i,t/t} C_t^T - \delta_{0,i} H_t R_t\bigr)\, \mathcal{C}_t^{-1} \bigl(C_t P_{t,t-j/t} - \delta_{0,j} R_t H_t^T\bigr), \quad i, j = 0, 1, \dots, L. \quad (88)$$

Remark 5.2. Since the equalities $P_{t/t} = P_t$ and $M_t^{(0,0)} = M_t$ (with $M_t$ as in Proposition 2.2) hold, the optimal-filter part of Theorem 5.1 is identical to that in Proposition 2.2. When $E_t \equiv O$ holds for all $t$ (i.e., the unknown input term is zero), the fixed-lag smoother (80)-(88) is identical to the well-known fixed-lag smoother obtained from the standard Kalman filter (see e.g. Katayama (2000)). Thus, our algorithm is consistent with the known fixed-lag smoothing algorithm for systems without unknown inputs. This can be readily shown as in Remark 4.3.

Proof of Theorem 5.1. Rewriting (77)-(78) with the component matrices explicitly, we have

$$\begin{bmatrix} \hat{x}_{t+1/t+1} \\ \hat{x}_{t/t+1} \\ \hat{x}_{t-1/t+1} \\ \vdots \\ \hat{x}_{t-L+1/t+1} \end{bmatrix} = \begin{bmatrix} A^1_{t+1}\bigl[\hat{x}_{t/t} + \bigl(P_{t/t} C_t^T - H_t R_t\bigr)\mathcal{C}_t^{-1}\bigl(y_t - C_t \hat{x}_{t/t}\bigr)\bigr] \\ \hat{x}_{t/t} + \bigl(P_{t/t} C_t^T - H_t R_t\bigr)\mathcal{C}_t^{-1}\bigl(y_t - C_t \hat{x}_{t/t}\bigr) \\ \hat{x}_{t-1/t} + P_{t-1,t/t} C_t^T\, \mathcal{C}_t^{-1}\bigl(y_t - C_t \hat{x}_{t/t}\bigr) \\ \vdots \\ \hat{x}_{t-L+1/t} + P_{t-L+1,t/t} C_t^T\, \mathcal{C}_t^{-1}\bigl(y_t - C_t \hat{x}_{t/t}\bigr) \end{bmatrix} + \begin{bmatrix} H_{t+1} y_{t+1} + T_{t+1} B_t u_t \\ O \\ O \\ \vdots \\ O \end{bmatrix}. \quad (89)$$

The statements in (i)-(iii) easily follow from (89). Let $\bar{M}_t$ be defined by

$$\bar{M}_t = \bar{P}_t - \bar{G}_t \bigl(\bar{C}_t \bar{P}_t - R_t \bar{H}_t^T\bigr) = \bar{P}_t - \begin{bmatrix} P_{t/t} C_t^T - H_t R_t \\ P_{t-1,t/t} C_t^T \\ \vdots \\ P_{t-L,t/t} C_t^T \end{bmatrix} \mathcal{C}_t^{-1} \begin{bmatrix} P_{t/t} C_t^T - H_t R_t \\ P_{t-1,t/t} C_t^T \\ \vdots \\ P_{t-L,t/t} C_t^T \end{bmatrix}^T,$$

and introduce its component matrices $M_t^{(i,j)}$, $i, j = 0, 1, \dots, L$, so that $\bar{M}_t = \bigl[M_t^{(i,j)}\bigr]_{i,j=0}^{L}$. Concerning $\bar{P}_{t+1}$, we have

$$\bar{P}_{t+1} = \bar{A}^1_{t+1} \bar{M}_t \bigl(\bar{A}^1_{t+1}\bigr)^T + \bar{T}_{t+1} J_t Q_t J_t^T \bar{T}_{t+1}^T + \bar{H}_{t+1} R_{t+1} \bar{H}_{t+1}^T,$$

whose $(0,0)$ block is $A^1_{t+1} M_t^{(0,0)} (A^1_{t+1})^T + T_{t+1} Q_t T_{t+1}^T + H_{t+1} R_{t+1} H_{t+1}^T$, whose remaining first block row consists of $A^1_{t+1} M_t^{(0,j)}$, $j = 0, \dots, L-1$, whose remaining first block column consists of their transposes, and whose other blocks are $M_t^{(i,j)}$, $i, j = 0, \dots, L-1$. The final part (iv) can be obtained from these last equalities.
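A compact way to realize Theorem 5.1 in code is to apply the filter of Proposition 2.2 directly to the augmented system (71)-(72), exactly as the proof does. The sketch below is illustrative only: it assumes time-invariant matrices (as in the earlier filter sketch), and the helper names are ours.

```python
import numpy as np

# Illustrative fixed-lag smoother: one step of (77)-(78) on the augmented system.
def build_augmented(A, B, C, E, n, L):
    N = (L + 1) * n
    Abar = np.zeros((N, N))
    Abar[:n, :n] = A
    Abar[n:, :-n] = np.eye(L * n)                 # shift blocks: x_{t-j} <- x_{t-j+1}
    Bbar = np.vstack([B] + [np.zeros_like(B)] * L)
    Ebar = np.vstack([E] + [np.zeros_like(E)] * L)
    Jbar = np.vstack([np.eye(n)] + [np.zeros((n, n))] * L)
    Cbar = np.hstack([C] + [np.zeros_like(C)] * L)
    return Abar, Bbar, Ebar, Jbar, Cbar

def fixed_lag_step(xbar, Pbar, u, y, y_next, Abar, Bbar, Ebar, Jbar, Cbar, Q, R):
    CE = Cbar @ Ebar
    Hbar = Ebar @ np.linalg.inv(CE.T @ CE) @ CE.T          # (11) for the augmented system
    Tbar = np.eye(Pbar.shape[0]) - Hbar @ Cbar
    A1bar = Abar - Hbar @ Cbar @ Abar
    S = Cbar @ Pbar @ Cbar.T + R                           # (75)-(76)
    Gbar = (Pbar @ Cbar.T - Hbar @ R) @ np.linalg.inv(S)   # (78)
    Mbar = Pbar - Gbar @ (Cbar @ Pbar - R @ Hbar.T)
    xbar_new = A1bar @ (xbar + Gbar @ (y - Cbar @ xbar)) + Hbar @ y_next + Tbar @ Bbar @ u   # (77)
    Pbar_new = A1bar @ Mbar @ A1bar.T + Tbar @ Jbar @ Q @ Jbar.T @ Tbar.T + Hbar @ R @ Hbar.T
    return xbar_new, Pbar_new   # block j of xbar_new is x_hat_{t+1-j/t+1}, as in (89)
```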
6. Conclusion

In this chapter, we considered discrete-time linear stochastic systems with unknown inputs (or disturbances) and studied three types of smoothing problems for these systems. We derived smoothing algorithms which are robust to unknown disturbances from the optimal filter for stochastic systems with unknown inputs obtained in our previous papers. These smoothing algorithms have recursive forms similar to the standard optimal filters and smoothers. Moreover, since our algorithms reduce to the known smoothers derived from the Kalman filter when the unknown inputs disappear, these algorithms are consistent with the known smoothing algorithms for systems without unknown inputs.

Acknowledgment

This work was partially supported by the Japan Society for the Promotion of Science (JSPS) under Grant-in-Aid for Scientific Research (C)-22540158.

References

Ackner, R. & Kailath, T. (1989a). Complementary models and smoothing, IEEE Trans. Automatic Control, Vol. 34, pp. 963-969.
Ackner, R. & Kailath, T. (1989b). Discrete-time complementary models and smoothing, Int. J. Control, Vol. 49, pp. 1665-1682.
Anderson, B. D. O. & Moore, J. B. (1979). Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ.
Badawi, F. A.; Lindquist, A. & Pavon, M. (1979). A stochastic realization approach to the smoothing problem, IEEE Trans. Automatic Control, Vol. 24, pp. 878-888.
Bello, M. G.; Willsky, A. S. & Levy, B. C. (1989). Construction and applications of discrete-time smoothing error models, Int. J. Control, Vol. 50, pp. 203-223.
Bello, M. G.; Willsky, A. S.; Levy, B. C. & Castanon, D. A. (1986). Smoothing error dynamics and their use in the solution of smoothing and mapping problems, IEEE Trans. Inform. Theory, Vol. 32, pp. 483-495.
Bryson, Jr., A. E. & Ho, Y. C. (1969). Applied Optimal Control, Blaisdell Publishing Company, Waltham, Massachusetts.
Caliskan, F.; Mukai, H.; Katz, N. & Tanikawa, A. (2003). Game estimators for air combat games with unknown enemy inputs, Proc. American Control Conference, pp. 5381-5387, Denver, Colorado.
Chang, S. & Hsu, P. (1993). State estimation using general structured observers for linear systems with unknown input, Proc. 2nd European Control Conference: ECC'93, pp. 1794-1799, Groningen, Holland.
Chen, J. & Patton, R. J. (1996). Optimal filtering and robust fault diagnosis of stochastic systems with unknown disturbances, IEE Proc. of Control Theory Applications, Vol. 143, No. 1, pp. 31-36.
Chen, J. & Patton, R. J. (1999). Robust Model-based Fault Diagnosis for Dynamic Systems, Kluwer Academic Publishers, Norwell, Massachusetts.
Chen, J.; Patton, R. J. & Zhang, H.-Y. (1996). Design of unknown input observers and robust fault detection filters, Int. J. Control, Vol. 63, No. 1, pp. 85-105.
Darouach, M.; Zasadzinski, M.; Bassang, O. A. & Nowakowski, S. (1995). Kalman filtering with unknown inputs via optimal state estimation of singular systems, Int. J. Systems Science, Vol. 26, pp. 2015-2028.
Darouach, M.; Zasadzinski, M. & Keller, J. Y. (1992). State estimation for discrete systems with unknown inputs using state estimation of singular systems, Proc. American Control Conference, pp. 3014-3015.
Desai, U. B.; Weinert, H. L. & Yasypchuk, G. (1983). Discrete-time complementary models and smoothing algorithms: The correlated case, IEEE Trans. Automatic Control, Vol. 28, pp. 536-539.
Faurre, P.; Clerget, M. & Germain, F. (1979). Operateurs Rationnels Positifs, Dunod, Paris, France.
Frank, P. M. (1990). Fault diagnosis in dynamic systems using analytical and knowledge based redundancy: a survey and some new results, Automatica, Vol. 26, No. 3, pp. 459-474.
Hou, M. & Müller, P. C. (1993). Unknown input decoupled Kalman filter for time-varying systems, Proc. 2nd European Control Conference: ECC'93, pp. 2266-2270, Groningen, Holland.
Hou, M. & Müller, P. C. (1994). Disturbance decoupled observer design: a unified viewpoint, IEEE Trans. Automatic Control, Vol. 39, No. 6, pp. 1338-1341.
Hou, M. & Patton, R. J. (1998). Optimal filtering for systems with unknown inputs, IEEE Trans. Automatic Control, Vol. 43, No. 3, pp. 445-449.
Kailath, T. (1974). A view of three decades of linear filtering theory, IEEE Trans. Inform. Theory, Vol. 20, No. 2, pp. 146-181.
Kailath, T. (1975). Supplement to a survey of data smoothing, Automatica, Vol. 11, No. 11, pp. 109-111.
Kailath, T. (1976). Lectures on Linear Least-Squares Estimation, Springer.
Kailath, T.; Sayed, A. H. & Hassibi, B. (2000). Linear Estimation, Prentice Hall.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems, Trans. ASME, J. Basic Eng., Vol. 82D, No. 1, pp. 34-45.
Kalman, R. E. (1963). New methods in Wiener filtering theory, Proc. of First Symp. Eng. Appl. of Random Function Theory and Probability (J. L. Bogdanoff and F. Kozin, eds.), pp. 270-388, Wiley.
Katayama, T. (2000). Applied Kalman Filtering, New Edition (in Japanese), Asakura-Shoten, Tokyo, Japan.
Meditch, J. S. (1973). A survey of data smoothing for linear and nonlinear dynamic systems, Automatica, Vol. 9, No. 2, pp. 151-162.
Patton, R. J.; Frank, P. M. & Clark, R. N. (1996). Fault Diagnosis in Dynamic Systems: Theory and Application, Prentice Hall.
Sawada, Y. & Tanikawa, A. (2002). Optimal filtering and robust fault diagnosis of stochastic systems with unknown inputs and colored observation noises, Proc. 5th IASTED Conf. Decision and Control, pp. 149-154, Tsukuba, Japan.
Tanikawa, A. (2006). On a smoother for discrete-time linear stochastic systems with unknown disturbances, Int. J. Innovative Computing, Information and Control, Vol. 2, No. 5, pp. 907-916.
Tanikawa, A. (2008). On new smoothing algorithms for discrete-time linear stochastic systems with unknown disturbances, Int. J. Innovative Computing, Information and Control, Vol. 4, No. 1, pp. 15-24.
Tanikawa, A. & Mukai, H. (2010). Minimum variance state estimators with disturbance decoupling property for optimal filtering problems with unknown inputs and fault detection (in preparation).
Tanikawa, A. & Sawada, Y. (2003). Minimum variance state estimators with disturbance decoupling property for optimal filtering problems with unknown inputs, Proc. of the 35th ISCIE Int. Symp. on Stochastic Systems Theory and Its Appl., pp. 96-99, Ube, Japan.
Weinert, H. L. & Desai, U. B. (1981). On complementary models and fixed-interval smoothing, IEEE Trans. Automatic Control, Vol. 26, pp. 863-867.


On the Error Covariance Distribution for Kalman Filters with Packet Dropouts

Eduardo Rohr, Damián Marelli and Minyue Fu
University of Newcastle, Australia

1. Introduction

The fast development of network (particularly wireless) technology has encouraged its use in control and signal processing applications. From the control system's perspective, this new technology has imposed new challenges concerning how to deal with the effects of quantisation, delays and loss of packets, leading to the development of a new networked control theory (Schenato et al. (2007)). The study of state estimators, when measurements are subject to random delays and losses, finds applications in both control and signal processing. Most estimators are based on the well-known Kalman filter (Anderson & Moore (1979)). In order to cope with network-induced effects, the standard Kalman filter paradigm needs to undergo certain modifications. In the case of missing measurements, the update equation of the Kalman filter depends on whether a measurement arrives or not. When a measurement is available, the filter performs the standard update equation. On the other hand, if the measurement is missing, it must produce an open-loop estimate, which, as pointed out in Sinopoli et al. (2004), can be interpreted as the standard update equation when the measurement noise is infinite. If the measurement arrival event is modeled as a binary random variable, the estimator's error covariance (EC) becomes a random matrix.
Studying the statistical properties of the EC is important to assess the estimator's performance. Additionally, a clear understanding of how the system's parameters and network delivery rates affect the EC permits a better system design, where the trade-off between conflicting interests must be evaluated. Studies on how to compute the expected error covariance (EEC) date back at least to Faridani (1986), where upper and lower bounds for the EEC were obtained using a constant gain in the estimator. In Sinopoli et al. (2004), the same upper bound was derived as the limiting value of a recursive equation that computes a weighted average of the next possible error covariances. A similar result which allows partial observation losses was presented in Liu & Goldsmith (2004). In Dana et al. (2007) and Schenato (2008), it is shown that a system in which the sensor transmits state estimates instead of raw measurements provides a better error covariance. However, this scheme requires the use of more complex sensors. Most of the available research work is concerned with the expected value of the EC, neglecting higher-order statistics. The problem of finding the complete distribution function of the EC has been recently addressed in Shi et al. (2010).

This chapter investigates the behavior of the Kalman filter for discrete-time linear systems whose output is intermittently sampled. To this end, we model the measurement arrival event as an independent identically distributed (i.i.d.) binary random variable. We introduce a method to obtain lower and upper bounds for the cumulative distribution function (CDF) of the EC. These bounds can be made arbitrarily tight, at the expense of increased computational complexity. We then use these bounds to derive upper and lower bounds for the EEC.

2. Problem description

In this section we give an overview of the Kalman filtering problem in the presence of randomly missing measurements. Consider the discrete-time linear system

$$x_{t+1} = A x_t + w_t, \qquad y_t = C x_t + v_t, \quad (1)$$

where the state vector $x_t \in \mathbb{R}^n$ has initial condition $x_0 \sim \mathcal{N}(0, P_0)$, $y_t \in \mathbb{R}^p$ is the measurement, $w_t \sim \mathcal{N}(0, Q)$ is the process noise and $v_t \sim \mathcal{N}(0, R)$ is the measurement noise. The goal of the Kalman filter is to obtain an estimate $\hat{x}_t$ of the state $x_t$, as well as providing an expression for the covariance matrix $\tilde{P}_t$ of the error $\tilde{x}_t = x_t - \hat{x}_t$. We assume that the measurements $y_t$ are sent to the Kalman estimator through a network subject to random packet losses. The scheme proposed in Schenato (2008) can be used to deal with delayed measurements. Hence, without loss of generality, we assume that there is no delay in the transmission. Let $\gamma_t$ be a binary random variable describing the arrival of a measurement at time $t$. We define $\gamma_t = 1$ when $y_t$ is received at the estimator and $\gamma_t = 0$ otherwise. We also assume that $\gamma_t$ is independent of $\gamma_s$ whenever $t \neq s$. The probability of receiving a measurement is given by

$$\lambda = P(\gamma_t = 1). \quad (2)$$

Let $\hat{x}_{t|s}$ denote the estimate of $x_t$ considering the available measurements up to time $s$. Let $\tilde{x}_{t|s} = x_t - \hat{x}_{t|s}$ denote the estimation error and $\Sigma_{t|s} = E\{(\tilde{x}_{t|s} - E\{\tilde{x}_{t|s}\})(\tilde{x}_{t|s} - E\{\tilde{x}_{t|s}\})^\top\}$ denote its covariance matrix. If a measurement is received at time $t$ (i.e., if $\gamma_t = 1$), the estimate and its EC are recursively computed as follows:

$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t\bigl(y_t - C \hat{x}_{t|t-1}\bigr), \quad (3)$$
$$\Sigma_{t|t} = (I - K_t C)\, \Sigma_{t|t-1}, \quad (4)$$
$$\hat{x}_{t+1|t} = A \hat{x}_{t|t}, \quad (5)$$
$$\Sigma_{t+1|t} = A \Sigma_{t|t} A^\top + Q, \quad (6)$$

with the Kalman gain $K_t$ given by

$$K_t = \Sigma_{t|t-1} C^\top \bigl(C \Sigma_{t|t-1} C^\top + R\bigr)^{-1}. \quad (7)$$
On the other hand, if a measurement is not received at time $t$ (i.e., if $\gamma_t = 0$), then (3) and (4) are replaced by

$$\hat{x}_{t|t} = \hat{x}_{t|t-1}, \quad (8)$$
$$\Sigma_{t|t} = \Sigma_{t|t-1}. \quad (9)$$

We will study the statistical properties of the EC $\Sigma_{t|t-1}$. To simplify the notation, we define $P_t = \Sigma_{t|t-1}$. Then, the update equation of $P_t$ can be written as follows:

$$P_{t+1} = \begin{cases} \Phi_1(P_t), & \gamma_t = 1, \\ \Phi_0(P_t), & \gamma_t = 0, \end{cases} \quad (10)$$

with

$$\Phi_1(P_t) = A P_t A^\top + Q - A P_t C^\top \bigl(C P_t C^\top + R\bigr)^{-1} C P_t A^\top, \quad (11)$$
$$\Phi_0(P_t) = A P_t A^\top + Q. \quad (12)$$

We point out that when all the measurements are available and the Kalman filter reaches its steady state, the EC is given by the solution $\underline{P}$ of the following algebraic Riccati equation:

$$\underline{P} = A \underline{P} A^\top + Q - A \underline{P} C^\top \bigl(C \underline{P} C^\top + R\bigr)^{-1} C \underline{P} A^\top. \quad (13)$$

Throughout this chapter we use the following notation. For given $T \in \mathbb{N}$ and $0 \le m \le 2^T - 1$, the symbol $S_m^T$ denotes the binary sequence of length $T$ formed by the binary representation of $m$. We also use $S_m^T(i)$, $i = 1, \dots, T$, to denote the $i$-th entry of the sequence, i.e.,

$$S_m^T = \{S_m^T(1), S_m^T(2), \dots, S_m^T(T)\} \quad (14)$$

and

$$m = \sum_{k=1}^{T} 2^{k-1} S_m^T(k). \quad (15)$$

(Notice that $S_0^T$ denotes a sequence of length $T$ formed exclusively by zeroes.) We use $|S_m^T|$ to denote the number of ones in the sequence $S_m^T$, i.e.,

$$|S_m^T| = \sum_{k=1}^{T} S_m^T(k). \quad (16)$$

For a given sequence $S_m^T$ and a matrix $P \in \mathbb{R}^{n \times n}$, we define the map

$$\phi(P, S_m^T) = \Phi_{S_m^T(T)} \circ \Phi_{S_m^T(T-1)} \circ \cdots \circ \Phi_{S_m^T(1)}(P), \quad (17)$$

where $\circ$ denotes the composition of functions (i.e., $f \circ g(x) = f(g(x))$). Notice that if $m$ is chosen so that

$$S_m^T = \{\gamma_{t-1}, \gamma_{t-2}, \dots, \gamma_{t-T}\}, \quad (18)$$

then the map $\phi(\cdot, S_m^T)$ updates $P_{t-T}$ according to the measurement arrivals in the last $T$ sampling times, i.e.,

$$P_t = \phi(P_{t-T}, S_m^T) = \Phi_{\gamma_{t-1}} \circ \Phi_{\gamma_{t-2}} \circ \cdots \circ \Phi_{\gamma_{t-T}}(P_{t-T}). \quad (19)$$

3. Bounds for the cumulative distribution function

In this section we present a method to compute lower and upper bounds for the limit CDF $F(x)$ of the trace of the EC, which is defined by

$$F(x) = \lim_{T \to \infty} F_T(x), \quad (20)$$
$$F_T(x) = P\bigl(\mathrm{Tr}\{P_T\} < x\bigr) \quad (21)$$
$$\phantom{F_T(x)} = \sum_{m=0}^{2^T - 1} P\bigl(S_m^T\bigr)\, H\bigl(x - \mathrm{Tr}\{\phi(P_0, S_m^T)\}\bigr), \quad (22)$$

where $H(\cdot)$ is the Heaviside step function, and the probability of observing the sequence $S_m^T$ is given by

$$P\bigl(S_m^T\bigr) = \lambda^{|S_m^T|} (1 - \lambda)^{T - |S_m^T|}. \quad (23)$$

The basic idea is to start with either the lowest or the highest possible value of the EC, and then evaluate the CDF resulting from each starting value after a given time horizon $T$. Doing so, for each $T$, we obtain a lower bound $\underline{F}_T(x)$ and an upper bound $\overline{F}_T(x)$ for $F(x)$, i.e.,

$$\underline{F}_T(x) \le F(x) \le \overline{F}_T(x) \quad \text{for all } T. \quad (24)$$

As we show in Section 3.3, both bounds monotonically approach $F(x)$ as $T$ increases. To derive these results we make use of the following lemma stating properties of the maps $\Phi_0(\cdot)$ and $\Phi_1(\cdot)$ defined in (11)-(12).

Lemma 3.1. Let $X, Y \in \mathbb{R}^{n \times n}$ be two positive semi-definite matrices. Then,

$$\Phi_1(X) \le \Phi_0(X). \quad (25)$$

If $Y \ge X$,

$$\Phi_0(Y) \ge \Phi_0(X), \quad (26)$$
$$\Phi_1(Y) \ge \Phi_1(X). \quad (27)$$

Proof: The proof of (25) is direct from (11)-(12). Equation (26) follows straightforwardly since $\Phi_0(X)$ is affine in $X$. Using the matrix inversion lemma, we have that

$$\Phi_1(X) = A \bigl(X^{-1} + C^\top R^{-1} C\bigr)^{-1} A^\top + Q, \quad (28)$$

which shows that $\Phi_1(X)$ is monotonically increasing with respect to $X$.
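The random recursion (10)-(12) and the limit CDF in (20)-(22) are easy to explore by simulation before any bound is computed. The following sketch is illustrative only; the system matrices, $\lambda$, horizon and number of runs are arbitrary assumed values.

```python
import numpy as np

# Illustrative Monte Carlo estimate of F_T(x) = P(Tr{P_T} < x), using (10)-(12).
rng = np.random.default_rng(1)
A = np.array([[1.2, 0.3], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
Q, R = 0.1 * np.eye(2), np.array([[0.5]])
P0, lam, T, runs = np.eye(2), 0.7, 30, 5000

def phi1(P):  # (11): update when a measurement arrives
    S = C @ P @ C.T + R
    return A @ P @ A.T + Q - A @ P @ C.T @ np.linalg.inv(S) @ C @ P @ A.T

def phi0(P):  # (12): open-loop update when the measurement is lost
    return A @ P @ A.T + Q

traces = []
for _ in range(runs):
    P = P0.copy()
    for _ in range(T):
        P = phi1(P) if rng.random() < lam else phi0(P)   # (10)
    traces.append(np.trace(P))

for x in (1.0, 2.0, 5.0, 20.0):
    print(f"F_{T}({x}) ~= {np.mean(np.array(traces) < x):.3f}")
```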
suppose that m is such that Sm = {γ T −1 , γ T −2 , , γ0 } describes the measurement T arrival sequence Then, assuming that1 P0 ≥ P , from (26)-(27), it follows that PT ≥ φ( P, Sm ) Hence, from (22), an upper bound of F ( x ) is given by T F (x) = T −1 ∑ m =0 T T P Sm H x − Tr{φ( P, Sm )} (29) 3.2 Lower bounds for the CDF A lower bound for the CDF can be obtained using an argument similar to the one we used T above to derive an upper bound To this we need to replace in (22) Tr{φ( P0 , Sm )} by an T To this we use the following lemma upper bound of Tr{ PT } given the arrival sequence Sm T Lemma 3.2 Let m be such that Sm = {γ T −1 , γ T −2 , · · · , γ0 } and ≤ t1 , t2 , · · · , t I ≤ T − denote the indexes where γti = 1, i = 1, · · · , I Define ⎡ t −1 ⎤ CA j QA T −t1 + j ⎥ ⎢ ∑ ⎢ j =0 ⎥ ⎡ ⎤ ⎢ t2 − ⎥ CAt1 ⎢ j QA T − t2 + j ⎥ t2 ⎥ ⎢ ∑ CA ⎥ ⎢ CA ⎢ ⎥ ⎢ ⎥ O = ⎢ ⎥ , Σ Q = ⎢ j =0 ⎥ , ⎦ ⎢ ⎥ ⎣ ⎢ ⎥ ⎢ ⎥ CAt I ⎢ t −1 ⎥ ⎣ I ⎦ ∑ CA j QA T −t I + j (30) j =0 and the matrix ΣV ∈ R pI × pI , whose (i, j)-th submatrix [ ΣV ] i,j ∈ R p× p is given by [ ΣV ] i,j = min{ti ,t j }−1 ∑ k =0 CAti −1−k QA t j −1−k C + Rδ(i, j) where 1, i=j 0, δ(i, j) = i = j (31) (32) If O has full column rank, then T PT ≤ P (Sm ), (33) T T where the Sm -dependant matrix P (Sm ) is given by T − P ( Sm ) = A T O ΣV O −1 −1 AT+ T −1 ∑ j =0 −1 −1 A j QA j − A T (ΣV O)† ΣV Σ Q + (34) −1 − − − − − Σ Q ΣV (ΣV O) † A T − Σ Q ΣV − ΣV O(O ΣV O)−1 O ΣV Σ Q , −1 −1 with (ΣV O)† denoting the Moore-Penrose pseudo-inverse of ΣV O Ben-Israel & Greville (2003) If this assumption does not hold, one can substitute P by P0 without loss of generality 76 Discrete Time Systems Proof: Let YT be the vector formed by the available measurements YT = yt1 yt2 · · · yt I = Ox0 + VT , where (35) (36) ⎡ ⎤ t −1 ∑ j1 CAt1 −1− j w j + vt1 = ⎢ t2 − ⎥ ⎢ ∑ j=0 CAt2 −1− j w j + vt2 ⎥ ⎢ ⎥ VT = ⎢ ⎥ ⎢ ⎥ ⎣ ⎦ t I −1 t I −1− j w + v ∑ j=0 CA tI j (37) From the model (1), it follows that xT YT Σ x Σ xY Σ xY ΣY , ∼N where T −1 ∑ Σ x = A T P0 A T + A j QA (38) j (39) j =0 Σ xY = A T P0 O + Σ Q (40) ΣY = OP0O + ΣV (41) ˆ Since the Kalman estimate x T at time T is given by, ˆ x T = E { x T |YT } , (42) it follows from (Anderson & Moore, 1979, pp 39) that the estimation error covariance is given by − (43) PT = Σ x − Σ xY ΣY Σ xY Substituting (39)-(41) in (43), we have PT = A T P0 A T + T −1 ∑ j =0 A j QA j + − A T P0 O + Σ Q (44) OP0 O + ΣV = A T P0 − P0 O OP0O + ΣV − A T P0 O OP0 O + ΣV − Σ Q OP0 O + ΣV −1 −1 −1 −1 A T P0 O + Σ Q O P0 A T + T −1 ∑ j =0 A j QA j + Σ Q − Σ Q OP0 O + ΣV −1 (45) O P0 A T + ΣQ Now, from (19), T PT = φ( P0 , Sm ) (46) 77 On the Error Covariance Distribution for Kalman Filters with Packet Dropouts Notice that for any P0 we can always find a k such that kIn ≥ P0 , where In is the identity T matrix of order n From the monotonicity of φ(·, Sm ) (Lemma 3.1), it follows that T PT ≤ lim φ(kIn , Sm ) (47) PT ≤ PT,1 + PT,2 + PT,3 + PT,3 + PT,4 , (48) k→∞ We then have that with −1 PT,1 = lim A T kIn − k2 O kOO + ΣV PT,2 = T −1 ∑ A j QA AT O k→∞ (49) j j =0 −1 PT,3 = − lim kA T O kOO + ΣV k→∞ PT,4 = − lim Σ Q kOO + ΣV −1 k→∞ ΣQ ΣQ Using the matrix inversion lemma, we have that PT,1 = A T lim k→∞ − k − In + O Σ V O −1 − = A T O ΣV O −1 AT (50) A T (51) It is straightforward to see that PT,3 can be written as PT,3 = − lim A T O k→∞ OO + ΣV k−1 −1 = − A T lim O ΣV k→∞ −1 −1 ΣQ −1 ΣV OO ΣV + k−1 In (52) −1 From (Ben-Israel & Greville, 2003, pp 115), it follows that limk→ ∞ X any matrix X By making X = −1 ΣV O, −1 ΣV Σ Q (53) XX + k−1 In = X † , for we have that −1 PT,3 
Proof: Let $Y_T$ be the vector formed by the available measurements,

$$Y_T = \begin{bmatrix} y_{t_1}^\top & y_{t_2}^\top & \cdots & y_{t_I}^\top \end{bmatrix}^\top \quad (35)$$
$$\phantom{Y_T} = \mathcal{O} x_0 + V_T, \quad (36)$$

where

$$V_T = \begin{bmatrix} \sum_{j=0}^{t_1-1} C A^{t_1-1-j} w_j + v_{t_1} \\ \sum_{j=0}^{t_2-1} C A^{t_2-1-j} w_j + v_{t_2} \\ \vdots \\ \sum_{j=0}^{t_I-1} C A^{t_I-1-j} w_j + v_{t_I} \end{bmatrix}. \quad (37)$$

From the model (1), it follows that

$$\begin{bmatrix} x_T \\ Y_T \end{bmatrix} \sim \mathcal{N}\left(0, \begin{bmatrix} \Sigma_x & \Sigma_{xY} \\ \Sigma_{xY}^\top & \Sigma_Y \end{bmatrix}\right), \quad (38)$$

where

$$\Sigma_x = A^T P_0 \bigl(A^T\bigr)^\top + \sum_{j=0}^{T-1} A^j Q \bigl(A^j\bigr)^\top, \quad (39)$$
$$\Sigma_{xY} = A^T P_0 \mathcal{O}^\top + \Sigma_Q^\top, \quad (40)$$
$$\Sigma_Y = \mathcal{O} P_0 \mathcal{O}^\top + \Sigma_V. \quad (41)$$

Since the Kalman estimate $\hat{x}_T$ at time $T$ is given by

$$\hat{x}_T = E\{x_T \mid Y_T\}, \quad (42)$$

it follows from (Anderson & Moore, 1979, p. 39) that the estimation error covariance is given by

$$P_T = \Sigma_x - \Sigma_{xY} \Sigma_Y^{-1} \Sigma_{xY}^\top. \quad (43)$$

Substituting (39)-(41) in (43), we have

$$P_T = A^T P_0 \bigl(A^T\bigr)^\top + \sum_{j=0}^{T-1} A^j Q \bigl(A^j\bigr)^\top - \bigl(A^T P_0 \mathcal{O}^\top + \Sigma_Q^\top\bigr)\bigl(\mathcal{O} P_0 \mathcal{O}^\top + \Sigma_V\bigr)^{-1}\bigl(\mathcal{O} P_0 (A^T)^\top + \Sigma_Q\bigr) \quad (44)$$
$$\phantom{P_T} = A^T \bigl[P_0 - P_0 \mathcal{O}^\top \bigl(\mathcal{O} P_0 \mathcal{O}^\top + \Sigma_V\bigr)^{-1} \mathcal{O} P_0\bigr] \bigl(A^T\bigr)^\top + \sum_{j=0}^{T-1} A^j Q \bigl(A^j\bigr)^\top - A^T P_0 \mathcal{O}^\top \bigl(\mathcal{O} P_0 \mathcal{O}^\top + \Sigma_V\bigr)^{-1} \Sigma_Q - \Sigma_Q^\top \bigl(\mathcal{O} P_0 \mathcal{O}^\top + \Sigma_V\bigr)^{-1} \mathcal{O} P_0 \bigl(A^T\bigr)^\top - \Sigma_Q^\top \bigl(\mathcal{O} P_0 \mathcal{O}^\top + \Sigma_V\bigr)^{-1} \Sigma_Q. \quad (45)$$

Now, from (19),

$$P_T = \phi(P_0, S_m^T). \quad (46)$$

Notice that for any $P_0$ we can always find a $k$ such that $k I_n \ge P_0$, where $I_n$ is the identity matrix of order $n$. From the monotonicity of $\phi(\cdot, S_m^T)$ (Lemma 3.1), it follows that

$$P_T \le \lim_{k \to \infty} \phi(k I_n, S_m^T). \quad (47)$$

We then have that

$$P_T \le P_{T,1} + P_{T,2} + P_{T,3} + P_{T,3}^\top + P_{T,4}, \quad (48)$$

with

$$P_{T,1} = \lim_{k \to \infty} A^T \bigl[k I_n - k^2 \mathcal{O}^\top \bigl(k \mathcal{O}\mathcal{O}^\top + \Sigma_V\bigr)^{-1} \mathcal{O}\bigr] \bigl(A^T\bigr)^\top, \quad P_{T,2} = \sum_{j=0}^{T-1} A^j Q \bigl(A^j\bigr)^\top,$$
$$P_{T,3} = -\lim_{k \to \infty} k\, A^T \mathcal{O}^\top \bigl(k \mathcal{O}\mathcal{O}^\top + \Sigma_V\bigr)^{-1} \Sigma_Q, \quad P_{T,4} = -\lim_{k \to \infty} \Sigma_Q^\top \bigl(k \mathcal{O}\mathcal{O}^\top + \Sigma_V\bigr)^{-1} \Sigma_Q. \quad (49)$$

Using the matrix inversion lemma, we have that

$$P_{T,1} = A^T \lim_{k \to \infty} \bigl(k^{-1} I_n + \mathcal{O}^\top \Sigma_V^{-1} \mathcal{O}\bigr)^{-1} \bigl(A^T\bigr)^\top \quad (50)$$
$$\phantom{P_{T,1}} = A^T \bigl(\mathcal{O}^\top \Sigma_V^{-1} \mathcal{O}\bigr)^{-1} \bigl(A^T\bigr)^\top. \quad (51)$$

It is straightforward to see that $P_{T,3}$ can be written as

$$P_{T,3} = -A^T \lim_{k \to \infty} \mathcal{O}^\top \bigl(\mathcal{O}\mathcal{O}^\top + k^{-1} \Sigma_V\bigr)^{-1} \Sigma_Q. \quad (52)$$

From (Ben-Israel & Greville, 2003, p. 115), it follows that $\lim_{k \to \infty} X^\top (X X^\top + k^{-1} I)^{-1} = X^{\dagger}$ for any matrix $X$. Taking

$$X = \Sigma_V^{-1} \mathcal{O}, \quad (53)$$

we have that

$$P_{T,3} = -A^T \bigl(\Sigma_V^{-1} \mathcal{O}\bigr)^{\dagger} \Sigma_V^{-1} \Sigma_Q. \quad (54)$$

Using the matrix inversion lemma, we have

$$P_{T,4} = -\lim_{k \to \infty} \Sigma_Q^\top \Bigl[\Sigma_V^{-1} - \Sigma_V^{-1} \mathcal{O} \bigl(\mathcal{O}^\top \Sigma_V^{-1} \mathcal{O} + k^{-1} I_n\bigr)^{-1} \mathcal{O}^\top \Sigma_V^{-1}\Bigr] \Sigma_Q = -\Sigma_Q^\top \Bigl[\Sigma_V^{-1} - \Sigma_V^{-1} \mathcal{O} \bigl(\mathcal{O}^\top \Sigma_V^{-1} \mathcal{O}\bigr)^{-1} \mathcal{O}^\top \Sigma_V^{-1}\Bigr] \Sigma_Q, \quad (55)$$

and the result follows by substituting (51), (54) and (55) in (48).

In order to keep the notation consistent with that of Section 3.1, with some abuse of notation we introduce the following definition:

$$\phi(\infty, S_m^T) = \begin{cases} \overline{P}\bigl(S_m^T\bigr), & \text{if } \mathcal{O} \text{ has full column rank}, \\ \infty I_n, & \text{otherwise}, \end{cases} \quad (56)$$

where $\infty I_n$ is an $n \times n$ diagonal matrix with $\infty$ in every entry of the main diagonal. Then, we obtain a lower bound for $F(x)$ as follows:

$$\underline{F}_T(x) = \sum_{m=0}^{2^T - 1} P\bigl(S_m^T\bigr)\, H\bigl(x - \mathrm{Tr}\{\phi(\infty, S_m^T)\}\bigr). \quad (57)$$

3.3 Monotonic approximation of the bounds to F(x)

In this section we show that the bounds $\underline{F}_T(x)$ and $\overline{F}_T(x)$ in (24) approach $F(x)$ monotonically as $T$ tends to infinity. This is stated in the following theorem.

Theorem 3.1. We have that

$$\underline{F}_{T+1}(x) \ge \underline{F}_T(x), \quad (58)$$
$$\overline{F}_{T+1}(x) \le \overline{F}_T(x). \quad (59)$$

Moreover, the bounds $\underline{F}_T(x)$ and $\overline{F}_T(x)$ approach monotonically the true CDF $F(x)$ as $T$ tends to $\infty$.

Proof: Let $S_m^T$ be a sequence of length $T$. From (17) and Lemma 3.1, and for any $P_0 > 0$, we have

$$\phi(P_0, \{S_m^T, 0\}) = \phi(\Phi_0(P_0), S_m^T) \le \phi(\infty, S_m^T). \quad (60)$$

From the monotonicity of $\phi(\cdot, S_m^T)$ and $\Phi_0(\cdot)$, stated in Lemma 3.1, we have

$$\phi(P_0, \{S_m^T, 0\}) = \phi(\Phi_0(P_0), S_m^T) \ge \phi(P_0, S_m^T), \quad (61)$$

which implies that

$$\phi(\infty, \{S_m^T, 0\}) \ge \phi(\infty, S_m^T). \quad (62)$$

From (60) and (62), we have

$$\phi(\infty, \{S_m^T, 0\}) = \phi(\infty, S_m^T). \quad (63)$$

Also, if the matrix $\mathcal{O}$ (defined in Lemma 3.2) resulting from the sequence $S_m^T$ has full column rank, then so has the same matrix resulting from the sequence $\{S_m^T, 1\}$. This implies that

$$\phi(\infty, \{S_m^T, 1\}) \le \phi(\infty, S_m^T). \quad (64)$$

Now, from Lemma 3.1, $\Phi_0(\underline{P}) \ge \underline{P}$, and therefore,

$$\phi(\underline{P}, \{S_m^T, 0\}) = \phi(\Phi_0(\underline{P}), S_m^T) \ge \phi(\underline{P}, S_m^T). \quad (65)$$

Ngày đăng: 20/06/2014, 01:20

Tài liệu cùng người dùng

  • Đang cập nhật ...

Tài liệu liên quan