Mô Hình Hóa, Nhận Dạng và Mô Phỏng (Modeling, Identification and Simulation) - Random Processes 2

Reference notes for the lecture course Mô Hình Hóa, Nhận Dạng và Mô Phỏng (Modeling, Identification and Simulation), Department of Automatic Control, Faculty of Electrical and Electronics Engineering.

Lecture 2: Stochastic Processes

2.1 Introduction

Spectral analysis is the study of models of a class called stationary stochastic processes. A stochastic process $\{X(t) : t \in T\}$ is a family of rv's indexed by a variable $t$ taking values in an index set $T$, which may be infinite. The index may be continuous, written $X(t)$, or discrete, written $X_t$. The rv's may be real, vector valued, or complex with suitable added indices.

We will use the Riemann-Stieltjes notation in what follows because mixed continuous-discrete distributions are common for time series, and hence the R-S notation is standard in stochastic theory. Let $g(x)$ and $H(x)$ be real valued functions on $[L, U]$ where $L < U$, and $L, U$ may be $-\infty, \infty$ with suitable limiting processes. Let $P_N$ be a partition of $[L, U]$ into $N$ intervals via the points

$$L = x_0 < x_1 < \cdots < x_N = U.$$

Define the mesh fineness

$$|P_N| = \max\{x_1 - x_0,\ x_2 - x_1,\ \ldots,\ x_N - x_{N-1}\}.$$

Then

$$\int_L^U g(x)\,dH(x) = \lim_{|P_N| \to 0} \sum_{j=1}^{N} g(x_j')\,[H(x_j) - H(x_{j-1})],$$

where $x_j' \in [x_{j-1}, x_j]$. There are three cases:

1. If $H(x) = x$, then we have the Riemann integral $\int_L^U g(x)\,dx$.

2. If $H(x)$ is continuously differentiable on $[L, U]$ with $h(x) = \partial_x H(x)$, then
$$\int_L^U g(x)\,dH(x) = \int_L^U g(x)\,h(x)\,dx.$$

3. If $H(x)$ undergoes step changes of size $b_i$ at points $a_i$ on $[L, U]$, so that $H(x) = c_i$ for $x$ just below $a_i$ and $H(x) = c_i + b_i$ from $a_i$ up to the next step, then
$$\int_L^U g(x)\,dH(x) = \sum_i b_i\,g(a_i).$$

Example. For a continuous process we have $f(x) = \partial_x F(x)$, and hence

$$E[X] = \int_{-\infty}^{\infty} x\,dF(x) = \int_{-\infty}^{\infty} x f(x)\,dx.$$

Example. For a discrete process where the cdf $F(x)$ undergoes discrete jumps of size $1/N$ at a set of values $\{x_i\}$:

$$E[X] = \int_{-\infty}^{\infty} x\,dF(x) = \frac{1}{N}\sum_{i=1}^{N} x_i.$$
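The following Python sketch, added here for illustration (the function rs_integral and all parameter choices are mine, not from the notes), approximates a Riemann-Stieltjes integral on a fine grid. It computes $E[X] = \int x\,dF(x)$ for a mixed distribution, a standard normal with weight 0.7 plus a point mass of weight 0.3 at $x = 2$, so the exact answer is $0.7 \cdot 0 + 0.3 \cdot 2 = 0.6$.

```python
import numpy as np
from scipy.stats import norm

def rs_integral(g, H, L, U, n=200_000):
    """Approximate int_L^U g(x) dH(x) by sum_j g(x_j) [H(x_j) - H(x_{j-1})].
    Step jumps in H are picked up automatically by the increments."""
    x = np.linspace(L, U, n + 1)
    dH = np.diff(H(x))                 # increments of H, jumps included
    return float(np.sum(g(x[1:]) * dH))

# Mixed cdf: 70% standard normal (continuous) + 30% point mass at x = 2
F = lambda x: 0.7 * norm.cdf(x) + 0.3 * (x >= 2.0)

print(rs_integral(lambda x: x, F, -10.0, 10.0))   # ~0.6
```

The same routine reproduces the two limiting cases above: with a smooth $H$ it reduces to an ordinary Riemann integral of $g h$, and with a pure step function it reduces to the weighted sum $\sum_i b_i\,g(a_i)$.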
Example. For a fixed value of $t$, $X_t$ is an rv and hence has a cdf

$$F_t(a) = P[X_t \le a],$$

with

$$E[X_t] = \int_{-\infty}^{\infty} x\,dF_t(x) = \mu_t, \qquad \mathrm{var}[X_t] = \int_{-\infty}^{\infty} (x - \mu_t)^2\,dF_t(x) = \sigma_t^2.$$

Note that the statistics become time dependent. We may also need higher order cdf's, such as the bivariate cdf for two times,

$$F_{t_1,t_2}(a_1, a_2) = P[X_{t_1} \le a_1,\ X_{t_2} \le a_2],$$

and the $N$-dimensional generalization,

$$F_{t_1,\ldots,t_N}(a_1,\ldots,a_N) = P[X_{t_1} \le a_1,\ \ldots,\ X_{t_N} \le a_N].$$

The set of cdf's from $F_t$ to $F_{t_1,\ldots,t_N}$ is a complete description of the stochastic process if we know them for all $t$ and $N$. However, the result is a mess and the distributions are unknowable in practice.

We can start to narrow this down by considering stationary processes: those whose statistical properties are independent of time, i.e. physical systems in steady state. If $\{X_t\}$ is the result of a stationary process, then each element must have the same cdf, so $F_t(x) \to F(x)$. Any pair of elements in $\{X_t\}$ must have the same bivariate distribution, and so on. In summary, the joint cdf of $\{X_t\}$ for a set of $N$ time points $\{t_i\}$ must be unaltered by time shifts.

There are several cases of stationarity:

1. Complete stationarity. If the joint cdf of $\{X_{t_1}, \ldots, X_{t_N}\}$ is identical to that of $\{X_{t_1+k}, \ldots, X_{t_N+k}\}$ for any shift $k$, then the process is completely stationary. All of the statistical structure is unchanged under shifts in the time origin. This is a severe requirement and is rarely establishable in practice.

2. Stationarity of order 1. $E[X_t] = \mu$ for all $t$. No other stationarity is implied.

3. Stationarity of order 2. $E[X_t] = \mu$ and $E[X_t^2] = \mu_2$, so that the mean and the variance are time independent, and $E[X_t X_s]$ is a function of $|t - s|$ only, hence $\mathrm{cov}[X_t, X_s]$ is a function of $|t - s|$ only. This class is called weakly stationary or second order stationary, and it is the most important type of stochastic process for our purposes.

For a second order stationary process we define the autocovariance sequence (acvs) by

$$S_\tau = \mathrm{cov}[X_t, X_{t+\tau}] = \mathrm{cov}[X_0, X_\tau].$$

This is a measure of the covariance between members of the process separated by $\tau$ time units; $\tau$ is called the lag. We would expect $S_\tau$ to be largest at $\tau = 0$ and to be symmetric about the origin.

2.2 Properties of the acvs

1. $S_0 = \sigma^2$.

2. $S_{-\tau} = S_\tau$ (even function).

3. $|S_\tau| \le S_0$ for $\tau > 0$.

4. $S_\tau$ is positive semidefinite:
$$\sum_{k=1}^{N} \sum_{j=1}^{N} S_{t_j - t_k}\,a_j a_k \ge 0 \quad \text{for any } \{a_1, \ldots, a_N\} \in \mathbb{R}^N,$$
or in matrix form $a^T \Sigma a \ge 0$, where $\Sigma$ is the covariance matrix.

The autocorrelation sequence is the acvs normalized by $S_0$,

$$\rho_\tau = \frac{S_\tau}{S_0},$$

and has the properties:

1. $\rho_0 = 1$.

2. $\rho_{-\tau} = \rho_\tau$.

3. $|\rho_\tau| \le 1$ for $\tau > 0$.

4. $\rho_\tau$ is positive semidefinite.

Note that a completely stationary process is also second order stationary, but second order stationarity does not imply complete stationarity. However, if the process is Gaussian (i.e., the joint cdfs of the rv's are multivariate normal), then second order stationarity does imply complete stationarity, because a Gaussian distribution is completely specified by its first and second moments.

All of this machinery extends to complex processes. Let $Z_t = X_{t,1} + i X_{t,2}$. This is second order stationary if all of the joint first and second order moments of $X_{t,1}$ and $X_{t,2}$ exist, are finite, and are invariant to shifts in time. This implies that $X_{t,1}$ and $X_{t,2}$ are themselves second order stationary. We have

$$E[Z_t] = \mu_1 + i\mu_2 = \mu,$$

$$\mathrm{cov}[Z_{t_1}, Z_{t_2}] = E[(Z_{t_1} - \mu)^* (Z_{t_2} - \mu)] = S_\tau, \qquad \tau = t_2 - t_1,$$

and hence $S_{-\tau} = S_\tau^*$ for a complex process.

2.3 Examples of stationary processes

Let $\{X_t\}$ be a sequence of uncorrelated rv's such that

$$E[X_t] = \mu, \qquad \mathrm{var}[X_t] = \sigma^2, \qquad \mathrm{cov}[X_t, X_{t+\tau}] = 0 \text{ for } \tau \ne 0$$

(the last follows from uncorrelatedness). Then $\{X_t\}$ is stationary with acvs

$$S_\tau = \begin{cases} \sigma^2, & \tau = 0; \\ 0, & \tau \ne 0. \end{cases}$$

Note that a sequence of uncorrelated rv's is not necessarily independent, but independence does imply uncorrelatedness. Independence means that the joint cdf may be factored into the product of individual cdf's, and we have not applied this condition. The exception to these statements is a Gaussian process, for which uncorrelatedness does imply independence. A random or white noise process is a process without memory: one datum does not depend on any other.
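As a quick numerical check (my addition, not part of the notes; sample_acvs is a name I chose), the sketch below estimates the acvs of simulated white noise with the usual biased estimator and shows $\hat{S}_0 \approx \sigma^2$ while $\hat{S}_\tau \approx 0$ for $\tau \ne 0$.

```python
import numpy as np

def sample_acvs(x, max_lag):
    """Biased sample acvs: S_hat(tau) =
    (1/N) sum_t (x_t - xbar)(x_{t+tau} - xbar), for tau = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    d = x - x.mean()
    return np.array([np.dot(d[: N - k], d[k:]) / N for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
white = rng.normal(loc=0.0, scale=2.0, size=10_000)   # sigma = 2
print(np.round(sample_acvs(white, max_lag=5), 3))
# lag 0 is near sigma^2 = 4; all other lags are near 0
```

Replacing the ensemble average by this time average over a single realization already presumes the ergodic property discussed in section 2.6.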
2.4 First order Autoregressive process

Example. Consider a particle of unit mass moving in a straight line and subject to a random force. Let $X_t$ denote the particle velocity at time $t$ and $\epsilon_t$ the random force per unit mass acting on it. Then, if the resistive force is proportional to velocity, Newton's laws give

$$\dot{X}_t = \epsilon_t - \alpha X_t.$$

Discretizing with $\dot{X}_t \approx X_t - X_{t-1}$, we get $X_t - X_{t-1} = \epsilon_t - \alpha X_t$, and hence

$$X_t = \frac{1}{1+\alpha}\left(\epsilon_t + X_{t-1}\right).$$

This is a first order autoregressive (AR) process, in which the value of the rv at time $t$ depends on that at time $t-1$ but not at earlier times:

$$X_t - a X_{t-1} = \epsilon_t,$$

where $a$ is a constant and $\{\epsilon_t\}$ is random. This is analogous to linear regression, with $X_t$ depending linearly on $X_{t-1}$ and $\epsilon_t$ being the residual, hence the term "autoregressive".

The difference equation can be solved assuming $X_0 = 0$, yielding

$$X_t = \epsilon_t + a\,\epsilon_{t-1} + a^2\,\epsilon_{t-2} + \cdots + a^{t-1}\,\epsilon_1.$$

If $E[\epsilon_t] = \mu$, then

$$E[X_t] = \mu\,(1 + a + \cdots + a^{t-1}) = \begin{cases} \mu\,\dfrac{1-a^t}{1-a}, & a \ne 1; \\ \mu\,t, & a = 1. \end{cases}$$

If $\mu = 0$ this vanishes and $X_t$ is first order stationary; otherwise it is not. However, if $|a| < 1$ then

$$E[X_t] \to \frac{\mu}{1-a} \quad (t \to \infty),$$

and hence $X_t$ is asymptotically first order stationary. If $\mathrm{var}[\epsilon_t] = \sigma^2$ and $\mathrm{cov}[\epsilon_t, \epsilon_s] = 0$ for $t \ne s$, we have

$$\mathrm{var}[X_t] = \begin{cases} \sigma^2\,\dfrac{1-a^{2t}}{1-a^2}, & |a| \ne 1; \\ \sigma^2\,t, & |a| = 1, \end{cases} \qquad \mathrm{cov}[X_t, X_{t+r}] = \begin{cases} \sigma^2 a^r\,\dfrac{1-a^{2t}}{1-a^2}, & |a| \ne 1; \\ \sigma^2\,t, & |a| = 1. \end{cases}$$

This is not second order stationary unless $\sigma = 0$, but it is asymptotically so if $|a| < 1$.

[Figure omitted: the acvs $S_\tau$ plotted against lag $\tau$.]

The AR process generalizes easily to order $p$:

$$X_t + a_1 X_{t-1} + \cdots + a_p X_{t-p} = \epsilon_t.$$

Let $z$ denote the unit delay operator. Then

$$(1 + a_1 z + \cdots + a_p z^p)\,X_t = \epsilon_t.$$

This is asymptotically stationary if the roots of the $z$ polynomial lie outside the circle of unit radius. (For the AR(1) case, $1 - a z$ has its root at $z = 1/a$, and $|a| < 1$ requires $|z| > 1$, consistent with the result above.)

An AR process is a finite linear combination of its past values and the current value of a random process. The present noise value $\epsilon_t$ is drawn into the process and hence influences the present and all future values $X_t, X_{t+1}, \ldots$. This can be shown by recursively solving the AR($p$) equation:

$$X_t = \sum_{j=0}^{\infty} \theta_j\,\epsilon_{t-j}, \quad \text{with } \theta_0 = 1.$$

This shows why the acvs for an AR process dies out gradually with lag and never reaches zero.
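To make the asymptotic behaviour concrete, here is a small simulation sketch (my addition; simulate_ar1 and the parameter values are invented for illustration). It generates a long AR(1) realization after a burn-in and compares the sample acvs with the large-$t$ limit $S_\tau = \sigma^2 a^{|\tau|}/(1-a^2)$ implied by the covariance formula above.

```python
import numpy as np

def simulate_ar1(a, sigma, n, burn=1_000, seed=1):
    """Simulate X_t = a X_{t-1} + eps_t and discard a burn-in so the
    series is effectively in its asymptotic (stationary) regime."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = a * x[t - 1] + eps[t]
    return x[burn:]

a, sigma = 0.8, 1.0
x = simulate_ar1(a, sigma, n=50_000)
d = x - x.mean()
S_hat = np.array([np.dot(d[: len(d) - k], d[k:]) / len(d) for k in range(11)])
S_theory = sigma**2 * a ** np.arange(11) / (1 - a**2)
print(np.round(S_hat, 2))     # decays geometrically with lag
print(np.round(S_theory, 2))  # sigma^2 a^tau / (1 - a^2), never exactly zero
```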
2.5 Moving Average Process

An MA process is a linear combination of present and past values of a noise process, with a finite extent:

$$X_t = b_0\,\epsilon_t + b_1\,\epsilon_{t-1} + \cdots + b_p\,\epsilon_{t-p}.$$

A given noise term $\epsilon_t$ influences only $p$ future values of $X$, and hence the acvs for an MA process vanishes beyond the finite lag $p$:

$$\mathrm{cov}[X_t, X_{t+\tau}] = \sum_{j=0}^{p} \sum_{k=0}^{p} b_j b_k\,E[\epsilon_{t-j}\,\epsilon_{t+\tau-k}] = \sigma^2 \sum_{j=0}^{p-\tau} b_j\,b_{j+\tau},$$

where $\mathrm{var}[\epsilon_t] = \sigma^2$. Since $\mathrm{cov}[X_t, X_{t-\tau}] = \mathrm{cov}[X_t, X_{t+\tau}]$, an MA process is stationary with acvs

$$S_\tau = \begin{cases} \sigma^2 \sum_{j=0}^{p-|\tau|} b_j\,b_{j+|\tau|}, & |\tau| \le p; \\ 0, & |\tau| > p. \end{cases} \tag{2.1}$$

There are no restrictions on the size of the $b_j$. It can be shown that an AR($p$) process is equivalent to an infinite order MA process, and vice versa. Mixed AR + MA processes, called ARMA processes, also exist.

Spectral estimators exist which are based on AR, MA and ARMA models. These are called parametric estimators because their result depends on the chosen model, i.e. AR of order $p$, etc. AR models are also called maximum entropy models. None of these work satisfactorily with geophysical data except in pathological cases. The problem is that no test exists to determine which model, or what order, is appropriate. Failure to use the correct model and order gives wildly wrong answers, as figure 2.1 shows.

Note: the MATLAB online help states that the parametric methods give better results for spectrum estimation. That claim rests on the example shown there, which is itself generated by an AR model; naturally the parametric methods do better in that case than the non-parametric ones. More nonsense has been written about the superior resolving power of AR or MEM than about anything else in geophysics (see any issue of JGR in the 1970's). As an example, see figure 2.1.

Figure 2.1 (omitted): Examples of AR (top), MA (middle) and ARMA (bottom) processes (solid lines) and their approximation by AR, MA and ARMA models. Note how, without any knowledge of the underlying process, these parametric methods fail to recover the real spectrum.

2.6 Ergodic Property

Estimation of the mean or acvs using observations from a single realization is based on replacing ensemble averages with time averages. Estimates which "converge" under this interchange are called ergodic. The ergodic assumption is typically applied without justification in all of spectral analysis.
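To illustrate the cutoff in (2.1), one last sketch (my addition; the MA(2) coefficients are arbitrary choices): simulate the process and compare the sample acvs against the theoretical one, which vanishes identically beyond lag $p = 2$. Like the earlier checks, this quietly invokes the ergodic property: a time average over a single realization stands in for the ensemble average.

```python
import numpy as np

b = np.array([1.0, 0.5, -0.3])   # MA(2) coefficients b_0, b_1, b_2 (p = 2)
sigma = 1.0

rng = np.random.default_rng(2)
eps = rng.normal(0.0, sigma, size=100_000 + len(b) - 1)
x = np.convolve(eps, b, mode="valid")    # X_t = sum_j b_j eps_{t-j}

d = x - x.mean()
S_hat = np.array([np.dot(d[: len(d) - k], d[k:]) / len(d) for k in range(6)])

# Theoretical acvs from (2.1): sigma^2 sum_j b_j b_{j+tau} for tau <= p, else 0
S_theory = np.array([sigma**2 * np.dot(b[: len(b) - k], b[k:]) if k < len(b)
                     else 0.0 for k in range(6)])
print(np.round(S_hat, 3))     # ~[1.34, 0.35, -0.3, 0, 0, 0]
print(np.round(S_theory, 3))
```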
