(1) Introduction to Time Series Analysis Lecture 8.
1. Review: Linear prediction, projection in Hilbert space.
2. Forecasting and backcasting.
3. Prediction operator.
4. Partial autocorrelation function.
(2) Linear prediction
Given $X_1, X_2, \dots, X_n$, the best linear predictor
$$X^n_{n+m} = \alpha_0 + \sum_{i=1}^n \alpha_i X_i$$
of $X_{n+m}$ satisfies the prediction equations
$$E\left(X_{n+m} - X^n_{n+m}\right) = 0, \qquad E\left[\left(X_{n+m} - X^n_{n+m}\right)X_i\right] = 0 \quad \text{for } i = 1, \dots, n.$$
(3) Projection theorem
If $H$ is a Hilbert space, $M$ is a closed subspace of $H$, and $y \in H$, then there is a point $Py \in M$ (the projection of $y$ on $M$) satisfying
1. $\|Py - y\| \le \|w - y\|$ for all $w \in M$,
2. $\langle y - Py, w\rangle = 0$ for all $w \in M$.
[Figure: $y$, its projection $Py$, and the orthogonal residual $y - Py$.]
(4) Projection theorem for linear forecasting
Given $1, X_1, X_2, \dots, X_n \in H = \{\text{r.v.s } X : EX^2 < \infty\}$, choose $\alpha_0, \alpha_1, \dots, \alpha_n \in \mathbb{R}$ so that $Z = \alpha_0 + \sum_{i=1}^n \alpha_i X_i$ minimizes $E(X_{n+m} - Z)^2$.
Here, $\langle X, Y\rangle = E(XY)$, $M = \{Z = \alpha_0 + \sum_{i=1}^n \alpha_i X_i : \alpha_i \in \mathbb{R}\} = \bar{\mathrm{sp}}\{1, X_1, \dots, X_n\}$, and $y = X_{n+m}$.
(5) Projection theorem: Linear prediction
Let $X^n_{n+m}$ denote the best linear predictor:
$$\|X^n_{n+m} - X_{n+m}\|^2 \le \|Z - X_{n+m}\|^2 \quad \text{for all } Z \in M.$$
The projection theorem implies the orthogonality conditions
$$\langle X^n_{n+m} - X_{n+m}, Z\rangle = 0 \quad \text{for all } Z \in M$$
$$\Leftrightarrow \langle X^n_{n+m} - X_{n+m}, Z\rangle = 0 \quad \text{for all } Z \in \{1, X_1, \dots, X_n\}$$
$$\Leftrightarrow E\left(X^n_{n+m} - X_{n+m}\right) = 0 \ \text{ and } \ E\left[\left(X^n_{n+m} - X_{n+m}\right)X_i\right] = 0 \quad \text{for } i = 1, \dots, n.$$
(6) Linear prediction
That is, the prediction errors $X^n_{n+m} - X_{n+m}$ are orthogonal to the prediction variables $1, X_1, \dots, X_n$.
(7) One-step-ahead linear prediction
Write $X^n_{n+1} = \phi_{n1}X_n + \phi_{n2}X_{n-1} + \cdots + \phi_{nn}X_1$.
Prediction equations: $E\left[\left(X^n_{n+1} - X_{n+1}\right)X_i\right] = 0$ for $i = 1, \dots, n$
$$\Leftrightarrow \sum_{j=1}^n \phi_{nj}E(X_{n+1-j}X_i) = E(X_{n+1}X_i) \Leftrightarrow \sum_{j=1}^n \phi_{nj}\gamma(i-j) = \gamma(i).$$
(8) One-step-ahead linear prediction
Prediction equations: $\Gamma_n \phi_n = \gamma_n$, where
$$\Gamma_n = \begin{pmatrix} \gamma(0) & \gamma(1) & \cdots & \gamma(n-1)\\ \gamma(1) & \gamma(0) & \cdots & \gamma(n-2)\\ \vdots & \vdots & \ddots & \vdots\\ \gamma(n-1) & \gamma(n-2) & \cdots & \gamma(0)\end{pmatrix},$$
$$\phi_n = (\phi_{n1}, \phi_{n2}, \dots, \phi_{nn})', \qquad \gamma_n = (\gamma(1), \gamma(2), \dots, \gamma(n))'.$$
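As a concrete illustration (not from the lecture), the prediction equations can be solved numerically. The sketch below assumes AR(1) autocovariances with $\phi = 0.6$ and $\sigma^2 = 1$, so all the weight should land on $X_n$:

```python
import numpy as np

# Hypothetical AR(1) autocovariances: gamma(h) = sigma^2 phi^|h| / (1 - phi^2),
# with phi = 0.6 and sigma^2 = 1 chosen for illustration.
phi, n = 0.6, 5
gamma = phi ** np.arange(n + 1) / (1 - phi**2)   # gamma(0), ..., gamma(n)

# Gamma_n has (i, j) entry gamma(|i - j|); gamma_n = (gamma(1), ..., gamma(n))'.
idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
phi_n = np.linalg.solve(gamma[idx], gamma[1:n + 1])

print(np.round(phi_n, 6))   # ~ (0.6, 0, 0, 0, 0): only X_n gets weight for an AR(1)
```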
(9) Mean squared error of one-step-ahead linear prediction
$$P^n_{n+1} = E\left(X_{n+1} - X^n_{n+1}\right)^2 = E\left[\left(X_{n+1} - X^n_{n+1}\right)\left(X_{n+1} - X^n_{n+1}\right)\right]$$
$$= E\left[X_{n+1}\left(X_{n+1} - X^n_{n+1}\right)\right] \quad \text{(the error is orthogonal to } X^n_{n+1} \in M\text{)}$$
$$= \gamma(0) - E\left(\phi_n' X\, X_{n+1}\right) = \gamma(0) - \gamma_n' \Gamma_n^{-1} \gamma_n,$$
where $X = (X_n, X_{n-1}, \dots, X_1)'$.
(10) Mean squared error of one-step-ahead linear prediction
Variance is reduced:
$$P^n_{n+1} = E\left(X_{n+1} - X^n_{n+1}\right)^2 = \gamma(0) - \gamma_n' \Gamma_n^{-1} \gamma_n$$
$$= \operatorname{Var}(X_{n+1}) - \operatorname{Cov}(X_{n+1}, X)\operatorname{Cov}(X, X)^{-1}\operatorname{Cov}(X, X_{n+1})$$
$$= E\left(X_{n+1} - 0\right)^2 - \operatorname{Cov}(X_{n+1}, X)\operatorname{Cov}(X, X)^{-1}\operatorname{Cov}(X, X_{n+1}),$$
where $X = (X_n, X_{n-1}, \dots, X_1)'$. That is, conditioning on $X$ reduces the MSE below that of the trivial predictor $0$.
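A minimal numerical check of this formula, again assuming the hypothetical AR(1) autocovariances above: the one-step MSE should collapse to the innovation variance $\sigma^2$.

```python
import numpy as np

# Same hypothetical AR(1) (phi = 0.6, sigma^2 = 1).
phi, sigma2, n = 0.6, 1.0, 5
gamma = sigma2 * phi ** np.arange(n + 1) / (1 - phi**2)

idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
gamma_n = gamma[1:n + 1]
P = gamma[0] - gamma_n @ np.linalg.solve(gamma[idx], gamma_n)

print(P)   # ~ 1.0: all that remains is the innovation variance
```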
(11) Introduction to Time Series Analysis Lecture 8.
1. Review: Linear prediction, projection in Hilbert space.
2. Forecasting and backcasting.
3. Prediction operator.
4. Partial autocorrelation function.
(12) Backcasting: Predicting m steps in the past
Given $X_1, \dots, X_n$, we wish to predict $X_{1-m}$ for $m > 0$. That is, we choose $Z \in M = \bar{\mathrm{sp}}\{X_1, \dots, X_n\}$ to minimize $\|Z - X_{1-m}\|^2$. The prediction equations are
$$\langle X^n_{1-m} - X_{1-m}, Z\rangle = 0 \ \text{ for all } Z \in M \Leftrightarrow E\left[\left(X^n_{1-m} - X_{1-m}\right)X_i\right] = 0 \ \text{ for } i = 1, \dots, n.$$
(13) One-step backcasting
Write the least squares prediction of $X_0$ given $X_1, \dots, X_n$ as
$$X^n_0 = \phi_{n1}X_1 + \phi_{n2}X_2 + \cdots + \phi_{nn}X_n = \phi_n' X,$$
where the predictor vector is reversed: now $X = (X_1, \dots, X_n)'$.
The prediction equations are $E\left[(X^n_0 - X_0)X_i\right] = 0$ for $i = 1, \dots, n$
$$\Leftrightarrow E\left[\left(\sum_{j=1}^n \phi_{nj}X_j - X_0\right)X_i\right] = 0 \Leftrightarrow \sum_{j=1}^n \phi_{nj}\gamma(j-i) = \gamma(i).$$
(14) One-step backcasting
The prediction equations are $\Gamma_n \phi_n = \gamma_n$, exactly the same as for forecasting, but with the indices of the predictor vector reversed: $X = (X_1, \dots, X_n)'$.
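A sketch of this reversal, assuming MA(1) autocovariances with $\theta = 0.5$ (values chosen for illustration): one solve of $\Gamma_n\phi_n = \gamma_n$ serves both directions.

```python
import numpy as np

# Hypothetical MA(1) with theta = 0.5, sigma^2 = 1: gamma(0) = 1 + theta^2,
# gamma(1) = theta, gamma(h) = 0 otherwise.
theta, n = 0.5, 4
gamma = np.zeros(n + 1)
gamma[0], gamma[1] = 1 + theta**2, theta

idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
coef = np.linalg.solve(gamma[idx], gamma[1:n + 1])

# The same coefficients serve both directions:
# forecast X^n_{n+1} = sum_j coef[j] X_{n-j}, backcast X^n_0 = sum_j coef[j] X_{1+j}.
print(np.round(coef, 4))
```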
(15) Example: Forecasting AR(1)
AR(1) model: $X_t = \phi_1 X_{t-1} + W_t$.
Linear prediction of $X_2$: $X^1_2 = \phi_{11}X_1$.
Prediction equation: $\gamma(0)\phi_{11} = \gamma(1) = \operatorname{Cov}(X_1, X_2) = \phi_1\gamma(0)$, so $\phi_{11} = \phi_1$ and $X^1_2 = \phi_1 X_1$.
(16) Example: Backcasting AR(1)
AR(1) model: $X_t = \phi_1 X_{t-1} + W_t$.
Linear prediction of $X_0$: $X^1_0 = \phi_{11}X_1$.
Prediction equation: $\gamma(0)\phi_{11} = \gamma(1) = \operatorname{Cov}(X_0, X_1) = \phi_1\gamma(0)$, so $X^1_0 = \phi_1 X_1$: the backcast has the same form as the forecast.
(17) Introduction to Time Series Analysis Lecture 8.
1. Review: Linear prediction, projection in Hilbert space.
2. Forecasting and backcasting.
3. Prediction operator.
4. Partial autocorrelation function.
(18) The prediction operator
For random variables $Y, Z_1, \dots, Z_n$, define the best linear prediction of $Y$ given $Z = (Z_1, \dots, Z_n)'$ as the operator $P(\cdot|Z)$ applied to $Y$:
$$P(Y|Z) = \mu_Y + \phi'(Z - \mu_Z), \quad \text{with } \Gamma\phi = \gamma,$$
where $\Gamma = \operatorname{Cov}(Z, Z)$ and $\gamma = \operatorname{Cov}(Y, Z)$.
(19) Properties of the prediction operator
1. $E(Y - P(Y|Z)) = 0$ and $E[(Y - P(Y|Z))Z] = 0$.
2. $E[(Y - P(Y|Z))^2] = \operatorname{Var}(Y) - \phi'\gamma$.
3. $P(\alpha_1 Y_1 + \alpha_2 Y_2 + \alpha_0|Z) = \alpha_0 + \alpha_1 P(Y_1|Z) + \alpha_2 P(Y_2|Z)$.
4. $P(Z_i|Z) = Z_i$.
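A minimal sketch of the operator, assuming known means and covariances; the function and argument names are ours, not the lecture's. It also checks property 4 on a made-up covariance matrix.

```python
import numpy as np

def predict(mu_y, mu_z, gamma_yz, Gamma_zz, z):
    """P(Y|Z) = mu_Y + phi'(Z - mu_Z), with Gamma phi = gamma.
    Function and argument names are illustrative only."""
    phi = np.linalg.solve(Gamma_zz, gamma_yz)
    return mu_y + phi @ (np.asarray(z) - mu_z)

# Check property 4, P(Z_i|Z) = Z_i: for Y = Z_1 (zero means),
# gamma is the first row of Gamma, so phi = (1, 0).
Gamma = np.array([[2.0, 0.5], [0.5, 1.0]])
z = np.array([1.3, -0.7])
print(predict(0.0, np.zeros(2), Gamma[0], Gamma, z))   # 1.3 = z[0]
```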
(20) Example: predicting m steps ahead
Write $X^n_{n+m} = \phi^{(m)}_{n1}X_n + \phi^{(m)}_{n2}X_{n-1} + \cdots + \phi^{(m)}_{nn}X_1$. Then
$$\Gamma_n \phi^{(m)}_n = \gamma^{(m)}_n, \quad \text{with } \Gamma_n = \operatorname{Cov}(X, X), \quad \gamma^{(m)}_n = \operatorname{Cov}(X_{n+m}, X) = (\gamma(m), \gamma(m+1), \dots, \gamma(m+n-1))'.$$
Also, $E\left[\left(X_{n+m} - X^n_{n+m}\right)^2\right] = \gamma(0) - \phi^{(m)\prime}_n \gamma^{(m)}_n$.
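A numerical sketch for the hypothetical AR(1) again: the $m$-step coefficients should be $(\phi^m, 0, \dots, 0)$, and the MSE should match the AR(1) closed form $\sigma^2(1 - \phi^{2m})/(1 - \phi^2)$.

```python
import numpy as np

# Hypothetical AR(1) (phi = 0.6, sigma^2 = 1), predicting m = 3 steps ahead.
phi, sigma2, n, m = 0.6, 1.0, 5, 3
gamma = sigma2 * phi ** np.arange(n + m) / (1 - phi**2)   # gamma(0), ..., gamma(n+m-1)

idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
gamma_m = gamma[m:m + n]                                  # (gamma(m), ..., gamma(m+n-1))'
coef = np.linalg.solve(gamma[idx], gamma_m)

print(np.round(coef, 6))             # ~ (0.216, 0, 0, 0, 0), i.e. X^n_{n+3} = phi^3 X_n
print(gamma[0] - coef @ gamma_m)     # ~ 1.4896 = (1 - 0.6^6) / (1 - 0.36)
```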
(21) Introduction to Time Series Analysis Lecture 8.
1. Review: Linear prediction, projection in Hilbert space.
2. Forecasting and backcasting.
3. Prediction operator.
4. Partial autocorrelation function.
(22) Partial autocovariance function
AR(1) model: $X_t = \phi_1 X_{t-1} + W_t$.
$$\gamma(1) = \operatorname{Cov}(X_0, X_1) = \phi_1\gamma(0),$$
$$\gamma(2) = \operatorname{Cov}(X_0, X_2) = \operatorname{Cov}(X_0, \phi_1 X_1 + W_2) = \operatorname{Cov}(X_0, \phi_1^2 X_0 + \phi_1 W_1 + W_2) = \phi_1^2\gamma(0).$$
The lag-2 autocovariance is nonzero, even though $X_2$ depends on $X_0$ only through $X_1$.
(23) Partial autocovariance function
For the AR(1) model, $X^1_2 = \phi_1 X_1$ and $X^1_0 = \phi_1 X_1$, so
$$\operatorname{Cov}(X^1_2 - X_2, X^1_0 - X_0) = \operatorname{Cov}(\phi_1 X_1 - X_2, \phi_1 X_1 - X_0) = \operatorname{Cov}(-W_2, \phi_1 X_1 - X_0) = 0.$$
Once the linear effect of $X_1$ is removed, no correlation between $X_0$ and $X_2$ remains.
(24) Partial autocorrelation function
The partial autocorrelation function (PACF) of a stationary time series $\{X_t\}$ is
$$\phi_{11} = \operatorname{Corr}(X_1, X_0) = \rho(1), \qquad \phi_{hh} = \operatorname{Corr}\left(X_h - X^{h-1}_h,\ X_0 - X^{h-1}_0\right) \quad \text{for } h = 2, 3, \dots$$
This removes the linear effects of $X_1, \dots, X_{h-1}$:
$$\dots, X_{-1}, X_0, \underbrace{X_1, X_2, \dots, X_{h-1}}_{\text{linear effects removed}}, X_h, \dots$$
(25) Partial autocorrelation function
The PACF $\phi_{hh}$ is also the last coefficient in the best linear prediction of $X_{h+1}$ given $X_1, \dots, X_h$:
$$\Gamma_h \phi_h = \gamma_h, \qquad X^h_{h+1} = \phi_h' X,$$
and $\phi_{hh}$ is the last component of $\phi_h$.
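This characterization translates directly into code. A sketch (the helper name is ours), assuming the autocovariance sequence is known; for an AR(1) with $\phi = 0.6$ the PACF should be $0.6$ at lag 1 and zero afterwards.

```python
import numpy as np

def pacf_from_acvf(gamma, hmax):
    """phi_hh = last coefficient of the solution of Gamma_h phi_h = gamma_h.
    A direct transcription of the characterization above."""
    pacf = []
    for h in range(1, hmax + 1):
        idx = np.abs(np.arange(h)[:, None] - np.arange(h)[None, :])
        pacf.append(np.linalg.solve(gamma[idx], gamma[1:h + 1])[-1])
    return np.array(pacf)

# Hypothetical AR(1) autocovariances with phi = 0.6.
gamma = 0.6 ** np.arange(11) / (1 - 0.6**2)
print(np.round(pacf_from_acvf(gamma, 5), 6))   # ~ (0.6, 0, 0, 0, 0)
```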
(26) Example: Forecasting an AR(p)
For $X_t = \sum_{i=1}^p \phi_i X_{t-i} + W_t$ and $n \ge p$,
$$X^n_{n+1} = P(X_{n+1} \mid X_1, \dots, X_n) = P\left(\sum_{i=1}^p \phi_i X_{n+1-i} + W_{n+1} \,\Big|\, X_1, \dots, X_n\right)$$
$$= \sum_{i=1}^p \phi_i P(X_{n+1-i} \mid X_1, \dots, X_n) = \sum_{i=1}^p \phi_i X_{n+1-i},$$
using linearity of the prediction operator and the fact that $W_{n+1}$ is uncorrelated with $X_1, \dots, X_n$.
(27) Example: PACF of an AR(p)
For $X_t = \sum_{i=1}^p \phi_i X_{t-i} + W_t$ and $h \ge p$,
$$X^h_{h+1} = \sum_{i=1}^p \phi_i X_{h+1-i},$$
so the last coefficient (the weight on $X_1$) is
$$\phi_{hh} = \begin{cases} \phi_p & \text{if } h = p, \\ 0 & \text{if } h > p. \end{cases}$$
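A numerical check on an AR(2) with $(\phi_1, \phi_2) = (0.5, 0.3)$, values chosen for illustration: the lag-2 partial autocorrelation should equal $\phi_2$, and higher lags should vanish. Autocorrelations suffice here, since the scaling by $\gamma(0)$ cancels in the prediction equations.

```python
import numpy as np

# Autocorrelations of a hypothetical AR(2) from the Yule-Walker recursion.
phi1, phi2, hmax = 0.5, 0.3, 6
rho = np.empty(hmax + 1)
rho[0], rho[1] = 1.0, phi1 / (1 - phi2)
for h in range(2, hmax + 1):
    rho[h] = phi1 * rho[h - 1] + phi2 * rho[h - 2]

pacf = []
for h in range(1, hmax + 1):
    idx = np.abs(np.arange(h)[:, None] - np.arange(h)[None, :])
    pacf.append(np.linalg.solve(rho[idx], rho[1:h + 1])[-1])
print(np.round(pacf, 4))   # lag 2 gives 0.3 = phi2; lags 3..6 give 0
```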
(28) Example: PACF of an invertible MA(q)
For $X_t = \sum_{i=1}^q \theta_i W_{t-i} + W_t$, invertibility gives $X_t = -\sum_{i=1}^\infty \pi_i X_{t-i} + W_t$, so
$$X^n_{n+1} = P(X_{n+1} \mid X_1, \dots, X_n) = P\left(-\sum_{i=1}^\infty \pi_i X_{n+1-i} + W_{n+1} \,\Big|\, X_1, \dots, X_n\right)$$
$$= -\sum_{i=1}^\infty \pi_i P(X_{n+1-i} \mid X_1, \dots, X_n) = -\sum_{i=1}^n \pi_i X_{n+1-i} - \sum_{i=n+1}^\infty \pi_i P(X_{n+1-i} \mid X_1, \dots, X_n).$$
Every one of $X_1, \dots, X_n$ receives a nonzero weight, so the PACF of an MA(q) is never exactly zero; it decays with the lag.
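A numerical sketch of that decay for a hypothetical MA(1) with $\theta = 0.8$:

```python
import numpy as np

# MA(1) autocovariances with theta = 0.8, sigma^2 = 1.
theta, hmax = 0.8, 8
gamma = np.zeros(hmax + 1)
gamma[0], gamma[1] = 1 + theta**2, theta

pacf = []
for h in range(1, hmax + 1):
    idx = np.abs(np.arange(h)[:, None] - np.arange(h)[None, :])
    pacf.append(np.linalg.solve(gamma[idx], gamma[1:h + 1])[-1])
print(np.round(pacf, 4))   # signs alternate, magnitudes shrink with h, never exactly 0
```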
(29) ACF of the MA(1) process
[Figure: ACF of an MA(1); the ACF equals $\theta/(1+\theta^2)$ at lag 1 and is zero for $|h| > 1$.]
(30) ACF of the AR(1) process
[Figure: ACF of an AR(1); the ACF decays as $\phi^{|h|}$.]
(31) PACF of the MA(1) process
[Figure: PACF of an MA(1); the PACF decays with the lag.]
(32) PACF of the AR(1) process
[Figure: PACF of an AR(1); the PACF is zero for $h > 1$.]
(33) PACF and ACF
Model    ACF               PACF
AR(p)    decays            zero for h > p
MA(q)    zero for h > q    decays
(34) Sample PACF
For a realization $x_1, \dots, x_n$ of a time series, the sample PACF is defined by
$$\hat\phi_{00} = 1, \qquad \hat\phi_{hh} = \text{last component of } \hat\phi_h,$$
where $\hat\phi_h$ solves $\hat\Gamma_h \hat\phi_h = \hat\gamma_h$, with sample autocovariances in place of $\gamma$.
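A sketch of the sample PACF from data (helper names are ours), assuming the usual divisor-$n$ sample autocovariances; on a simulated AR(1) the sample PACF should be near $0.6$ at lag 1 and near zero afterwards.

```python
import numpy as np

def sample_acvf(x, hmax):
    """Sample autocovariances gamma_hat(0), ..., gamma_hat(hmax), divisor n."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([x[:n - h] @ x[h:] / n for h in range(hmax + 1)])

def sample_pacf(x, hmax):
    """phi_hat_hh = last component of the solution of Gamma_hat_h phi_hat_h = gamma_hat_h."""
    g = sample_acvf(x, hmax)
    pacf = []
    for h in range(1, hmax + 1):
        idx = np.abs(np.arange(h)[:, None] - np.arange(h)[None, :])
        pacf.append(np.linalg.solve(g[idx], g[1:h + 1])[-1])
    return np.array(pacf)

# Hypothetical demo: simulate an AR(1) with phi = 0.6 and inspect the sample PACF.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, len(x)):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
print(np.round(sample_pacf(x, 5), 3))   # ~ (0.6, 0, 0, 0, 0) up to sampling error
```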
(35) Introduction to Time Series Analysis Lecture 8.
1. Review: Linear prediction, projection in Hilbert space.
2. Forecasting and backcasting.
3. Prediction operator.
4. Partial autocorrelation function.