Econometric theory and methods, Russell Davidson - Chapter 14

Chapter 14
Unit Roots and Cointegration

14.1 Introduction

In this chapter, we turn our attention to models for a particular type of nonstationary time series. For present purposes, the usual definition of covariance stationarity is too strict. We consider instead an asymptotic version, which requires only that, as t → ∞, the first and second moments tend to fixed stationary values, and the covariances of the elements y_t and y_s tend to stationary values that depend only on |t − s|. Such a series is said to be integrated to order zero, or I(0), for a reason that will be clear in a moment. A nonstationary time series is said to be integrated to order one, or I(1),¹ if the series of its first differences, ∆y_t ≡ y_t − y_{t−1}, is I(0). More generally, a series is integrated to order d, or I(d), if it must be differenced d times before an I(0) series results.

A series is I(1) if it contains what is called a unit root, a concept that we will elucidate in the next section. As we will see there, using standard regression methods with variables that are I(1) can yield highly misleading results. It is therefore important to be able to test the hypothesis that a time series has a unit root. In Sections 14.3 and 14.4, we discuss a number of ways of doing so. Section 14.5 introduces the concept of cointegration, a phenomenon whereby two or more series with unit roots may be related, and discusses estimation in this context. Section 14.6 then discusses three ways of testing for the presence of cointegration.

14.2 Random Walks and Unit Roots

The asymptotic results we have developed so far depend on various regularity conditions that are violated if nonstationary time series are included in the set of variables in a model.
In such cases, specialized econometric methods must be employed that are strikingly different from those we have studied so far.

¹ In the literature, such series are usually described as being integrated of order one, but this usage strikes us as being needlessly ungrammatical.

Copyright © 1999, Russell Davidson and James G. MacKinnon

The fundamental building block for many of these methods is the standardized random walk process, which is defined as follows in terms of a unit-variance white-noise process ε_t:

    w_t = w_{t−1} + ε_t,   w_0 = 0,   ε_t ∼ IID(0, 1).   (14.01)

Equation (14.01) is a recursion that can easily be solved to give

    w_t = Σ_{s=1}^n ε_s, with the sum running from s = 1 to t:   w_t = Σ_{s=1}^t ε_s.   (14.02)

It follows from (14.02) that the unconditional expectation E(w_t) = 0 for all t. In addition, w_t satisfies the martingale property that E(w_t | Ω_{t−1}) = w_{t−1} for all t, where as usual the information set Ω_{t−1} contains all information that is available at time t − 1, including in particular w_{t−1}. The martingale property often makes economic sense, especially in the study of financial markets. We use the notation w_t here partly because "w" is the first letter of "walk" and partly because a random walk is the discrete-time analog of a continuous-time stochastic process called a Wiener process, which plays a very important role in the asymptotic theory of nonstationary time series.

The clearest way to see that w_t is nonstationary is to compute Var(w_t). Since ε_t is white noise, we see directly that Var(w_t) = t. Not only does this variance depend on t, thus violating the stationarity condition, but, in addition, it actually tends to infinity as t → ∞, so that w_t cannot be I(0).

Although the standardized random walk process (14.01) is very simple, more realistic models are closely related to it. In practice, for example, an economic time series is unlikely to have variance 1.
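As a quick numerical check, the recursion (14.01) and its solution (14.02) are easy to simulate. The sketch below (NumPy, with an arbitrary seed and replication count) verifies that the sample variance of w_t across replications tracks Var(w_t) = t:

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs, reps = 200, 20000

# Draw unit-variance white noise and cumulate it: w_t = sum_{s=1}^t eps_s,
# which solves the recursion w_t = w_{t-1} + eps_t with w_0 = 0.
eps = rng.standard_normal((reps, n_obs))
w = np.cumsum(eps, axis=1)

# Across replications, the sample variance of w_t should be close to t,
# so the process cannot be I(0): its variance grows without bound.
var_50 = w[:, 49].var()    # theoretical Var(w_50) = 50
var_200 = w[:, 199].var()  # theoretical Var(w_200) = 200
print(var_50, var_200)
```

With 20,000 replications, both sample variances come out within a few percent of their theoretical values.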
Thus the very simplest nonstationary time-series process for data that we might actually observe is the random walk process

    y_t = y_{t−1} + e_t,   y_0 = 0,   e_t ∼ IID(0, σ²),   (14.03)

where e_t is still white noise, but with arbitrary variance σ². This process, which is often simply referred to as a random walk, can be based on the process (14.01) using the equation y_t = σw_t. If we wish to relax the assumption that y_0 = 0, we can subtract y_0 from both sides of the equation so as to obtain the relationship y_t − y_0 = y_{t−1} − y_0 + e_t. The equation y_t = y_0 + σw_t then relates y_t to a series w_t generated by the standardized random walk process (14.01).

The next obvious generalization is to add a constant term. If we do so, we obtain the model

    y_t = γ_1 + y_{t−1} + e_t.   (14.04)

This model is often called a random walk with drift, and the constant term is called a drift parameter. To understand this terminology, subtract y_0 + γ_1 t from both sides of (14.04). This yields

    y_t − y_0 − γ_1 t = γ_1 + y_{t−1} − y_0 − γ_1 t + e_t = y_{t−1} − y_0 − γ_1(t − 1) + e_t,

and it follows that y_t can be generated by the equation y_t = y_0 + γ_1 t + σw_t. The trend term γ_1 t is the drift in this process.

It is clear that, if we take first differences of the y_t generated by a process like (14.03) or (14.04), we obtain a time series that is I(0). In the latter case, for example, ∆y_t ≡ y_t − y_{t−1} = γ_1 + e_t. Thus we see that y_t is integrated to order one, or I(1). This property is the result of the fact that y_t has a unit root. The term "unit root" comes from the fact that the random walk process (14.03) can be expressed as

    (1 − L)y_t = e_t,   (14.05)

where L denotes the lag operator.
As we saw in Sections 7.6 and 13.2, an autoregressive process u_t always satisfies an equation of the form

    (1 − ρ(L))u_t = e_t,   (14.06)

where ρ(L) is a polynomial in the lag operator L with no constant term, and e_t is white noise. The process (14.06) is stationary if and only if all the roots of the polynomial equation 1 − ρ(z) = 0 lie strictly outside the unit circle in the complex plane, that is, are greater than 1 in absolute value. A root that is equal to 1 is called a unit root. Any series that has precisely one such root, with all other roots outside the unit circle, is an I(1) process, as readers are asked to check in Exercise 14.2. A random walk process like (14.05) is a particularly simple example of an AR process with a unit root. A slightly more complicated example is

    y_t = (1 + ρ_2)y_{t−1} − ρ_2 y_{t−2} + u_t,   |ρ_2| < 1,

which is an AR(2) process with only one free parameter. In this case, the polynomial in the lag operator is

    1 − (1 + ρ_2)L + ρ_2 L² = (1 − L)(1 − ρ_2 L),

and its roots are 1 and 1/ρ_2 > 1.

Same-Order Notation

Before we can discuss models in which one or more of the regressors has a unit root, it is necessary to introduce the concept of the same-order relation and its associated notation. Almost all of the quantities that we encounter in econometrics depend on the sample size. In many cases, when we are using asymptotic theory, the only thing about these quantities that concerns us is the rate at which they change as the sample size changes. The same-order relation provides a very convenient way to deal with such cases.

To begin with, let us suppose that f(n) is a real-valued function of the positive integer n, and p is a rational number. Then we say that f(n) is of the same order as n^p if there exists a constant K, independent of n, and a positive integer N such that

    |f(n)/n^p| < K   for all n > N.
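Returning for a moment to the AR(2) example above, its lag-polynomial factorization can be confirmed numerically. A small sketch (NumPy, with ρ_2 = 0.5 chosen arbitrarily) recovers the roots 1 and 1/ρ_2:

```python
import numpy as np

rho2 = 0.5
# The lag polynomial 1 - (1 + rho2)L + rho2 L^2 factors as (1 - L)(1 - rho2 L).
# np.roots expects coefficients ordered from the highest power down to the
# constant, i.e. [rho2, -(1 + rho2), 1].
roots = np.sort(np.roots([rho2, -(1.0 + rho2), 1.0]))
print(roots)  # one unit root; the other root, 1/rho2, lies outside the unit circle
```

With ρ_2 = 0.5 the two roots are 1 and 2, so the process has exactly one unit root and is I(1).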
When f(n) is of the same order as n^p, we can write f(n) = O(n^p). Of course, this equation does not express an equality in the usual sense. But, as we will see in a moment, this "big O" notation is often very convenient.

The definition we have just given is appropriate only if f(n) is a deterministic function. However, in most econometric applications, some or all of the quantities with which we are concerned are stochastic rather than deterministic. To deal with such quantities, we need to make use of the stochastic same-order relation. Let {a_n} be a sequence of random variables indexed by the positive integer n. Then we say that a_n is of order n^p in probability if, for all ε > 0, there exist a constant K and a positive integer N such that

    Pr(|a_n/n^p| > K) < ε   for all n > N.   (14.07)

When a_n is of order n^p in probability, we can write a_n = O_p(n^p). In most cases, it is obvious that a quantity is stochastic, and there is no harm in writing O(n^p) when we really mean O_p(n^p). The properties of the same-order relations are the same in the deterministic and stochastic cases.

The same-order relations are useful because we can manipulate them as if they were simply powers of n. Suppose, for example, that we are dealing with two functions, f(n) and g(n), which are O(n^p) and O(n^q), respectively. Then

    f(n)g(n) = O(n^p)O(n^q) = O(n^{p+q}), and
    f(n) + g(n) = O(n^p) + O(n^q) = O(n^{max(p,q)}).   (14.08)

In the first line here, we see that the order of the product of the two functions is just n raised to the sum of p and q. In the second line, we see that the order of the sum of the functions is just n raised to the maximum of p and q. Both these properties of the same-order relations are often very useful in asymptotic analysis.
Let us see how the same-order relations can be applied to a linear regression model that satisfies the standard assumptions for consistency and asymptotic normality. We start with the standard result, from equations (3.05), that

    β̂ = β_0 + (X'X)⁻¹X'u.

In Chapters 3 and 4, we made the assumption that n⁻¹X'X has a probability limit of S_{X'X}, which is a finite, positive definite, deterministic matrix; recall equations (3.17) and (4.49). It follows readily from the definition (3.15) of a probability limit that each element of the matrix n⁻¹X'X is O_p(1). Similarly, in order to apply a central limit theorem, we supposed that n^{−1/2}X'u has a probability limit which is a normally distributed random variable with expectation zero and finite variance; recall equation (4.53). This implies that n^{−1/2}X'u = O_p(1). The definition (14.07) lets us rewrite the above results as

    X'X = O_p(n)  and  X'u = O_p(n^{1/2}).   (14.09)

From equations (14.09) and the first of equations (14.08), we see that

    n^{1/2}(β̂ − β_0) = n^{1/2}(X'X)⁻¹X'u = n^{1/2}O_p(n⁻¹)O_p(n^{1/2}) = O_p(1).

This result is not at all new; in fact, it follows from equation (6.38) specialized to a linear regression. But it is clear that the O_p notation provides a simple way of seeing why we have to multiply β̂ − β_0 by n^{1/2}, rather than some other power of n, in order to find its asymptotic distribution. As this example illustrates, in the asymptotic analysis of econometric models for which all variables satisfy standard regularity conditions, p is generally −1, −1/2, 0, 1/2, or 1. For models in which some or all variables have a unit root, however, we will encounter several other values of p.

Regressors with a Unit Root

Whenever a variable with a unit root is used as a regressor in a linear regression model, the standard assumptions that we have made for asymptotic analysis are violated.
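The claim that n^{1/2}(β̂ − β_0) = O_p(1) under standard regularity conditions can be illustrated by simulation. The sketch below (NumPy; an IID standard normal regressor and a true slope of 1, both chosen for the example, with no constant term) shows that the spread of n^{1/2}(β̂ − β_0) stays stable as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
beta0 = 1.0

def scaled_dev(n, reps=5000):
    # n^{1/2}(beta_hat - beta0) for OLS in y = beta0*x + u, with IID x and u
    x = rng.standard_normal((reps, n))
    u = rng.standard_normal((reps, n))
    bhat = np.sum(x * (beta0 * x + u), axis=1) / np.sum(x * x, axis=1)
    return np.sqrt(n) * (bhat - beta0)

# The standard deviation is about the same whether n = 100 or n = 1600:
# the scaled deviation is bounded in probability, i.e. O_p(1).
s_small, s_large = scaled_dev(100).std(), scaled_dev(1600).std()
print(s_small, s_large)
```

Both standard deviations come out close to 1 here, which is the asymptotic standard deviation for this particular design; the point is only that neither grows nor shrinks with n.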
In particular, we have assumed up to now that, for the linear regression model y = Xβ + u, the probability limit of the matrix n⁻¹X'X is the finite, positive definite matrix S_{X'X}. But this assumption is false whenever one or more of the regressors have a unit root.

To see this, consider the simplest case. Whenever w_t is one of the regressors, one element of X'X is Σ_{t=1}^n w_t², which by equation (14.02) is equal to

    Σ_{t=1}^n ( Σ_{r=1}^t Σ_{s=1}^t ε_r ε_s ).   (14.10)

The expectation of ε_r ε_s is zero for r ≠ s. Therefore, only terms with r = s contribute to the expectation of (14.10), which, since E(ε_r²) = 1, is

    Σ_{t=1}^n Σ_{r=1}^t E(ε_r²) = Σ_{t=1}^n t = (1/2)n(n + 1).   (14.11)

Here we have used a result concerning the sum of the first n positive integers that readers are asked to demonstrate in Exercise 14.3. Let w denote the n-vector with typical element w_t. Then the expectation of n⁻¹w'w is (n + 1)/2, which is evidently O(n). It is therefore impossible that n⁻¹w'w should have a finite probability limit.

This fact has extremely serious consequences for asymptotic analysis. It implies that none of the results on consistency and asymptotic normality that we have discussed up to now is applicable to models where one or more of the regressors have a unit root. All such results have been based on the assumption that the matrix n⁻¹X'X, or the analogs of this matrix for nonlinear regression models, models estimated by IV and GMM, and models estimated by maximum likelihood, tends to a finite, positive definite matrix. It is consequently very important to know whether or not an economic variable has a unit root. A few of the many techniques for answering this question will be discussed in the next section.
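The expectation in (14.11) is easy to verify by simulation. In this sketch (NumPy, with n = 100 and an arbitrary seed), the Monte Carlo average of w'w should be close to n(n + 1)/2 = 5050:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, reps = 100, 40000

# Simulate many standardized random walks and average w'w across them.
w = np.cumsum(rng.standard_normal((reps, n_obs)), axis=1)
mean_ww = (w * w).sum(axis=1).mean()   # Monte Carlo estimate of E(w'w)

# E(w'w) = sum_{t=1}^n t = n(n+1)/2, so n^{-1} w'w has expectation (n+1)/2,
# which grows with n instead of settling down to a finite probability limit.
print(mean_ww, n_obs * (n_obs + 1) / 2)
```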
In the next subsection, we investigate some of the phenomena that arise when the usual regularity conditions for linear regression models are not satisfied.

Spurious Regressions

If x_t and y_t are time series that are entirely independent of each other, we might hope that running the simple linear regression

    y_t = β_1 + β_2 x_t + v_t   (14.12)

would usually produce an insignificant estimate of β_2 and an R² near 0. However, this is so only under quite restrictive conditions on the nature of the x_t and y_t. In particular, if x_t and y_t are independent random walks, the t statistic for β_2 = 0 does not follow the Student's t or standard normal distribution, even asymptotically. Instead, its absolute value tends to become larger and larger as the sample size n increases. Ultimately, as n → ∞, it rejects the null hypothesis that β_2 = 0 with probability 1. Moreover, the R² does not converge to 0 but to a random, positive number that varies from sample to sample.

[Figure 14.1: Rejection frequencies for spurious and valid regressions. The figure plots rejection frequencies (0 to 1) against sample sizes n from 20 to 20,000 for four cases: spurious regression with a random walk, spurious regression with an AR(1) process, valid regression with a random walk, and valid regression with an AR(1) process.]

When a regression model like (14.12) appears to find relationships that do not really exist, it is called a spurious regression. We have not as yet developed the theory necessary to understand spurious regression with I(1) series. It is therefore worthwhile to illustrate the phenomenon with some computer simulations. For a large number of sample sizes between 20 and 20,000, we generated one million series of (x_t, y_t) pairs independently from the random walk model (14.03) and then ran the spurious regression (14.12). The dotted line near the top in Figure 14.1 shows the proportion of the time that the t statistic for β_2 = 0 rejected the null hypothesis at the .05 level as a function of n. This proportion is very high even for small sample sizes, and it is clearly tending to unity as n increases.
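The experiment just described can be replicated on a small scale. The sketch below (NumPy; 2,000 replications rather than one million, and the asymptotic 5% critical value 1.96) regresses one random walk on another, independent one, and shows the rejection frequency for β_2 = 0 rising with n:

```python
import numpy as np

rng = np.random.default_rng(7)

def spurious_rejection_rate(n, reps=2000):
    # Regress y on a constant and x, where x and y are INDEPENDENT random
    # walks, and count how often |t| for beta2 = 0 exceeds 1.96.
    count = 0
    for _ in range(reps):
        x = np.cumsum(rng.standard_normal(n))
        y = np.cumsum(rng.standard_normal(n))
        X = np.column_stack([np.ones(n), x])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ b
        s2 = resid @ resid / (n - 2)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        count += abs(b[1] / se) > 1.96
    return count / reps

r50, r200 = spurious_rejection_rate(50), spurious_rejection_rate(200)
print(r50, r200)  # far above 0.05, and growing with n
```

Even at n = 50 the null is rejected well over half the time, and the rate keeps climbing with the sample size, in line with the top dotted line in Figure 14.1.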
Upon reflection, it is not entirely surprising that tests based on the spurious regression model (14.12) do not yield sensible results. Under the null hypothesis that β_2 = 0, this model says that y_t is equal to a constant plus an IID error term. But in fact y_t is a random walk generated by the DGP (14.03). Thus the null hypothesis that we are testing is false, and it is very common for a test to reject a false null hypothesis, even when the alternative is also false. We saw an example of this in Section 7.9; for an advanced discussion, see Davidson and MacKinnon (1987).

It might seem that we could obtain sensible results by running the regression

    y_t = β_1 + β_2 x_t + β_3 y_{t−1} + v_t,   (14.13)

since, if we set β_1 = 0, β_2 = 0, and β_3 = 1, regression (14.13) reduces to the random walk (14.03), which is in fact the DGP for y_t in our simulations, with v_t = e_t being white noise. Thus it is a valid regression model to estimate. The lower dotted line in Figure 14.1 shows the proportion of the time that the t statistic for β_2 = 0 in regression (14.13) rejected the null hypothesis at the .05 level. Although this proportion no longer tends to unity as n increases, it clearly tends to a number substantially larger than 0.05. This overrejection is a consequence of running a regression that involves I(1) variables. Both y_t and y_{t−1} are I(1) in this case, and, as we will see in Section 14.5, this implies that the t statistic for β_2 = 0 does not have its usual asymptotic distribution, as one might suspect given that the n⁻¹X'X matrix does not have a finite plim. The results in Figure 14.1 show clearly that spurious regressions actually involve at least two different phenomena.
The first is that they involve testing false null hypotheses, and the second is that standard asymptotic results do not hold whenever at least one of the regressors is I(1), even when a model is correctly specified.

As Granger (2001) has stressed, spurious regression can occur even when all variables are stationary. To illustrate this, Figure 14.1 also shows results of a second set of simulation experiments. These are similar to the original ones, except that x_t and y_t are now generated from independent AR(1) processes with mean zero and autoregressive parameter ρ_1 = 0.8. The higher solid line shows that, even for these data, which are stationary as well as independent, running the spurious regression (14.12) results in the null hypothesis being rejected a very substantial proportion of the time. In contrast to the previous results, however, this proportion does not keep increasing with the sample size. Moreover, as we see from the lower solid line, running the valid regression (14.13) leads to approximately correct rejection frequencies, at least for larger sample sizes. Readers are invited to explore these issues further in Exercises 14.5 and 14.6.

It is of interest to see just what gives rise to spurious regression with two independent AR(1) series that are stationary. In this case, the n⁻¹X'X matrix does have a finite, deterministic, positive definite plim, and so that regularity condition at least is satisfied. However, because neither the constant nor x_t has any explanatory power for y_t in (14.12), the true error term for observation t is v_t = y_t, which is not white noise, but rather an AR(1) process. This suggests that the problem can be made to go away if we do not use the inappropriate OLS covariance matrix estimator, but instead use a HAC estimator that takes suitable account of the serial correlation of the errors.
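A small simulation along the lines of this second experiment (a NumPy sketch with ρ_1 = 0.8, a 50-observation burn-in, 2,000 replications, and the normal critical value 1.96, all choices made here for illustration) confirms that the overrejection with stationary AR(1) series is severe but, unlike the random-walk case, does not keep growing with n:

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1_series(n, rho, burn=50):
    # Stationary AR(1): y_t = rho*y_{t-1} + e_t, with a burn-in period
    e = rng.standard_normal(n + burn)
    y = np.empty(n + burn)
    y[0] = e[0]
    for t in range(1, n + burn):
        y[t] = rho * y[t - 1] + e[t]
    return y[burn:]

def ar1_rejection_rate(n, rho=0.8, reps=2000):
    # Rejection rate of the t test for beta2 = 0 in (14.12) when x and y
    # are independent stationary AR(1) series.
    count = 0
    for _ in range(reps):
        x, y = ar1_series(n, rho), ar1_series(n, rho)
        X = np.column_stack([np.ones(n), x])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ b
        s2 = resid @ resid / (n - 2)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        count += abs(b[1] / se) > 1.96
    return count / reps

r100, r400 = ar1_rejection_rate(100), ar1_rejection_rate(400)
print(r100, r400)  # both well above 0.05, but not tending to 1
```

Both rejection rates are several times the nominal 5% level, yet roughly constant across n, matching the plateau of the higher solid line in Figure 14.1.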
This is true asymptotically, but overrejection remains very significant until the sample size is of the order of several thousand; see Exercise 14.7. The use of HAC estimators is explored further in Exercises 14.8 and 14.9.

As the results in Figure 14.1 illustrate, there is a serious risk of appearing to find relationships between economic time series that are actually independent. Although the risk can be far from negligible with stationary series which exhibit substantial serial correlation, it is particularly severe with nonstationary ones. The phenomenon of spurious regressions was brought to the attention of econometricians by Granger and Newbold (1974), who used simulation methods that were very crude by today's standards. Subsequently, Phillips (1986) and Durlauf and Phillips (1988) proved a number of theoretical results about spurious regressions involving nonstationary time series. Granger (2001) provides a brief overview and survey of the literature.

14.3 Unit Root Tests

For a number of reasons, it can be important to know whether or not an economic time series has a unit root. As Figure 14.1 illustrates, the distributions of estimators and test statistics associated with I(1) regressors may well differ sharply from those associated with regressors that are I(0). Moreover, as Nelson and Plosser (1982) were among the first to point out, nonstationarity often has important economic implications. It is therefore very important to be able to detect the presence of unit roots in time series, normally by the use of what are called unit root tests. For these tests, the null hypothesis is that the time series has a unit root and the alternative is that it is I(0).

Dickey-Fuller Tests

The simplest and most widely-used tests for unit roots are variants of ones developed by Dickey and Fuller (1979). These tests are therefore referred to as Dickey-Fuller tests, or DF tests.
Consider the simplest imaginable AR(1) model,

    y_t = βy_{t−1} + σε_t,   (14.14)

where ε_t is white noise with variance 1. When β = 1, this model has a unit root and becomes a random walk process. If we subtract y_{t−1} from both sides, we obtain

    ∆y_t = (β − 1)y_{t−1} + σε_t.   (14.15)

Thus, in order to test the null hypothesis of a unit root, we can simply test the hypothesis that the coefficient of y_{t−1} in equation (14.15) is equal to 0 against the alternative that it is negative. Regression (14.15) is an example of what is sometimes called an unbalanced regression because, under the null hypothesis, the regressand is I(0) and the sole regressor is I(1). Under the alternative hypothesis, both variables are I(0), and the regression becomes balanced again.

The obvious way to test the unit root hypothesis is to use the t statistic for the hypothesis β − 1 = 0 in regression (14.15), testing against the alternative that this quantity is negative. This implies a one-tailed test. In fact, this statistic is referred to, not as a t statistic, but as a τ statistic, because, as we will see, its distribution is not the same as that of an ordinary t statistic, even asymptotically. Another possible test statistic is n times the OLS estimate of β − 1 from (14.15). This statistic is called a z statistic. Precisely why the z statistic is valid will become clear in the next subsection.

Since the z statistic is a little easier to analyze than the τ statistic, we focus on it for the moment. The z statistic from the test regression (14.15) is

    z = n Σ_{t=1}^n y_{t−1}∆y_t / Σ_{t=1}^n y_{t−1}²,

where, for ease of notation in summations, we suppose that y_0 is observed. Under the null hypothesis, the data are generated by a DGP of the form

    y_t = y_{t−1} + σε_t,   (14.16)

or, equivalently, y_t = y_0 + σw_t, where w_t is a standardized random walk defined in terms of ε_t by (14.01).
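To see that τ from (14.15) is not distributed as Student's t under the null, one can simulate driftless random walks with y_0 = 0 and examine the resulting τ statistics. In this sketch (NumPy; 4,000 replications and n = 200, both arbitrary), the distribution is visibly shifted to the left of zero, so the one-tailed N(0,1) 5% critical value of −1.645 rejects the true null too often:

```python
import numpy as np

rng = np.random.default_rng(5)

def df_z_tau(y):
    # z and tau statistics from Delta y_t = (beta - 1) y_{t-1} + e_t, cf. (14.15)
    dy, ylag = np.diff(y), y[:-1]
    bm1 = (ylag @ dy) / (ylag @ ylag)            # OLS estimate of beta - 1
    resid = dy - bm1 * ylag
    s2 = resid @ resid / (len(dy) - 1)
    tau = bm1 / np.sqrt(s2 / (ylag @ ylag))
    return len(dy) * bm1, tau                     # z statistic, tau statistic

# Under the null, y is a driftless random walk with y_0 = 0
taus = np.array([
    df_z_tau(np.concatenate([[0.0], np.cumsum(rng.standard_normal(200))]))[1]
    for _ in range(4000)
])

# tau is centered well below zero; using the normal one-tailed 5% critical
# value -1.645 therefore rejects the true unit-root null too often.
print(taus.mean(), np.mean(taus < -1.645))
```

The simulated mean of τ is clearly negative, and the rejection frequency against −1.645 comes out well above the nominal 5%, which is why the Dickey-Fuller distribution needs its own critical values.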
For such a DGP, a little algebra shows that the z statistic becomes

    z = n (σ² Σ_{t=1}^n w_{t−1}ε_t + σy_0 w_n) / (σ² Σ_{t=1}^n w_{t−1}² + 2y_0 σ Σ_{t=1}^n w_{t−1} + ny_0²).   (14.17)

Since the right-hand side of this equation depends on y_0 and σ in a nontrivial manner, the z statistic is not pivotal for the model (14.16). However, when y_0 = 0, z no longer depends on σ, and it becomes a function of the random walk w_t alone. In this special case, the distribution of z can be calculated, perhaps analytically and certainly by simulation, provided we know the distribution of the ε_t. In most cases, we do not wish to assume that y_0 = 0. Therefore, we must look further for a suitable test statistic.

Subtracting y_0 from both y_t and y_{t−1} in equation (14.14) gives ∆y_t = (1 − β)y_0 + (β − 1)y_{t−1} + σε_t. Unlike (14.15), this regression has a constant term. This suggests that we should replace (14.15) by the test regression

    ∆y_t = γ_0 + (β − 1)y_{t−1} + e_t.   (14.18)

Since y_t = y_0 + σw_t, we may write y = y_0 ι + σw, where the notation should be obvious. The z statistic from (14.18) is still n(β̂ − 1), and so, by application of the FWL theorem, it can be written under the null as

    z = n Σ_{t=1}^n (M_ι y)_{t−1}∆y_t / Σ_{t=1}^n (M_ι y)_{t−1}² = n Σ_{t=1}^n (M_ι y)_{t−1}σε_t / Σ_{t=1}^n (M_ι y)_{t−1}².   (14.19)

[...] regression (14.15) contains no deterministic regressors, (14.18) has one, (14.21) two, and (14.22) three. In the last three cases, the test regression always contains one deterministic regressor that does not appear under the null hypothesis. Dickey-Fuller tests of the null hypothesis that there is a unit root may be based on any of regressions (14.15), (14.18), (14.21), or (14.22). In practice, regressions (14.18) [...] from equations (14.45) that the denominator of the right-hand side of equation (14.
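The FWL equivalence in (14.19) can be checked directly: the z statistic from regression (14.18), which includes a constant, coincides with the ratio computed from demeaned y_{t−1}. A sketch (NumPy, with an arbitrary starting value y_0 = 5 and σ = 1):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 250
# Driftless random walk with a nonzero starting value y_0 = 5
y = np.concatenate([[5.0], 5.0 + np.cumsum(rng.standard_normal(n))])

dy, ylag = np.diff(y), y[:-1]

# z from the full test regression (14.18): Delta y on a constant and y_{t-1}
X = np.column_stack([np.ones(n), ylag])
z_full = n * np.linalg.lstsq(X, dy, rcond=None)[0][1]

# z via the FWL theorem, using y_{t-1} demeaned by M_iota, as in (14.19)
ydm = ylag - ylag.mean()
z_fwl = n * (ydm @ dy) / (ydm @ ydm)

print(z_full, z_fwl)  # the two agree up to floating-point rounding
```

Demeaning y_{t−1} plays exactly the role of the projection M_ι in (14.19), so the two computations give the same number.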
46) is

    x_{21}² Σ_{t=1}^n v_{t1}² + 2x_{21}x_{22} Σ_{t=1}^n v_{t1}v_{t2} + x_{22}² Σ_{t=1}^n v_{t2}².   (14.47)

Since Var(e_{t1}) = σ_{11}, the element in the first row and column of the covariance matrix Σ of the innovations e_{t1} and e_{t2}, we see that the random walk v_{t1} can [...]

[...] where the new parameter α is the short-run multiplier, δ_1 = λ_2 − 1, and δ_2 = (1 − λ_2)η_2. Since (14.52) is just a linear regression, the parameter of interest, which is η_2, can be estimated by η̂_2 ≡ −δ̂_2/δ̂_1, using the OLS estimates of δ_1 and δ_2. Equation (14.52) is without doubt an unbalanced regression, and [...]

[...] series, y_{t1} and y_{t2}, generated by equations (14.40), with λ_1 = 1 and |λ_2| < 1. By use of (14.41), we have

    y_{t1} = x_{11}v_{t1} + x_{12}v_{t2}  and  y_{t2} = x_{21}v_{t1} + x_{22}v_{t2},   (14.45)

where v_{t1} is a random walk, and v_{t2} is I(0). For simplicity, suppose that X_t is empty in regression (14.44), y_t = y_{t1}, and Y_{t2} has the single element y_{t2}. Then we have

    η̂_2 = Σ_{t=1}^n y_{t2}y_{t1} / Σ_{t=1}^n y_{t2}²,   (14.46)

where η̂_2 is the OLS estimator [...] model (14.16). If we wish to test the unit root hypothesis in a model where the random walk has a drift, the appropriate test regression is

    ∆y_t = γ_0 + γ_1 t + (β − 1)y_{t−1} + e_t,   (14.21)

and if we wish to test the unit root hypothesis in a model where the random walk has both a drift and a trend, the appropriate test regression is

    ∆y_t = γ_0 + γ_1 t + γ_2 t² + (β − 1)y_{t−1} + e_t;   (14.22)

see Exercise 14.10 [...] that the expression (14.24) is O(n), we can show that the middle term in (14.47) is O(n); see Exercise 14.18. In like manner, we see that the numerator of the right-hand side of (14.46) is

    x_{11}x_{21} Σ_{t=1}^n v_{t1}² + (x_{11}x_{22} + x_{12}x_{21}) Σ_{t=1}^n v_{t1}v_{t2} + x_{12}x_{22} Σ_{t=1}^n v_{t2}².   (14.48)

The first term here is O(n²), and the other two are O(n). Thus, if we divide both numerator and denominator in (14.46) by n², only [...]
(1993, Chapter 4), Hamilton (1994, Chapter 17), Fuller (1996), Hayashi (2000, Chapter 9), and Bierens (2001). Results for the other six test statistics are more complicated. For z_c and τ_c, the limiting random variables can be expressed in terms of a centered Wiener process. Similarly, for z_{ct} and τ_{ct}, one needs a Wiener process that has been centered and detrended, and so on. For details, see Phillips and [...]

14.5 Cointegration

Inference in Regressions with I(1) Variables

From what we have said so far, it might seem that standard asymptotic results never apply when a regression contains one or more regressors that are I(1). This is true for spurious regressions like (14.12), for unit root test regressions like (14.18), and for error-correction models like (14.52) [...]

[...] b_i, this pair of equations can be deduced from the model (14.36), with π_{21} = φ_{12}, π_{12} = φ_{21}, and π_{ii} = φ_{ii} − 1, i = 1, 2. We saw in connection with the system (14.36) that, if y_{t1} and y_{t2} are cointegrated, then the matrix Φ of (14.37) has one unit eigenvalue and the other eigenvalue less than 1 in absolute value. This [...]

[...] Under the alternative hypothesis of cointegration, an ECM test is more likely to reject the false null than an EG test. Consider equation (14.52). Subtracting η_2 ∆y_{t2} from both sides and rearranging, we obtain

    ∆(y_{t1} − η_2 y_{t2}) = δ_1(y_{t−1,1} − η_2 y_{t−1,2}) + (α − η_2)∆y_{t2} + e_t.   (14.68)

If we replace η_2 by its estimate (14.46) and [...]
[...] detrended, and so on. For details, see Phillips and Perron (1988) and Bierens (2001). Exercise 14.14 looks in more detail at the limit of z_c. Unfortunately, although the quantities (14.29) and (14.30)
