Econometric Theory and Methods, Russell Davidson - Chapter 8

Chapter 8
Instrumental Variables Estimation

8.1 Introduction

In Section 3.3, the ordinary least squares estimator β̂ was shown to be consistent under condition (3.10), according to which the expectation of the error term u_t associated with observation t is zero conditional on the regressors X_t for that same observation. As we saw in Section 4.5, this condition can also be expressed either by saying that the regressors X_t are predetermined or by saying that the error terms u_t are innovations. When condition (3.10) does not hold, the consistency proof of Section 3.3 is not applicable, and the OLS estimator will, in general, be biased and inconsistent.

It is not always reasonable to assume that the error terms are innovations. In fact, as we will see in the next section, there are commonly encountered situations in which the error terms are necessarily correlated with some of the regressors for the same observation. Even in these circumstances, however, it is usually possible, although not always easy, to define an information set Ω_t for each observation such that

    E(u_t | Ω_t) = 0.    (8.01)

Any regressor of which the value in period t is correlated with u_t cannot belong to Ω_t.

In Section 6.2, method of moments (MM) estimators were discussed for both linear and nonlinear regression models. Such estimators are defined by the moment conditions (6.10) in terms of a matrix W of variables, with one row for each observation. They were shown to be consistent provided that the t-th row W_t of W belongs to Ω_t, and provided that an asymptotic identification condition is satisfied. In econometrics, these MM estimators are usually called instrumental variables estimators, or IV estimators. Instrumental variables estimation is introduced in Section 8.3, and a number of important results are discussed. Then finite-sample properties are discussed in Section 8.4, hypothesis testing in Section 8.5, and overidentifying restrictions in Section 8.6.
Next, Section 8.7 introduces a procedure for testing whether it is actually necessary to use IV estimation. Bootstrap testing is discussed in Section 8.8. Finally, in Section 8.9, IV estimation of nonlinear regression models is dealt with briefly. A more general class of MM estimators, of which both OLS and IV are special cases, will be the subject of Chapter 9.

Copyright © 1999, Russell Davidson and James G. MacKinnon

8.2 Correlation Between Error Terms and Regressors

We now briefly discuss two common situations in which the error terms will be correlated with the regressors and will therefore not have mean zero conditional on them. The first one, usually referred to by the name errors in variables, occurs whenever the independent variables in a regression model are measured with error. The second situation, often simply referred to as simultaneity, occurs whenever two or more endogenous variables are jointly determined by a system of simultaneous equations.

Errors in Variables

For a variety of reasons, many economic variables are measured with error. For example, macroeconomic time series are often based, in large part, on surveys, and they must therefore suffer from sampling variability. Whenever there are measurement errors, the values economists observe inevitably differ, to a greater or lesser extent, from the true values that economic agents presumably act upon. As we will see, measurement errors in the dependent variable of a regression model are generally of no great consequence, unless they are very large. However, measurement errors in the independent variables cause the error terms to be correlated with the regressors that are measured with error, and this causes OLS to be inconsistent. The problems caused by errors in variables can be seen quite clearly in the context of the simple linear regression model.
Consider the model

    y°_t = β_1 + β_2 x°_t + u°_t,   u°_t ~ IID(0, σ²),    (8.02)

where the variables x°_t and y°_t are not actually observed. Instead, we observe

    x_t ≡ x°_t + v_1t   and   y_t ≡ y°_t + v_2t.    (8.03)

Here v_1t and v_2t are measurement errors which are assumed, perhaps not realistically in some cases, to be IID with variances ω_1² and ω_2², respectively, and to be independent of x°_t, y°_t, and u°_t. If we suppose that the true DGP is a special case of (8.02) along with (8.03), we see from (8.03) that x°_t = x_t − v_1t and y°_t = y_t − v_2t. If we substitute these into (8.02), we find that

    y_t = β_1 + β_2(x_t − v_1t) + u°_t + v_2t
        = β_1 + β_2 x_t + u°_t + v_2t − β_2 v_1t
        = β_1 + β_2 x_t + u_t,    (8.04)

where u_t ≡ u°_t + v_2t − β_2 v_1t. Thus Var(u_t) is equal to σ² + ω_2² + β_2² ω_1².

The effect of the measurement error in the dependent variable is simply to increase the variance of the error terms. Unless the increase is substantial, this is generally not a serious problem. The measurement error in the independent variable also increases the variance of the error terms, but it has another, much more severe, consequence as well. Because x_t = x°_t + v_1t, and u_t depends on v_1t, u_t will be correlated with x_t whenever β_2 ≠ 0. In fact, since the random part of x_t is v_1t, we see that

    E(u_t | x_t) = E(u_t | v_1t) = −β_2 v_1t,    (8.05)

because we assume that v_1t is independent of u°_t and v_2t. From (8.05), we can see, using the fact that E(u_t) = 0 unconditionally, that

    Cov(x_t, u_t) = E(x_t u_t) = E(x_t E(u_t | x_t)) = −E((x°_t + v_1t) β_2 v_1t) = −β_2 ω_1².

This covariance is negative if β_2 > 0 and positive if β_2 < 0, and, since it does not depend on the sample size n, it will not go away as n becomes large.
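The inconsistency implied by this covariance can be made explicit. As a sketch (writing σ²_{x°} for the variance of x°_t, a symbol not used above, and treating x°_t as random with finite variance), the OLS slope estimator from regressing y_t on x_t has probability limit

```latex
\operatorname*{plim}_{n\to\infty} \hat\beta_2
  = \beta_{20} + \frac{\operatorname{Cov}(x_t, u_t)}{\operatorname{Var}(x_t)}
  = \beta_{20} - \frac{\beta_{20}\,\omega_1^{2}}{\sigma_{x^{\circ}}^{2} + \omega_1^{2}}
  = \frac{\beta_{20}\,\sigma_{x^{\circ}}^{2}}{\sigma_{x^{\circ}}^{2} + \omega_1^{2}}.
```

The attenuation factor σ²_{x°}/(σ²_{x°} + ω_1²) lies strictly between 0 and 1 whenever ω_1² > 0, so β̂_2 is biased toward zero whatever the sign of β_20, consistent with the result readers are asked to show in Exercise 8.1.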
An exactly similar argument shows that the assumption that E(u_t | X_t) = 0 is false whenever any element of X_t is measured with error. In consequence, the OLS estimator will be biased and inconsistent.

Errors in variables are a potential problem whenever we try to estimate a consumption function, especially if we are using cross-section data. Many economic theories (for example, Friedman, 1957) suggest that household consumption will depend on "permanent" income or "life-cycle" income, but surveys of household behavior almost never measure this. Instead, they typically provide somewhat inaccurate estimates of current income. If we think of y_t as measured consumption, x°_t as permanent income, and x_t as estimated current income, then the above analysis applies directly to the consumption function. The marginal propensity to consume is β_2, which must be positive, causing the correlation between u_t and x_t to be negative. As readers are asked to show in Exercise 8.1, the probability limit of β̂_2 is less than the true value β_20. In consequence, the OLS estimator β̂_2 is biased downward, even asymptotically.

Of course, if our objective is simply to estimate the relationship between the observed dependent variable y_t and the observed independent variable x_t, there is nothing wrong with using ordinary least squares to estimate equation (8.04). In that case, u_t would simply be defined as the difference between y_t and its expectation conditional on x_t. But our analysis shows that the OLS estimators of β_1 and β_2 in equation (8.04) are not consistent for the corresponding parameters of equation (8.02). In most cases, it is parameters like these that we want to estimate on the basis of economic theory.

There is an extensive literature on ways to avoid the inconsistency caused by errors in variables. See, among many others, Hausman and Watson (1985),
Leamer (1987), and Dagenais and Dagenais (1997). The simplest and most widely-used approach is just to use an instrumental variables estimator.

Simultaneous Equations

Economic theory often suggests that two or more endogenous variables are determined simultaneously. In this situation, as we will see shortly, all of the endogenous variables will necessarily be correlated with the error terms in all of the equations. This means that none of them may validly appear in the regression functions of models that are to be estimated by least squares.

A classic example, which well illustrates the econometric problems caused by simultaneity, is the determination of price and quantity for a commodity at the partial equilibrium of a competitive market. Suppose that q_t is quantity and p_t is price, both of which would often be in logarithms. A linear (or loglinear) model of demand and supply is

    q_t = γ_d p_t + X_t^d β_d + u_t^d    (8.06)
    q_t = γ_s p_t + X_t^s β_s + u_t^s,   (8.07)

where equation (8.06) is the demand function and equation (8.07) is the supply function. Here X_t^d and X_t^s are row vectors of observations on exogenous or predetermined variables that appear, respectively, in the demand and supply functions, β_d and β_s are corresponding vectors of parameters, γ_d and γ_s are scalar parameters, and u_t^d and u_t^s are the error terms in the demand and supply functions. Economic theory predicts that, in most cases, γ_d < 0 and γ_s > 0, which is equivalent to saying that the demand curve slopes downward and the supply curve slopes upward.

Equations (8.06) and (8.07) are a pair of linear simultaneous equations for the two unknowns p_t and q_t. For that reason, these equations constitute what is called a linear simultaneous equations model. In this case, there are two dependent variables, quantity and price. For estimation purposes, the key feature of the model is that quantity depends on price in both equations.
Since there are two equations and two unknowns, it is straightforward to solve equations (8.06) and (8.07) for p_t and q_t. This is most easily done by rewriting them in matrix notation as

    [ 1  −γ_d ] [ q_t ]   [ X_t^d β_d ]   [ u_t^d ]
    [ 1  −γ_s ] [ p_t ] = [ X_t^s β_s ] + [ u_t^s ].    (8.08)

The solution to (8.08), which will exist whenever γ_d ≠ γ_s, so that the matrix on the left-hand side of (8.08) is nonsingular, is

    [ q_t ]   [ 1  −γ_d ]⁻¹ ( [ X_t^d β_d ]   [ u_t^d ] )
    [ p_t ] = [ 1  −γ_s ]    ( [ X_t^s β_s ] + [ u_t^s ] ).    (8.09)

It can be seen from this solution that p_t and q_t will depend on both u_t^d and u_t^s, and on every exogenous and predetermined variable that appears in either the demand function, the supply function, or both. Therefore, p_t, which appears on the right-hand side of equations (8.06) and (8.07), must be correlated with the error terms in both of those equations. If we rewrote one or both equations so that p_t was on the left-hand side and q_t was on the right-hand side, the problem would not go away, because q_t is also correlated with the error terms in both equations.

It is easy to see that, whenever we have a linear simultaneous equations model, there will be correlation between all of the error terms and all of the endogenous variables. If there are g endogenous variables and g equations, the solution will look very much like (8.09), with the inverse of a g × g matrix premultiplying the sum of a g-vector of linear combinations of the exogenous and predetermined variables and a g-vector of error terms. If we want to estimate the full system of equations, there are many options, some of which will be discussed in Chapter 12. If we simply want to estimate one equation out of such a system, the most popular approach is to use instrumental variables.
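A small simulation makes the correlation just described concrete. The parameter values below are hypothetical stand-ins, with the exogenous parts X_t^d β_d and X_t^s β_s reduced to constants for simplicity; nothing here is taken from the text beyond the structure of (8.06)-(8.07):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical parameters: demand slopes down, supply slopes up.
gamma_d, gamma_s = -1.0, 0.5
beta_d, beta_s = 10.0, 2.0      # constants standing in for X_t^d beta_d, X_t^s beta_s

u_d = rng.normal(0, 1, n)       # demand shocks
u_s = rng.normal(0, 1, n)       # supply shocks

# Solve the 2x2 system (8.08) for p_t and q_t observation by observation.
p = (beta_d - beta_s + u_d - u_s) / (gamma_s - gamma_d)
q = gamma_d * p + beta_d + u_d

# p_t is correlated with the demand-equation error, as the text asserts.
print(np.cov(p, u_d)[0, 1])     # clearly nonzero

# OLS on the demand equation is therefore inconsistent for gamma_d.
X = np.column_stack([np.ones(n), p])
gamma_d_ols = np.linalg.lstsq(X, q, rcond=None)[0][1]
print(gamma_d_ols)              # noticeably different from the true -1.0
```

With these values, Cov(p_t, u_t^d) = Var(u_t^d)/(γ_s − γ_d) is positive, and the OLS slope converges to roughly −0.25 rather than −1: the estimate is pulled toward the supply slope because price and the demand shock move together.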
We have discussed two important situations in which the error terms will necessarily be correlated with some of the regressors, and the OLS estimator will consequently be inconsistent. This provides a strong motivation to employ estimators that do not suffer from this type of inconsistency. In the remainder of this chapter, we therefore discuss the method of instrumental variables. This method can be used whenever the error terms are correlated with one or more of the explanatory variables, regardless of how that correlation may have arisen.

8.3 Instrumental Variables Estimation

For most of this chapter, we will focus on the linear regression model

    y = Xβ + u,   E(uu′) = σ²I,    (8.10)

where at least one of the explanatory variables in the n × k matrix X is assumed not to be predetermined with respect to the error terms. Suppose that, for each t = 1, . . . , n, condition (8.01) is satisfied for some suitable information set Ω_t, and that we can form an n × k matrix W with typical row W_t such that all its elements belong to Ω_t. The k variables given by the k columns of W are called instrumental variables, or simply instruments. Later, we will allow for the possibility that the number of instruments may exceed the number of regressors.

Instrumental variables may be either exogenous or predetermined, and, for a reason that will be explained later, they should always include any columns of X that are exogenous or predetermined. Finding suitable instruments may be quite easy in some cases, but it can be extremely difficult in others. Many empirical controversies in economics are essentially disputes about whether or not certain variables constitute valid instruments.

The Simple IV Estimator

For the linear model (8.10), the moment conditions (6.10) simplify to

    W′(y − Xβ) = 0.
    (8.11)

Since there are k equations and k unknowns, we can solve equations (8.11) directly to obtain the simple IV estimator

    β̂_IV ≡ (W′X)⁻¹ W′y.    (8.12)

This well-known estimator has a long history (see Morgan, 1990). Whenever W_t ∈ Ω_t,

    E(u_t | W_t) = 0,    (8.13)

and W_t is seen to be predetermined with respect to the error term. Given (8.13), it was shown in Section 6.2 that β̂_IV is consistent and asymptotically normal under an identification condition. For asymptotic identification, this condition can be written as

    S_{W′X} ≡ plim_{n→∞} n⁻¹ W′X  is deterministic and nonsingular.    (8.14)

For identification by any given sample, the condition is just that W′X should be nonsingular. If this condition were not satisfied, equations (8.11) would have no unique solution.

It is easy to see directly that the simple IV estimator (8.12) is consistent, and, in so doing, to see that condition (8.13) can be weakened slightly. If the model (8.10) is correctly specified, with true parameter vector β_0, then it follows that

    β̂_IV = (W′X)⁻¹ W′Xβ_0 + (W′X)⁻¹ W′u
          = β_0 + (n⁻¹ W′X)⁻¹ n⁻¹ W′u.    (8.15)

Given the assumption (8.14) of asymptotic identification, it is clear that β̂_IV is consistent if and only if

    plim_{n→∞} n⁻¹ W′u = 0,    (8.16)

which is precisely the condition (6.16) that was used in the consistency proof in Section 6.2. We usually refer to this condition by saying that the error terms are asymptotically uncorrelated with the instruments. Condition (8.16) follows from condition (8.13) by the law of large numbers, but it may hold even if condition (8.13) does not. The weaker condition (8.16) is what is required for the consistency of the IV estimator.
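To see (8.12) at work, here is a sketch on simulated errors-in-variables data in the spirit of (8.02)-(8.03). The instrument w_t and all parameter values are invented for the illustration; the only requirement is that w_t be correlated with the true regressor x°_t while independent of the measurement error and the error term:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta1, beta2 = 1.0, 0.8             # hypothetical true parameters

# Errors-in-variables DGP in the spirit of (8.02)-(8.03):
w = rng.normal(0, 1, n)             # observable driver of the true regressor: the instrument
x_star = w + rng.normal(0, 1, n)    # unobserved true regressor x°_t
x = x_star + rng.normal(0, 1, n)    # observed regressor, measured with error v_1t
y = beta1 + beta2 * x_star + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), x])
W = np.column_stack([np.ones(n), w])

# OLS: inconsistent, biased toward zero (attenuation).
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Simple IV estimator (8.12): as many instruments as regressors.
beta_iv = np.linalg.solve(W.T @ X, W.T @ y)

print(beta_ols[1])                  # roughly beta2 * 2/3 with these variances
print(beta_iv[1])                   # close to the true 0.8
```

The OLS slope settles near 0.8 × 2/3 because Var(x°_t) = 2 and ω_1² = 1 here, while the IV slope recovers β_2 because the instrument is uncorrelated with u_t.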
Efficiency Considerations

If the model (8.10) is correctly specified with true parameter vector β_0 and true error variance σ_0², the results of Section 6.2 show that the asymptotic covariance matrix of n^(1/2)(β̂_IV − β_0) is given by (6.25) or (6.26):

    Var(plim_{n→∞} n^(1/2)(β̂_IV − β_0)) = σ_0² (S_{W′X})⁻¹ S_{W′W} (S_{W′X}′)⁻¹
                                        = σ_0² plim_{n→∞} (n⁻¹ X′P_W X)⁻¹,    (8.17)

where S_{W′W} ≡ plim n⁻¹ W′W. If we have some choice over what instruments to use in the matrix W, it makes sense to choose them so as to minimize the above asymptotic covariance matrix.

First of all, notice that, since (8.17) depends on W only through the orthogonal projection matrix P_W, all that matters is the space S(W) spanned by the instrumental variables. In fact, as readers are asked to show in Exercise 8.2, the estimator β̂_IV itself depends on W only through P_W. This fact is closely related to the result that, for ordinary least squares, fitted values and residuals depend only on the space S(X) spanned by the regressors.

Suppose first that we are at liberty to choose for instruments any variables at all that satisfy the predeterminedness condition (8.13). Then, under reasonable and plausible conditions, we can characterize the optimal instruments for IV estimation of the model (8.10). By this, we mean the instruments that minimize the asymptotic covariance matrix (8.17), in the usual sense that any other choice of instruments leads to an asymptotic covariance matrix that differs from the optimal one by a positive semidefinite matrix.

In order to determine the optimal instruments, we must know the data-generating process. In the context of a simultaneous equations model, a single equation like (8.10), even if we know the values of the parameters, cannot be a complete description of the DGP, because at least some of the variables in the matrix X are endogenous.
For the DGP to be fully specified, we must know how all the endogenous variables are generated. For the demand-supply model given by equations (8.06) and (8.07), both of those equations are needed to specify the DGP. For a more complicated simultaneous equations model with g endogenous variables, we would need g equations. For the simple errors-in-variables model discussed in Section 8.2, we need equations (8.03) as well as equation (8.02) in order to specify the DGP fully.

Quite generally, we can suppose that the explanatory variables in (8.10) satisfy the relation

    X = X̄ + V,   E(V_t | Ω_t) = 0,    (8.18)

where the t-th row of X̄ is X̄_t = E(X_t | Ω_t), and X_t is the t-th row of X. Thus equation (8.18) can be interpreted as saying that X̄_t is the expectation of X_t conditional on the information set Ω_t. It turns out that the n × k matrix X̄ provides the optimal instruments for (8.10). Of course, in practice, this matrix is never observed, and we will need to replace X̄ by something that estimates it consistently.

To see that X̄ provides the optimal matrix of instruments, it is, as usual, easier to reason in terms of precision matrices rather than covariance matrices. For any valid choice of instruments, the precision matrix corresponding to (8.17) is σ_0⁻² times

    plim_{n→∞} n⁻¹ X′P_W X = plim_{n→∞} (n⁻¹ X′W (n⁻¹ W′W)⁻¹ n⁻¹ W′X).    (8.19)

Using (8.18) and a law of large numbers, we see that

    plim_{n→∞} n⁻¹ X′W = lim_{n→∞} n⁻¹ E(X′W) = lim_{n→∞} n⁻¹ E(X̄′W) = plim_{n→∞} n⁻¹ X̄′W.    (8.20)

The second equality holds because E(V′W) = O, since, by the construction in (8.18), V_t has mean zero conditional on W_t. The last equality is just a LLN in reverse. Similarly, we find that plim n⁻¹ W′X = plim n⁻¹ W′X̄. Thus (8.19) becomes

    plim_{n→∞} n⁻¹ X̄′P_W X̄.
    (8.21)

If we make the choice W = X̄, then (8.21) reduces to plim n⁻¹ X̄′X̄. The difference between this and (8.21) is just plim n⁻¹ X̄′M_W X̄, which is a positive semidefinite matrix. This shows that X̄ is indeed the optimal choice of instrumental variables by the criterion of asymptotic variance.

We mentioned earlier that all the explanatory variables in (8.10) that are exogenous or predetermined should be included in the matrix W of instrumental variables. It is now clear why this is so. If we denote by Z the submatrix of X containing the exogenous or predetermined variables, then Z̄ = Z, because the row Z_t is already contained in Ω_t. Thus Z is a submatrix of the matrix X̄ of optimal instruments. As such, it should always be a submatrix of the matrix of instruments W used for estimation, even if W is not actually equal to X̄.

The Generalized IV Estimator

In practice, the information set Ω_t is very frequently specified by providing a list of l instrumental variables that suggest themselves for various reasons. Therefore, we now drop the assumption that the number of instruments is equal to the number of parameters and let W denote an n × l matrix of instruments. Often, l is greater than k, the number of regressors in the model (8.10). In this case, the model is said to be overidentified, because, in general, there is more than one way to formulate moment conditions like (8.11) using the available instruments. If l = k, the model (8.10) is said to be just identified or exactly identified, because there is only one way to formulate the moment conditions. If l < k, it is said to be underidentified, because there are fewer moment conditions than parameters to be estimated, and equations (8.11) will therefore have no unique solution.
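The overidentified case can be sketched numerically. The book develops the generalized IV estimator after this passage; the two-stage computation below (regress X on W, then y on the fitted values) is the standard way to obtain (X′P_W X)⁻¹X′P_W y, the form consistent with the covariance expression (8.17). All data and parameter values are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = np.array([1.0, 0.8])         # hypothetical true parameters

# Two valid instruments besides the constant, so l = 3 > k = 2: overidentified.
w1 = rng.normal(0, 1, n)
w2 = rng.normal(0, 1, n)
x_star = w1 + 0.5 * w2 + rng.normal(0, 1, n)
x = x_star + rng.normal(0, 1, n)    # measured with error, so OLS is inconsistent
y = beta[0] + beta[1] * x_star + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), x])
W = np.column_stack([np.ones(n), w1, w2])

# Generalized IV: project X on S(W), then use the fitted values.
X_hat = W @ np.linalg.lstsq(W, X, rcond=None)[0]        # P_W X
# Regressing y on P_W X yields (X' P_W X)^{-1} X' P_W y,
# since (P_W X)'(P_W X) = X' P_W X and (P_W X)'y = X' P_W y.
beta_giv = np.linalg.lstsq(X_hat, y, rcond=None)[0]

print(beta_giv)                     # close to the true (1.0, 0.8)
```

Only the coefficients from the second fit are meaningful here; standard errors from such a naive second-stage regression would be wrong, because they ignore the difference between X and P_W X.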
If any instruments at all are available, it is normally possible to generate an arbitrarily large collection of them, because any deterministic function of the l components of the t-th row W_t of W can be used as the t-th component of a new instrument.¹ If (8.10) is underidentified, some such procedure is necessary if we wish to obtain consistent estimates of all the elements of β. Alternatively, we would have to impose at least k − l restrictions on β so as to reduce the number of independent parameters that must be estimated to no more than the number of instruments.

For models that are just identified or overidentified, it is often desirable to limit the set of potential instruments to deterministic linear functions of the instruments in W, rather than allowing arbitrary deterministic functions. We will see shortly that this is not only reasonable but optimal for linear simultaneous equation models. This means that the IV estimator is unique for a just identified model, because there is only one k-dimensional linear space S(W) that can be spanned by the k = l instruments, and, as we saw earlier, the IV estimator for a given model depends only on the space spanned by the instruments.

We can always treat an overidentified model as if it were just identified by choosing exactly k linear combinations of the l columns of W. The challenge is to choose these linear combinations optimally. Formally, we seek an l × k matrix J such that the n × k matrix WJ is a valid instrument matrix and such that the use of J minimizes the asymptotic covariance matrix of the estimator in the class of IV estimators obtained using an n × k instrument matrix of the form WJ* with arbitrary l × k matrix J*.

There are three requirements that the matrix J must satisfy. The first of these is that it should have full column rank of k. Otherwise, the space spanned by the columns of WJ would have rank less than k, and the model would be underidentified.
The second requirement is that J should be at least asymptotically deterministic. If not, it is possible that condition (8.16) applied to WJ could fail to hold. The last requirement is that J be chosen to minimize the asymptotic covariance matrix of the resulting IV estimator, and we now explain how this may be achieved.

If the explanatory variables X satisfy (8.18), then it follows from (8.17) and (8.20) that the asymptotic covariance matrix of the IV estimator computed using WJ as instrument matrix is

    σ_0² plim_{n→∞} (n⁻¹ X̄′P_{WJ} X̄)⁻¹.    (8.22)

The t-th row X̄_t of X̄ belongs to Ω_t by construction, and so each element of X̄_t is a deterministic function of the elements of W_t. However, the deterministic functions are not necessarily linear with respect to W_t. Thus, in general, it is impossible to find a matrix J such that X̄ = WJ, as would be needed for WJ to constitute a set of truly optimal instruments. A natural second-best solution is to project X̄ orthogonally on to the space S(W). This yields the matrix of instruments

    WJ = P_W X̄ = W(W′W)⁻¹ W′X̄,    (8.23)

which implies that

    J = (W′W)⁻¹ W′X̄.    (8.24)

We now show that these instruments are indeed optimal under the constraint that the instruments should be linear in W_t. By substituting P_W X̄ for WJ in (8.22), the asymptotic covariance matrix becomes

    σ_0² plim_{n→∞} (n⁻¹ X̄′P_{P_W X̄} X̄)⁻¹.

If we write out the projection matrix P_{P_W X̄} explicitly, we find that

    X̄′P_{P_W X̄} X̄ = X̄′P_W X̄ (X̄′P_W X̄)⁻¹ X̄′P_W X̄ = X̄′P_W X̄.    (8.25)

Thus, the precision matrix for the estimator that uses instruments P_W X̄ is proportional to X̄′P_W X̄. For the estimator with WJ as instruments, the precision matrix is proportional to X̄′P_{WJ} X̄.

¹ This procedure would not work if, for example, all of the original instruments were binary variables.
The difference between the two precision matrices is therefore proportional to

    X̄′(P_W − P_{WJ}) X̄.    (8.26)

The k-dimensional subspace S(WJ), which is the image of the orthogonal projection P_{WJ}, is a subspace of the l-dimensional space S(W), which is the image of P_W. Thus, by the result in Exercise 2.16, the difference P_W − P_{WJ} is itself an orthogonal projection matrix. This implies that the difference (8.26) is a positive semidefinite matrix, and so we can conclude that (8.23) is indeed the optimal choice of instruments of the form WJ.

At this point, we come up against the same difficulty as that encountered at the end of Section 6.2, namely, that the optimal instrument choice is infeasible, because we do not know X̄. But notice that, from the definition (8.24) of the matrix J, we have that

    plim_{n→∞} J = plim_{n→∞} (n⁻¹ W′W)⁻¹ n⁻¹ W′X̄ = plim_{n→∞} (n⁻¹ W′W)⁻¹ n⁻¹ W′X,    (8.27)

[...] in Section 8.5, where the null and alternative hypotheses are given by (8.51) and (8.52), respectively. For concreteness, we consider the test implemented by use of the IVGNRs (8.53) and (8.54), although the same principles apply to other forms of test, such as the asymptotic t and Wald tests (8.47) and (8.48), or tests [...]

[...] Bound, Jaeger, and Baker (1995), Dufour (1997), Staiger and Stock (1997), Wang and Zivot (1998), Zivot, Startz, and Nelson (1998), Angrist, Imbens, and Krueger (1999), Blomquist and Dahlberg (1999), Donald and Newey (2001), Hahn and Hausman (2002), Kleibergen (2002), and Stock, Wright, and Yogo (2002). There remain many unsolved problems [...]

[...] elements of β and the l − k elements of γ, and there are precisely l instruments
[...] To see why testing (8.68) against (8.69) also tests whether the quantity (8.67) is significantly different from zero, consider the numerator of the artificial IVGNR F test for (8.68) against (8.69). Under the null hypothesis, [...]

[...] covariance matrix is

    V̂ar(β̂_NLIV) = σ̂² (X̂′P_W X̂)⁻¹,    (8.88)

where X̂ ≡ X(β̂_NLIV), and σ̂² is 1/n times the SSR from IV estimation of regression (8.83). Readers may find it instructive to compare (8.88) with expression (8.34), the covariance matrix of the generalized IV estimator for a linear regression model. [...]

[...] as [u V₂]. Note that this matrix V is not the same as the one used in Section 8.4. If V_t denotes a typical row of V, then we will assume that

    E(V_t′V_t) = Σ,    (8.81)

where Σ is a (k₂ + 1) × (k₂ + 1) covariance matrix, the upper left-hand element of which is σ², the variance of the error terms in u. Together, (8.38), (8.80), and (8.81) constitute a model that, although not quite fully specified (because the distribution [...]

[...] estimation of (8.51) and (8.52). However, such a "real" F statistic is not valid, even asymptotically. This can be seen by evaluating the IVGNRs (8.53) and (8.54) at the restricted estimates β̃, where β̃ is a k-vector with the first k₁ components equal to the IV estimates β̃₁ from (8.51) and the last k₂ components zero. The residuals from the IVGNR (8.53) are then [...]

[...] = y′(P_W − P_{P_W X₁})y,    (8.57)

which does not depend in any way on β́.
[...] under the null hypothesis. Otherwise, the denominator of the test statistic (8.55) will not estimate σ² consistently, and (8.55) will not follow the F(k₂, n − k) distribution asymptotically. If (8.53) and (8.54) are correctly formulated, with the same β́ and the same instrument matrix W, it can be shown that k₂ times the artificial F statistic (8.55) is equal to the Wald statistic (8.48) with β₂₀ = 0. [...]

    Q(β̂, y) = y′P_W y − y′P_W X(X′P_W X)⁻¹ X′P_W y = y′(P_W − P_{P_W X})y.    (8.61)

If Q is now evaluated at the restricted estimates β̃, an exactly similar calculation shows that

    Q(β̃, y) = y′(P_W − P_{P_W X₁})y.    (8.62)

The difference between (8.62) and (8.61) is thus

    Q(β̃, y) − Q(β̂, y) = y′(P_{P_W X} − P_{P_W X₁})y.    (8.63)

This is precisely the difference (8.57) between the SSRs of the two IVGNRs (8.53) and (8.54). Thus we can obtain an [...]

[...] IVGNRs, does not depend on X₂. The artificial F statistic is

    F = ((SSR₀ − SSR₁)/k₂) / (SSR₁/(n − k)),    (8.55)

where SSR₀ and SSR₁ denote the sums of squared residuals from (8.53) and (8.54), respectively. [...]

[...] Because both H₀ and H₁ are linear models, the value of β́ used to evaluate the regressands of (8.53) and (8.54) has no effect on the difference between the SSRs of the two regressions, which, when divided [...]
