BOOTSTRAP METHODS FOR MARKOV PROCESSES

by Joel L. Horowitz
Department of Economics, Northwestern University, Evanston, IL 60208-2600

October 2002

ABSTRACT

The block bootstrap is the best known method for implementing the bootstrap with time-series data when the analyst does not have a parametric model that reduces the data generation process to simple random sampling. However, the errors made by the block bootstrap converge to zero only slightly faster than those made by first-order asymptotic approximations. This paper describes a bootstrap procedure for data that are generated by a (possibly higher-order) Markov process or by a process that can be approximated by a Markov process with sufficient accuracy. The procedure is based on estimating the Markov transition density nonparametrically. Bootstrap samples are obtained by sampling the process implied by the estimated transition density. Conditions are given under which the errors made by the Markov bootstrap converge to zero more rapidly than those made by the block bootstrap.

I thank Kung-Sik Chan, Wolfgang Härdle, Bruce Hansen, Oliver Linton, Daniel McFadden, Whitney Newey, Efstathios Paparoditis, Gene Savin, and two anonymous referees for many helpful comments and suggestions. Research supported in part by NSF Grant SES-9910925.

1. INTRODUCTION

This paper describes a bootstrap procedure for data that are generated by a (possibly higher-order) Markov process. The procedure is also applicable to non-Markov processes, such as finite-order MA processes, that can be approximated with sufficient accuracy by Markov processes. Under suitable conditions, the procedure is more accurate than the block bootstrap, which is the leading nonparametric method for implementing the bootstrap with time-series data.

The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data or a model estimated from the data. Under conditions that hold in a wide variety of econometric applications, the bootstrap provides approximations to distributions of statistics, coverage probabilities of confidence intervals, and rejection probabilities of tests that are more accurate than the approximations of first-order asymptotic distribution theory. Monte Carlo experiments have shown that the bootstrap can spectacularly reduce the difference between the true and nominal probabilities that a test rejects a correct null hypothesis (hereinafter the error in the rejection probability, or ERP). See Horowitz (1994, 1997, 1999) for examples. Similarly, the bootstrap can greatly reduce the difference between the true and nominal coverage probabilities of a confidence interval (the error in the coverage probability, or ECP).

The methods that are available for implementing the bootstrap, and the improvements in accuracy that it achieves relative to first-order asymptotic approximations, depend on whether the data are a random sample from a distribution or a time series. If the data are a random sample, then the bootstrap can be implemented by sampling the data randomly with replacement or by sampling a parametric model of the distribution of the data. The distribution of a statistic is estimated by its empirical distribution under sampling from the data or parametric model (bootstrap sampling).
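As an illustration of bootstrap sampling in the random-sample case, the following minimal Python sketch estimates a symmetrical bootstrap-t critical value for a hypothesis about a population mean. It is an illustration, not code from the paper; the number of bootstrap draws B is an arbitrary placeholder.

```python
import numpy as np

def bootstrap_t_critical_value(x, alpha=0.05, B=999, seed=0):
    """Symmetrical bootstrap-t critical value from a random sample x.

    Each draw resamples x with replacement and Studentizes around the
    original sample mean, which plays the role of the true mean under
    bootstrap sampling.  The estimate is the 1 - alpha quantile of |T|.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar = x.mean()
    t_abs = np.empty(B)
    for b in range(B):
        xs = rng.choice(x, size=n, replace=True)   # iid resampling
        t_abs[b] = abs(np.sqrt(n) * (xs.mean() - xbar) / xs.std(ddof=1))
    return np.quantile(t_abs, 1 - alpha)
```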
To summarize important properties of the bootstrap when the data are a random sample, let $n$ be the sample size and $T_n$ be a statistic that is asymptotically distributed as $N(0,1)$ (e.g., a $t$ statistic for testing a hypothesis about a slope parameter in a linear regression model). Then the following results hold under regularity conditions that are satisfied by a wide variety of econometric models. See Hall (1992) for details.

1. The error in the bootstrap estimate of the one-sided probability $P(T_n \le z)$ is $O_p(n^{-1})$, whereas the error made by first-order asymptotic approximations is $O(n^{-1/2})$.

2. The error in the bootstrap estimate of the symmetrical probability $P(|T_n| \le z)$ is $O_p(n^{-3/2})$, whereas the error made by first-order approximations is $O(n^{-1})$.

3. When the critical value of a one-sided hypothesis test is obtained by using the bootstrap, the ERP of the test is $O(n^{-1})$, whereas it is $O(n^{-1/2})$ when the critical value is obtained from first-order approximations. The same result applies to the ECP of a one-sided confidence interval. In some cases, the bootstrap can reduce the ERP of a one-sided test to $O(n^{-3/2})$ (Hall 1992, p. 178; Davidson and MacKinnon 1999).

4. When the critical value of a symmetrical hypothesis test is obtained by using the bootstrap, the ERP of the test is $O(n^{-2})$, whereas it is $O(n^{-1})$ when the critical value is obtained from first-order approximations. The same result applies to the ECP of a symmetrical confidence interval.

The practical consequence of these results is that the ERPs of tests and ECPs of confidence intervals based on the bootstrap are often substantially smaller than ERPs and ECPs based on first-order asymptotic approximations. These benefits are available with samples of the sizes encountered in applications (Horowitz 1994, 1997, 1999).

The situation is more complicated when the data are a time series. To obtain asymptotic refinements, bootstrap sampling must be carried out in a way that suitably captures the dependence structure of the data generation process (DGP). If a parametric model is available that reduces the DGP to independent random sampling (e.g., an ARMA model), then the results summarized above continue to hold under appropriate regularity conditions. See, for example, Andrews (1999), Bose (1988), and Bose (1990). If a parametric model is not available, then the best known method for generating bootstrap samples consists of dividing the data into blocks and sampling the blocks randomly with replacement. This is called the block bootstrap. The blocks, whose lengths increase with increasing size of the estimation data set, may be non-overlapping (Carlstein 1986, Hall 1985) or overlapping (Hall 1985, Künsch 1989). Regardless of the method that is used, blocking distorts the dependence structure of the data and, thereby, increases the error made by the bootstrap. The main results are that under regularity conditions and when the block length is chosen optimally:

1. The errors in the bootstrap estimates of one-sided and symmetrical probabilities are almost surely $O(n^{-3/4})$ and $O(n^{-6/5})$, respectively (Hall et al. 1995).

2. The ECPs (ERPs) of one-sided and symmetrical confidence intervals (tests) are $O(n^{-3/4})$ and $O(n^{-5/4})$, respectively (Zvingelis 2000).

Thus, the errors made by the block bootstrap converge to zero at rates that are slower than those of the bootstrap based on data that are a random sample.
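The blocking scheme itself is simple to state in code. The sketch below (illustrative only; the block length is the user's choice and, per the discussion above, would grow with n) builds one bootstrap series from randomly chosen overlapping blocks in the manner of Künsch (1989); the non-overlapping variant of Carlstein (1986) differs only in which starting positions are allowed.

```python
import numpy as np

def moving_blocks_sample(x, block_len, rng):
    """One bootstrap series built by pasting together randomly chosen
    overlapping blocks of length block_len (moving-blocks bootstrap)."""
    n = len(x)
    n_blocks = -(-n // block_len)                  # ceil(n / block_len)
    # Starting indices of overlapping blocks of length block_len.
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    pieces = [x[s:s + block_len] for s in starts]
    return np.concatenate(pieces)[:n]

rng = np.random.default_rng(0)
x_star = moving_blocks_sample(rng.standard_normal(200), block_len=5, rng=rng)
```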
Monte Carlo results have confirmed this disappointing performance of the block bootstrap (Hall and Horowitz 1996).

The relatively poor performance of the block bootstrap has led to a search for other ways to implement the bootstrap with dependent data. Bühlmann (1997, 1998), Choi and Hall (2000), Kreiss (1992), and Paparoditis (1996) have proposed a sieve bootstrap for linear processes (that is, AR, vector AR, or invertible MA processes of possibly infinite order). In the sieve bootstrap, the DGP is approximated by an AR(p) model in which p increases with increasing sample size. Bootstrap samples are generated by the estimated AR(p) model (a minimal sketch of this scheme appears below). Choi and Hall (2000) have shown that the ECP of a one-sided confidence interval based on the sieve bootstrap is $O(n^{-1+\varepsilon})$ for any $\varepsilon > 0$, which is only slightly larger than the ECP of $O(n^{-1})$ that is available when the data are a random sample. This result is encouraging, but its practical utility is limited. If a process has a finite-order ARMA representation, then the ARMA model can be used to reduce the DGP to random sampling from some distribution. Standard methods can be used to implement the bootstrap, and the sieve bootstrap is not needed. Sieve methods have not been developed for nonlinear processes such as nonlinear autoregressive, ARCH, and GARCH processes.
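As a concrete illustration of the sieve scheme just described, here is a minimal sketch under simplifying assumptions; it is not code from any of the cited papers. The AR order p is held fixed, whereas in the sieve bootstrap it grows with the sample size, and the least-squares residuals are recentered before resampling, following standard practice.

```python
import numpy as np

def sieve_bootstrap_series(x, p, rng):
    """Fit AR(p) by least squares, then generate one bootstrap series
    by recursing on the fitted model with resampled residuals."""
    n = len(x)
    # Regress x_t on (1, x_{t-1}, ..., x_{t-p}).
    Y = x[p:]
    Z = np.column_stack([np.ones(n - p)]
                        + [x[p - k:n - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ beta
    resid = resid - resid.mean()                 # recenter residuals
    xs = list(x[:p])                             # initial conditions
    for e in rng.choice(resid, size=n - p, replace=True):
        lags = xs[-1:-p - 1:-1]                  # x_{t-1}, ..., x_{t-p}
        xs.append(beta[0] + np.dot(beta[1:], lags) + e)
    return np.asarray(xs)
```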
The bootstrap procedure described in this paper applies to a linear or nonlinear DGP that is a (possibly higher-order) Markov process or can be approximated by one with sufficient accuracy. The procedure is based on estimating the Markov transition density nonparametrically. Bootstrap samples are obtained by sampling the process implied by the estimated transition density. This procedure will be called the Markov conditional bootstrap (MCB). Conditions are given under which:

1. The errors in the MCB estimates of one-sided and symmetrical probabilities are almost surely $O(n^{-1+\varepsilon})$ and $O(n^{-3/2+\varepsilon})$, respectively, for any $\varepsilon > 0$.

2. The ERPs (ECPs) of one-sided and symmetrical tests (confidence intervals) based on the MCB are $O(n^{-1+\varepsilon})$ and $O(n^{-3/2+\varepsilon})$, respectively, for any $\varepsilon > 0$.

Thus, under the conditions that are given here, the errors made by the MCB converge to zero more rapidly than those made by the block bootstrap. Moreover, for one-sided probabilities, symmetrical probabilities, and one-sided confidence intervals and tests, the errors made by the MCB converge only slightly less rapidly than those made by the bootstrap for data that are sampled randomly from a distribution.

The conditions required to obtain these results are stronger than those required to obtain asymptotic refinements with the block bootstrap. If the required conditions are not satisfied, then the errors made by the MCB may converge more slowly than those made by the block bootstrap. Moreover, as will be explained in Section 3.2, the MCB suffers from a form of the curse of dimensionality of nonparametric estimation. A large data set (e.g., high-frequency financial data) is likely to be needed to obtain good performance if the DGP is a high-dimension vector process or a high-order Markov process. Thus, the MCB is not a replacement for the block bootstrap. The MCB is, however, an attractive alternative to the block bootstrap when the conditions needed for good performance of the MCB are satisfied.

There have been several previous investigations of the MCB. Rajarshi (1990) gave conditions under which the MCB consistently estimates the asymptotic distribution of a statistic. Datta and McCormick (1995) gave conditions under which the error in the MCB estimator of the distribution function of a normalized sample average is almost surely $o(n^{-1/2})$. Hansen (1999) proposed using an empirical likelihood estimator of the Markov transition probability but did not prove that the resulting version of the MCB is consistent or provides asymptotic refinements. Chan and Tong (1998) proposed using the MCB in a test for multimodality in the distribution of dependent data. Paparoditis and Politis (2001a, 2001b) proposed estimating the Markov transition probability by resampling the data in a suitable way. No previous authors have evaluated the ERP or ECP of the MCB or compared its accuracy to that of the block bootstrap. Thus, the results presented here go well beyond those of previous investigators.

The MCB is described informally in Section 2 of this paper. Section 3 presents regularity conditions and formal results for data that are generated by a Markov process. Section 4 extends the MCB to generalized method of moments (GMM) estimators and approximate Markov processes. Section 5 presents the results of a Monte Carlo investigation of the numerical performance of the MCB. Section 6 presents concluding comments. The proofs of theorems are in the Appendix.

2. INFORMAL DESCRIPTION OF THE METHOD

This section describes the MCB procedure for data that are generated by a Markov process and provides an informal summary of the main results of the paper. For any integer $j$, let $X_j \in \mathbb{R}^d$ ($d \ge 1$) be a continuously distributed random variable. Let $\{X_j : j = 1, 2, \dots, n\}$ be a realization of a strictly stationary, $q$'th order Markov process. Thus,

$$P(X_j \le x_j \mid X_{j-1} = x_{j-1}, X_{j-2} = x_{j-2}, \dots) = P(X_j \le x_j \mid X_{j-1} = x_{j-1}, \dots, X_{j-q} = x_{j-q})$$

almost surely for $d$-vectors $x_j, x_{j-1}, x_{j-2}, \dots$ and some finite integer $q \ge 1$. It is assumed that $q$ is known. Cheng and Tong (1992) show how to estimate $q$. In addition, for technical reasons that are discussed further in Section 3.1, it is assumed that $X_j$ has bounded support and that $\mathrm{cov}(X_j, X_{j+k}) = 0$ if $k > M$ for some $M < \infty$. Define $\mu = E(X_1)$ and $m = n^{-1}\sum_{j=1}^{n} X_j$.

2.1. Statement of the Problem

The problem addressed in the remainder of this section and in Section 3 is to carry out inference based on a Studentized statistic, $T_n$, whose form is

(2.1) $T_n = n^{1/2}[H(m) - H(\mu)]/s_n$,

where $H$ is a sufficiently smooth, scalar-valued function, $s_n^2$ is a consistent estimator of the variance of the asymptotic distribution of $n^{1/2}[H(m) - H(\mu)]$, and $T_n \to_d N(0,1)$ as $n \to \infty$. The objects of interest are (1) the probabilities $P(T_n \le t)$ and $P(|T_n| \le t)$ for any finite, scalar $t$; (2) the probability that a test based on $T_n$ rejects a correct null hypothesis $H_0$ about the value of $H(\mu)$; and (3) the coverage probabilities of confidence intervals for $H(\mu)$ that are based on $T_n$. To avoid repetitive arguments, only probabilities and symmetrical hypothesis tests are treated explicitly. An $\alpha$-level symmetrical test based on $T_n$ rejects $H_0$ if $|T_n| > z_{n\alpha}$, where $z_{n\alpha}$ is the $\alpha$-level critical value. Arguments similar to those made in this section and Section 4 can be used to obtain the results stated in the introduction for one-sided tests and for confidence intervals based on the MCB.

The focus on statistics of the form (2.1) with a continuously distributed $X$ may appear to be restrictive, but this appearance is misleading. A wide variety of statistics that are important in applications can be approximated with negligible error by statistics of the form (2.1). In particular, as will be explained in Section 4.1, $t$ statistics for testing hypotheses about parameters estimated by GMM can be approximated this way.¹
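To fix ideas, here is one way the statistic (2.1) might be computed for a scalar series. The variance estimator shown, the delta method combined with a Newey-West (Bartlett-kernel) long-run variance, is only one consistent choice among many and is not prescribed by the paper; the function H, its derivative dH, and the hypothesized mu0 are supplied by the user.

```python
import numpy as np

def studentized_stat(x, H, dH, mu0, max_lag=None):
    """T_n = sqrt(n) * [H(m) - H(mu0)] / s_n for a scalar series, with
    s_n^2 built from the delta method and a Bartlett-weighted sum of
    autocovariances (one consistent choice among many)."""
    n = len(x)
    m = x.mean()
    if max_lag is None:
        max_lag = int(n ** (1 / 3))               # a common rule of thumb
    xc = x - m
    lrv = xc @ xc / n                             # lag-0 autocovariance
    for k in range(1, max_lag + 1):
        w = 1.0 - k / (max_lag + 1.0)             # Bartlett weight
        lrv += 2.0 * w * (xc[k:] @ xc[:-k]) / n
    s2 = dH(m) ** 2 * lrv                         # delta method
    return np.sqrt(n) * (H(m) - H(mu0)) / np.sqrt(s2)
```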
2.2. The MCB Procedure

Consider the problem of estimating $P(T_n \le z)$, $P(|T_n| \le z)$, or $z_{n\alpha}$. For any integer $j > q$, define $Y_j = (X_{j-1}', \dots, X_{j-q}')'$. Let $p_y$ denote the probability density function of $Y_{q+1} \equiv (X_q', \dots, X_1')'$. Let $f$ denote the probability density function of $X_j$ conditional on $Y_j$. If $f$ and $p_y$ were known, then $P(T_n \le z)$ and $P(|T_n| \le z)$ could be estimated as follows:

1. Draw $\tilde{Y}_{q+1} \equiv (\tilde{X}_q', \dots, \tilde{X}_1')'$ from the distribution whose density is $p_y$. Draw $\tilde{X}_{q+1}$ from the distribution whose density is $f(\cdot \mid \tilde{Y}_{q+1})$. Set $\tilde{Y}_{q+2} = (\tilde{X}_{q+1}', \dots, \tilde{X}_2')'$.

2. Having obtained $\tilde{Y}_j \equiv (\tilde{X}_{j-1}', \dots, \tilde{X}_{j-q}')'$ for any $j \ge q+2$, draw $\tilde{X}_j$ from the distribution whose density is $f(\cdot \mid \tilde{Y}_j)$. Set $\tilde{Y}_{j+1} = (\tilde{X}_j', \dots, \tilde{X}_{j-q+1}')'$.

3. Repeat step 2 until a simulated data series $\{\tilde{X}_j : j = 1, \dots, n\}$ has been obtained. Compute $\mu$ as (say) $\int x_1\, p_y(x_1, \dots, x_q)\, dx_1 \cdots dx_q$. Then compute a simulated test statistic $\tilde{T}_n$ by substituting the simulated data into (2.1).

4. Estimate $P(T_n \le z)$ ($P(|T_n| \le z)$) from the empirical distribution of $\tilde{T}_n$ ($|\tilde{T}_n|$) that is obtained by repeating steps 1-3 many times. Estimate $z_{n\alpha}$ by the $1 - \alpha$ quantile of the empirical distribution of $|\tilde{T}_n|$.

This procedure cannot be implemented in an application because $f$ and $p_y$ are unknown. The MCB replaces $f$ and $p_y$ with kernel nonparametric estimators. To obtain the estimators, let $K_f$ be a kernel function (in the sense of nonparametric density estimation) of a $(1+q)d$-dimensional argument. Let $K_p$ be a kernel function of a $dq$-dimensional argument. Let $\{h_n : n = 1, 2, \dots\}$ be a sequence of positive constants (bandwidths) such that $h_n \to 0$ as $n \to \infty$. Conditions that $K_f$, $K_p$, and $\{h_n\}$ must satisfy are given in Section 3. For $x \in \mathbb{R}^d$, $y \in \mathbb{R}^{dq}$, and $z = (x, y)$, define

$$p_{nz}(x, y) = \frac{1}{(n-q)\, h_n^{(1+q)d}} \sum_{j=q+1}^{n} K_f\!\left( \frac{x - X_j}{h_n}, \frac{y - Y_j}{h_n} \right)$$

and

$$p_{ny}(y) = \frac{1}{(n-q)\, h_n^{dq}} \sum_{j=q+1}^{n} K_p\!\left( \frac{y - Y_j}{h_n} \right).$$

The estimators of $p_y$ and $f$, respectively, are $p_{ny}$ and

(2.2) $f_n(x \mid y) = p_{nz}(x, y)/p_{ny}(y)$.

The MCB estimates $P(T_n \le z)$, $P(|T_n| \le z)$, and $z_{n\alpha}$ by repeatedly sampling the Markov process generated by the transition density $f_n(x \mid y)$. However, $f_n(x \mid y)$ is an inaccurate estimator of $f(x \mid y)$ in regions where $p_y(y)$ is close to zero. To obtain the asymptotic refinements described in Section 1, it is necessary to avoid such regions. Here, this is done by truncating the MCB sample. To carry out the truncation, let $C_n = \{y : p_y(y) \ge \lambda_n\}$, where $\lambda_n > 0$ for each $n = 1, 2, \dots$ and $\lambda_n \to 0$ as $n \to \infty$ at a rate that is specified in Section 3.1. Having obtained realizations $\hat{X}_1, \dots, \hat{X}_{j-1}$ ($j \ge q+1$) from the Markov process induced by $f_n(x \mid y)$, the MCB retains a realization of $\hat{X}_j$ only if $(\hat{X}_j', \dots, \hat{X}_{j-q+1}')' \in C_n$.
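For the scalar first-order case (d = q = 1) with Gaussian product kernels (an assumption made here for simplicity, not one the paper imposes), $f_n(\cdot \mid y)$ in (2.2) is exactly a mixture of $N(X_j, h^2)$ densities with weights proportional to the kernel weights on each $Y_j$, so it can be sampled exactly. The sketch below returns both the density estimate $p_{ny}$ and a sampler for $f_n(\cdot \mid y)$; the bandwidth h is a placeholder to be chosen by the user.

```python
import numpy as np
from scipy.stats import norm

def make_fn_sampler(x, h):
    """Kernel estimators for a scalar first-order Markov series
    (d = q = 1).  With Gaussian kernels, f_n(.|y) in (2.2) is a
    mixture of N(X_j, h^2) densities, so exact sampling is easy."""
    X, Y = x[1:], x[:-1]                     # pairs (X_j, Y_j = X_{j-1})

    def p_ny(y):
        """Kernel density estimate of p_y at y."""
        return norm.pdf((y - Y) / h).mean() / h

    def draw_fn(y, rng):
        """One draw from the estimated transition density f_n(.|y)."""
        w = norm.pdf((y - Y) / h)
        w = w / w.sum()                      # mixture weights
        j = rng.choice(len(X), p=w)          # pick a component
        return X[j] + h * rng.standard_normal()

    return p_ny, draw_fn
```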
Thus, the MCB proceeds as follows:

MCB 1. Draw $\hat{Y}_{q+1} \equiv (\hat{X}_q', \dots, \hat{X}_1')'$ from the distribution whose density is $p_{ny}$. Retain $\hat{Y}_{q+1}$ if $\hat{Y}_{q+1} \in C_n$. Otherwise, discard the current $\hat{Y}_{q+1}$ and draw a new one. Continue this process until a $\hat{Y}_{q+1} \in C_n$ is obtained.

MCB 2. Having obtained $\hat{Y}_j \equiv (\hat{X}_{j-1}', \dots, \hat{X}_{j-q}')'$ for any $j \ge q+1$, draw $\hat{X}_j$ from the distribution whose density is $f_n(\cdot \mid \hat{Y}_j)$. Retain $\hat{X}_j$ and set $\hat{Y}_{j+1} = (\hat{X}_j', \dots, \hat{X}_{j-q+1}')'$ if $(\hat{X}_j', \dots, \hat{X}_{j-q+1}')' \in C_n$. Otherwise, discard the current $\hat{X}_j$ and draw a new one. Continue this process until an $\hat{X}_j$ is obtained for which $(\hat{X}_j', \dots, \hat{X}_{j-q+1}')' \in C_n$.

MCB 3. Repeat step MCB 2 until a bootstrap data series $\{\hat{X}_j : j = 1, \dots, n\}$ has been obtained. Compute the bootstrap test statistic $\hat{T}_n = n^{1/2}[H(\hat{m}) - H(\hat{\mu})]/\hat{s}_n$, where $\hat{m} = n^{-1}\sum_{j=1}^{n} \hat{X}_j$, $\hat{\mu}$ is the mean of $X$ relative to the distribution induced by the sampling procedure of steps MCB 1 and MCB 2 (bootstrap sampling), and $\hat{s}_n^2$ is an estimator of the variance of the asymptotic distribution of $n^{1/2}[H(\hat{m}) - H(\hat{\mu})]$ under bootstrap sampling.

MCB 4. Estimate $P(T_n \le z)$ ($P(|T_n| \le z)$) from the empirical distribution of $\hat{T}_n$ ($|\hat{T}_n|$) that is obtained by repeating steps MCB 1-MCB 3 many times. Estimate $z_{n\alpha}$ by the $1 - \alpha$ quantile of the empirical distribution of $|\hat{T}_n|$. Denote this estimator by $\hat{z}_{n\alpha}$. A symmetrical test of $H_0$ based on $T_n$ and the bootstrap critical value rejects at the nominal $\alpha$ level if $|T_n| \ge \hat{z}_{n\alpha}$.
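Combining the pieces, the following schematic of steps MCB 1-MCB 4 reuses make_fn_sampler from the sketch above. Two simplifications are flagged in the comments: the initial state is resampled from the data rather than drawn from $p_{ny}$ itself, and the truncation set $C_n$ is approximated using the estimated density in place of $p_y$. The supplied stat function is assumed to Studentize around the bootstrap mean $\hat{\mu}$, as step MCB 3 requires.

```python
import numpy as np

def mcb_series(x, lam, rng, p_ny, draw_fn, max_tries=200):
    """One truncated MCB series of the same length as x
    (steps MCB 1 and MCB 2 for d = q = 1)."""
    n = len(x)
    y = rng.choice(x)                   # MCB 1: stand-in draw from p_ny
    while p_ny(y) < lam:                # keep only initial states in C_n
        y = rng.choice(x)
    series = [y]
    while len(series) < n:              # MCB 2: draw, truncate, keep
        for _ in range(max_tries):
            xj = draw_fn(y, rng)
            if p_ny(xj) >= lam:         # new state must stay in C_n
                break                   # (falls back to last draw if not)
        series.append(xj)
        y = xj
    return np.asarray(series)

def mcb_critical_value(x, stat, h, lam, alpha=0.05, B=499, seed=0):
    """Steps MCB 3 and MCB 4: the 1 - alpha quantile of |T_hat| over
    B bootstrap series."""
    rng = np.random.default_rng(seed)
    p_ny, draw_fn = make_fn_sampler(x, h)     # from the sketch above
    t_abs = [abs(stat(mcb_series(x, lam, rng, p_ny, draw_fn)))
             for _ in range(B)]
    return np.quantile(t_abs, 1 - alpha)
```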
2.3. Properties of the MCB

This section presents an informal summary of the main results of the paper and of the arguments that lead to them. The results are stated formally in Section 3. Let $\|\cdot\|$ denote the Euclidean norm. Let $\hat{P}$ denote the probability measure induced by the MCB sampling procedure (steps MCB 1-MCB 2) conditional on the data $\{X_j : j = 1, \dots, n\}$. Let any $\varepsilon > 0$ be given. The main results are that under regularity conditions stated in Section 3.1:

(2.3) $\sup_z |\hat{P}(\hat{T}_n \le z) - P(T_n \le z)| = O(n^{-1+\varepsilon})$

almost surely,

(2.4) $\sup_z |\hat{P}(|\hat{T}_n| \le z) - P(|T_n| \le z)| = O(n^{-3/2+\varepsilon})$

almost surely, and

(2.5) $P(|T_n| > \hat{z}_{n\alpha}) = \alpha + O(n^{-3/2+\varepsilon})$.

These results may be contrasted with the analogous ones for the block bootstrap. The block bootstrap with optimal block lengths yields $O_p(n^{-3/4})$, $O_p(n^{-6/5})$, and $O(n^{-5/4})$ for the right-hand sides of (2.3)-(2.5), respectively (Hall et al. 1995, Zvingelis 2000). Therefore, the MCB is more accurate than the block bootstrap under the regularity conditions of Section 3.1.

These results are obtained by carrying out Edgeworth expansions of $P(T_n \le z)$ and $\hat{P}(\hat{T}_n \le z)$. Additional notation is needed to describe the expansions. Let $\chi \equiv \{X_j : j = 1, \dots, n\}$ denote the data. Let $\Phi$ and $\phi$, respectively, denote the standard normal distribution function and density. The $j$'th cumulant of $T_n$ ($j \le 4$) has the form $n^{-1/2}\kappa_j + o(n^{-1/2})$ if $j$ is odd and $I(j = 2) + n^{-1}\kappa_j + o(n^{-1})$ if $j$ is even, where $\kappa_j$ is a constant (Hall 1992, p. 46). Define $\kappa = (\kappa_1, \dots, \kappa_4)'$. Conditional on $\chi$, the $j$'th cumulant of $\hat{T}_n$ almost surely has the form $n^{-1/2}\hat{\kappa}_j + o(n^{-1/2})$ if $j$ is odd and $I(j = 2) + n^{-1}\hat{\kappa}_j + o(n^{-1})$ if $j$ is even. The quantities $\hat{\kappa}_j$ depend on $\chi$. They are nonstochastic relative to bootstrap sampling but are random variables relative to the stochastic process that generates $\chi$. Define $\hat{\kappa} = (\hat{\kappa}_1, \dots, \hat{\kappa}_4)'$. Under the regularity conditions of Section 3.1, $P(T_n \le z)$ has the Edgeworth expansion

(2.6) $P(T_n \le z) = \Phi(z) + \sum_{j=1}^{2} n^{-j/2}\pi_j(z, \kappa)\phi(z) + O(n^{-3/2})$

uniformly over $z$, where $\pi_j(z, \kappa)$ is a polynomial function of $z$ for each $\kappa$, a continuously differentiable function of the components of $\kappa$ for each $z$, an even function of $z$ if $j = 1$, and an odd function of $z$ if $j = 2$. Moreover, $P(|T_n| \le z)$ has the expansion

(2.7) $P(|T_n| \le z) = 2\Phi(z) - 1 + 2n^{-1}\pi_2(z, \kappa)\phi(z) + O(n^{-3/2})$

uniformly over $z$. Conditional on $\chi$, the bootstrap probabilities $\hat{P}(\hat{T}_n \le z)$ and $\hat{P}(|\hat{T}_n| \le z)$ have the expansions

(2.8) $\hat{P}(\hat{T}_n \le z) = \Phi(z) + \sum_{j=1}^{2} n^{-j/2}\pi_j(z, \hat{\kappa})\phi(z) + O(n^{-3/2})$

and

(2.9) $\hat{P}(|\hat{T}_n| \le z) = 2\Phi(z) - 1 + 2n^{-1}\pi_2(z, \hat{\kappa})\phi(z) + O(n^{-3/2})$

uniformly over $z$ almost surely. Therefore,

(2.10) $|\hat{P}(\hat{T}_n \le z) - P(T_n \le z)| = O(n^{-1/2}\|\hat{\kappa} - \kappa\|) + O(n^{-1})$

and

(2.11) $|\hat{P}(|\hat{T}_n| \le z) - P(|T_n| \le z)| = O(n^{-1}\|\hat{\kappa} - \kappa\|) + O(n^{-3/2})$

almost surely uniformly over $z$. Under the regularity conditions of Section 3.1,

(2.12) $\|\hat{\kappa} - \kappa\| = O(n^{-1/2+\varepsilon})$

almost surely for any $\varepsilon > 0$. Results (2.3)-(2.4) follow by substituting (2.12) into (2.10)-(2.11). To obtain (2.5), observe that $\hat{P}(|\hat{T}_n| \le \hat{z}_{n\alpha}) = P(|T_n| \le z_{n\alpha}) = 1 - \alpha$. It follows from (2.7) and (2.9) that

(2.13) $2\Phi(z_{n\alpha}) - 1 + 2n^{-1}\pi_2(z_{n\alpha}, \kappa)\phi(z_{n\alpha}) = 1 - \alpha + O(n^{-3/2})$

and

(2.14) $2\Phi(\hat{z}_{n\alpha}) - 1 + 2n^{-1}\pi_2(\hat{z}_{n\alpha}, \hat{\kappa})\phi(\hat{z}_{n\alpha}) = 1 - \alpha + O(n^{-3/2})$

almost surely. Let $v_\alpha$ denote the $1 - \alpha/2$ quantile of the $N(0,1)$ distribution. Then Cornish-Fisher inversions of (2.13) and (2.14) (e.g., Hall 1992, pp. 88-89) give

(2.15) $z_{n\alpha} = v_\alpha - n^{-1}\pi_2(v_\alpha, \kappa) + O(n^{-3/2})$

and

(2.16) $\hat{z}_{n\alpha} = v_\alpha - n^{-1}\pi_2(v_\alpha, \hat{\kappa}) + O(n^{-3/2})$

almost surely. Therefore,

(2.17) $P(|T_n| \le \hat{z}_{n\alpha}) = P\{|T_n| \le z_{n\alpha} + n^{-1}[\pi_2(v_\alpha, \kappa) - \pi_2(v_\alpha, \hat{\kappa})] + O(n^{-3/2})\}$

(2.18) $\qquad\qquad\qquad = P[|T_n| \le z_{n\alpha} + O(n^{-1}\|\hat{\kappa} - \kappa\|) + O(n^{-3/2})]$.

Result (2.5) follows by applying (2.12) to the right-hand side of (2.18).
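To see where (2.15) comes from, posit $z_{n\alpha} = v_\alpha + c\,n^{-1} + o(n^{-1})$ and expand (2.13) around $v_\alpha$. The display below is a standard Cornish-Fisher step, reconstructed here rather than quoted from the paper.

```latex
% Expand (2.13) at z_{n\alpha} = v_\alpha + c\,n^{-1} + o(n^{-1}),
% using 2\Phi(v_\alpha) - 1 = 1 - \alpha:
\begin{aligned}
1 - \alpha + O(n^{-3/2})
  &= 2\Phi(z_{n\alpha}) - 1
     + 2n^{-1}\pi_2(z_{n\alpha},\kappa)\phi(z_{n\alpha}) \\
  &= 1 - \alpha
     + 2n^{-1}\bigl[c + \pi_2(v_\alpha,\kappa)\bigr]\phi(v_\alpha)
     + o(n^{-1}).
\end{aligned}
```

Matching the $n^{-1}$ terms gives $c = -\pi_2(v_\alpha, \kappa)$, which is (2.15); the identical argument applied to (2.14) yields (2.16).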
3. MAIN RESULTS

This section presents theorems that formalize results (2.3)-(2.5). [...]

[...] extension to approximate Markov processes.

4.1. Tests Based on GMM Estimators

This section gives conditions under which (3.2)-(3.4) hold for the $t$ statistic for testing a hypothesis about a parameter that is estimated by GMM. The main task is to show that the probability distribution of the GMM $t$ statistic can be approximated with sufficient accuracy by the distribution of a statistic of the form (2.1). Hall and [...]

[...] recentering. Recentering is unnecessary if $L_G = L_\theta$, but it simplifies the technical analysis and, therefore, is done here.⁸ To form the bootstrap version of $t_{nr}$, let $\hat{\theta}_n$ denote the bootstrap estimator of $\theta$. Let $\hat{D}_n$ be the quantity that is obtained by replacing $X_i$ with $\hat{X}_i$ and $\theta_n$ with $\hat{\theta}_n$ in the expression for $D_n$. Define $\hat{D}_n = \hat{E}\,\partial G(\hat{X}_{1+\tau}, \hat{\theta}_n)/\partial\theta$, [...] $\hat{G}(\hat{X}_i, \hat{\theta})\hat{G}(\hat{X}_i, \hat{\theta})' + \hat{\Omega}_n(\theta$ [...]

[...] 1, 4(i), 4(v), and 5-10 hold. For any $\alpha \in (0,1)$ let $\hat{z}_{n\alpha}$ satisfy $\hat{P}(|\hat{t}_{nr}| > \hat{z}_{n\alpha}) = \alpha$. Then (3.2)-(3.4) hold with $t_{nr}$ and $\hat{t}_{nr}$ in place of $T_n$ and $\hat{T}_n$.

4.2. Approximate Markov Processes

This section extends the results of Section 3.2 to approximate Markov processes. As in Sections 2-3, the objective is to carry out inference based on the statistic $T_n$ defined in (2.1). For an arbitrary random vector [...]

[...] the block bootstrap requires selecting the block length. Data-based methods for selecting block lengths in hypothesis testing are not available, so results are reported here for three different block lengths (2, 5, 10). The experiments were carried out in GAUSS using GAUSS random number generators. The sample size is $n = 50$. There are 5000 Monte Carlo replications in each experiment. MCB and block bootstrap [...]

[...] lacks the higher-order moments needed to obtain good accuracy with the bootstrap even with iid data. [...]

6. CONCLUSIONS

The block bootstrap is the best known method for implementing the bootstrap with time series data when one does not have a parametric model that reduces the DGP to simple random sampling. However, the errors made by the block bootstrap converge to zero only slightly faster than those made by first-order asymptotic approximations. [...] converge to zero more rapidly than those made by the block bootstrap if the DGP is a Markov or approximate Markov process and certain other conditions are satisfied. These conditions are stronger than those required by the block bootstrap. Therefore, the MCB is not a substitute for the block bootstrap, but the MCB is an attractive alternative to the block bootstrap when the MCB's stronger regularity conditions are satisfied. Further research could usefully investigate the possibility of developing bootstrap methods that are more accurate than the block bootstrap but impose less a priori structure on the DGP than do the MCB or the sieve bootstrap for linear processes.

FOOTNOTES

1. Statistics with asymptotic chi-square distributions are not treated explicitly in this paper. However, [...]

[...] than the errors made by the block bootstrap.

4. Götze and Künsch (1996) have given conditions under which $T_n$ with a kernel-type variance estimator has an Edgeworth expansion up to $O(n^{-1/2})$. Analogous conditions are not yet known for expansions through $O(n^{-1})$. A recent paper by Inoue and Shintani (2000) gives expansions for statistics based on the block bootstrap for linear models with kernel-type [...]

APPENDIX

[...] $s\lambda_n$) a.s. By a result of Nagaev (1961), it follows from (A5) and Assumption 4(ii) that for some $\delta < 1$, the second term is a.s. $O(\delta^{s/k-1})$, where $k$ is as in Assumption 4(ii). The third term is $O(\rho^s)$ for some $\rho < 1$ because $\{X_j\}$ is GSM and, therefore, uniformly ergodic (Doukhan 1995, p. 21). For $b$ sufficiently large, the second two terms are $o(\lambda_n)$. Q.E.D.

Lemma 9: Define [...]

[...] $\le (1 - \omega_n)^{l-j-q-2} f(x_{l-q-1} \mid y_l)\, f(x_j \mid y_{j+q+1})$. Therefore, $\tau_{nl2}^{(1)} \le O(\lambda_n \log n)$ a.s. uniformly over $\nu$. Similar arguments can be used to show that $\tau_{nl2}^{(2)} = O(\lambda_n \log n)$ and $\tau_{nl2}^{(3)} = o(\lambda_n \log n)$ a.s. uniformly over $\nu$. Therefore, $\tau_{n2} = O[\lambda_n(\log n)^2]$ a.s. uniformly over $\nu$. Now consider $\tau_{n4}$. Given $w$, let $\bar{w}$ be the smallest value of $i$ such that $w_i = 1$. [...]
