Solution Manual: Mathematical Statistics with Applications, 7th Edition (Wackerly), Chapter 16


Chapter 16: Introduction to Bayesian Methods of Inference

16.1 Refer to Table 16.1.
a. beta(10, 30)
b. n = 25
c. beta(10, 30), n = 25
d. Yes.
e. The posterior based on the beta(1, 3) prior.

16.2 a.-d. Refer to Section 16.2.

16.3 a.-e. Applet exercise, so answers vary.

16.4 a.-d. Applet exercise, so answers vary.

16.5 It should take more trials with a beta(10, 30) prior.

16.6 Here, $L(y \mid p) = p(y \mid p) = \binom{n}{y} p^{y}(1-p)^{n-y}$, where y = 0, 1, ..., n and 0 < p < 1. So,
$$f(y, p) = \binom{n}{y} p^{y}(1-p)^{n-y} \times \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1}(1-p)^{\beta-1},$$
so that
$$m(y) = \int_0^1 \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{y+\alpha-1}(1-p)^{n-y+\beta-1}\, dp = \binom{n}{y} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(y+\alpha)\Gamma(n-y+\beta)}{\Gamma(n+\alpha+\beta)}.$$
The posterior density of p is then
$$g^{*}(p \mid y) = \frac{\Gamma(n+\alpha+\beta)}{\Gamma(y+\alpha)\Gamma(n-y+\beta)}\, p^{y+\alpha-1}(1-p)^{n-y+\beta-1}, \qquad 0 < p < 1.$$
This is the identical beta density as in Example 16.1 (recall that the sum of n i.i.d. Bernoulli random variables is binomial with n trials and success probability p).

16.7 a. The Bayes estimator is the mean of the posterior distribution. With the beta(1, 3) prior, the posterior is a beta with parameters α* = y + 1 and β* = n − y + 3, so the posterior mean is
$$\hat{p}_B = \frac{Y+1}{n+4} = \frac{Y}{n+4} + \frac{1}{n+4}.$$
b. $E(\hat{p}_B) = \dfrac{E(Y)+1}{n+4} = \dfrac{np+1}{n+4} \neq p$ and $V(\hat{p}_B) = \dfrac{V(Y)}{(n+4)^2} = \dfrac{np(1-p)}{(n+4)^2}$.

16.8 a. From Ex. 16.6, the Bayes estimator for p is $\hat{p}_B = E(p \mid Y) = \dfrac{Y+1}{n+2}$.
b. The beta(1, 1) prior is the uniform distribution on the interval (0, 1).
c. We know that $\hat{p} = Y/n$ is an unbiased estimator for p. However, for the Bayes estimator,
$$E(\hat{p}_B) = \frac{E(Y)+1}{n+2} = \frac{np+1}{n+2} \quad\text{and}\quad V(\hat{p}_B) = \frac{V(Y)}{(n+2)^2} = \frac{np(1-p)}{(n+2)^2},$$
so that
$$\mathrm{MSE}(\hat{p}_B) = V(\hat{p}_B) + [B(\hat{p}_B)]^2 = \frac{np(1-p)}{(n+2)^2} + \left(\frac{np+1}{n+2} - p\right)^2 = \frac{np(1-p) + (1-2p)^2}{(n+2)^2}.$$
d. For the unbiased estimator $\hat{p}$, $\mathrm{MSE}(\hat{p}) = V(\hat{p}) = p(1-p)/n$. So, holding n fixed, we must determine the values of p such that
$$\frac{np(1-p) + (1-2p)^2}{(n+2)^2} < \frac{p(1-p)}{n}.$$
The range of values of p where this is satisfied is found in Ex. 8.17(c); a numeric check is sketched below.
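As a quick check of part (d), the two MSE functions can be compared on a grid of p values in R. This is only an illustrative sketch; the sample size n = 25 is an arbitrary choice, not part of the exercise:

> n <- 25
> p <- seq(.01, .99, by=.01)
> mse.unbiased <- p*(1-p)/n                      # MSE of phat = Y/n
> mse.bayes <- (n*p*(1-p) + (1-2*p)^2)/(n+2)^2   # MSE of the Bayes estimator
> range(p[mse.bayes < mse.unbiased])             # grid values where the Bayes estimator wins
[1] 0.15 0.85

Consistent with Ex. 8.17(c), the Bayes estimator has the smaller MSE on an interval of p values centered at 1/2, while the unbiased estimator wins near 0 and 1.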
16.9 a. Here, $L(y \mid p) = p(y \mid p) = (1-p)^{y-1} p$, where y = 1, 2, ... and 0 < p < 1. So,
$$f(y, p) = (1-p)^{y-1} p \times \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha-1}(1-p)^{\beta-1},$$
so that
$$m(y) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 p^{\alpha}(1-p)^{\beta+y-2}\, dp = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \cdot \frac{\Gamma(\alpha+1)\Gamma(y+\beta-1)}{\Gamma(y+\alpha+\beta)}.$$
The posterior density of p is then
$$g^{*}(p \mid y) = \frac{\Gamma(\alpha+\beta+y)}{\Gamma(\alpha+1)\Gamma(\beta+y-1)}\, p^{\alpha}(1-p)^{\beta+y-2}, \qquad 0 < p < 1.$$
This is a beta density with shape parameters α* = α + 1 and β* = β + y − 1.
b. The Bayes estimators are
(1) $\hat{p}_B = E(p \mid Y) = \dfrac{\alpha+1}{\alpha+\beta+Y}$,
(2) $[\widehat{p(1-p)}]_B = E(p \mid Y) - E(p^2 \mid Y) = \dfrac{\alpha+1}{\alpha+\beta+Y} - \dfrac{(\alpha+2)(\alpha+1)}{(\alpha+\beta+Y+1)(\alpha+\beta+Y)} = \dfrac{(\alpha+1)(\beta+Y-1)}{(\alpha+\beta+Y+1)(\alpha+\beta+Y)}$,
where the second expectation was found using the result from Ex. 4.200. (Alternately, the answer could be found by solving $E[p(1-p) \mid Y] = \int_0^1 p(1-p)\, g^{*}(p \mid Y)\, dp$.)

16.10 a. The joint density of the random sample and θ is the product of the marginal (exponential) densities multiplied by the gamma prior:
$$f(y_1, \dots, y_n, \theta) = \left[\prod_{i=1}^{n} \theta e^{-\theta y_i}\right] \frac{1}{\Gamma(\alpha)\beta^{\alpha}}\, \theta^{\alpha-1} e^{-\theta/\beta} = \frac{\theta^{n+\alpha-1}}{\Gamma(\alpha)\beta^{\alpha}} \exp\!\left(-\theta \sum_{i=1}^{n} y_i - \theta/\beta\right) = \frac{\theta^{n+\alpha-1}}{\Gamma(\alpha)\beta^{\alpha}} \exp\!\left[-\theta \Big/ \left(\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right)\right].$$
b. $m(y_1, \dots, y_n) = \int_0^{\infty} \frac{\theta^{n+\alpha-1}}{\Gamma(\alpha)\beta^{\alpha}} \exp\!\left[-\theta \Big/ \left(\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right)\right] d\theta$, but this integrand resembles a gamma density with shape parameter $n + \alpha$ and scale parameter $\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}$. Thus, the solution is
$$m(y_1, \dots, y_n) = \frac{\Gamma(n+\alpha)}{\Gamma(\alpha)\beta^{\alpha}} \left(\frac{\beta}{\beta\sum_{i=1}^{n} y_i + 1}\right)^{n+\alpha}.$$
c. The solution follows from parts (a) and (b): the posterior is gamma with shape $n+\alpha$ and scale $\beta/(\beta\sum_{i=1}^{n} y_i + 1)$.
d. Using the result in Ex. 4.111,
$$\hat{\mu}_B = E(\mu \mid \mathbf{Y}) = E(1/\theta \mid \mathbf{Y}) = \frac{1}{\beta^{*}(\alpha^{*}-1)} = \frac{\beta\sum_{i=1}^{n} Y_i + 1}{\beta(n+\alpha-1)}.$$
e. The prior mean for 1/θ is $E(1/\theta) = \frac{1}{\beta(\alpha-1)}$ (again by Ex. 4.111). Thus, $\hat{\mu}_B$ can be written as
$$\hat{\mu}_B = \frac{\sum_{i=1}^{n} Y_i}{n+\alpha-1} + \frac{1}{\beta(n+\alpha-1)} = \bar{Y}\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right),$$
which is a weighted average of the MLE and the prior mean.
f. We know that $\bar{Y}$ is unbiased; thus $E(\bar{Y}) = \mu = 1/\theta$. Therefore,
$$E(\hat{\mu}_B) = E(\bar{Y})\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right) = \frac{1}{\theta}\left(\frac{n}{n+\alpha-1}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{\alpha-1}{n+\alpha-1}\right).$$
Therefore, $\hat{\mu}_B$ is biased. However, it is asymptotically unbiased since $E(\hat{\mu}_B) - 1/\theta \to 0$. Also,
$$V(\hat{\mu}_B) = V(\bar{Y})\left(\frac{n}{n+\alpha-1}\right)^2 = \frac{1}{\theta^2 n} \cdot \frac{n^2}{(n+\alpha-1)^2} \to 0,$$
so $\hat{\mu}_B \xrightarrow{p} 1/\theta$ and thus it is consistent.

16.11 a. With $U = \sum_{i=1}^{n} Y_i$ distributed Poisson($n\lambda$), the joint density of U and λ is
$$f(u, \lambda) = p(u \mid \lambda)\, g(\lambda) = \frac{(n\lambda)^{u} e^{-n\lambda}}{u!} \times \frac{\lambda^{\alpha-1} e^{-\lambda/\beta}}{\Gamma(\alpha)\beta^{\alpha}} = \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\, \lambda^{u+\alpha-1} e^{-n\lambda - \lambda/\beta} = \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\, \lambda^{u+\alpha-1} \exp\!\left[-\lambda \Big/ \left(\frac{\beta}{n\beta+1}\right)\right].$$
b. $m(u) = \int_0^{\infty} \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\, \lambda^{u+\alpha-1} \exp\!\left[-\lambda \Big/ \left(\frac{\beta}{n\beta+1}\right)\right] d\lambda$, but this integrand resembles a gamma density with shape parameter $u + \alpha$ and scale parameter $\frac{\beta}{n\beta+1}$. Thus, the solution is
$$m(u) = \frac{n^{u}}{u!\,\Gamma(\alpha)\beta^{\alpha}}\, \Gamma(u+\alpha) \left(\frac{\beta}{n\beta+1}\right)^{u+\alpha}.$$
c. The result follows from parts (a) and (b): the posterior is gamma with shape $u+\alpha$ and scale $\beta/(n\beta+1)$.
d. $\hat{\lambda}_B = E(\lambda \mid U) = \alpha^{*}\beta^{*} = (U + \alpha)\left(\dfrac{\beta}{n\beta+1}\right)$.
e. The prior mean for λ is E(λ) = αβ. From the above,
$$\hat{\lambda}_B = \left(\sum_{i=1}^{n} Y_i + \alpha\right)\frac{\beta}{n\beta+1} = \bar{Y}\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right),$$
which is a weighted average of the MLE and the prior mean; a numeric check of this identity follows this exercise.
f. We know that $\bar{Y}$ is unbiased; thus $E(\bar{Y}) = \lambda$. Therefore,
$$E(\hat{\lambda}_B) = E(\bar{Y})\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right) = \lambda\left(\frac{n\beta}{n\beta+1}\right) + \alpha\beta\left(\frac{1}{n\beta+1}\right).$$
So, $\hat{\lambda}_B$ is biased but it is asymptotically unbiased since $E(\hat{\lambda}_B) - \lambda \to 0$. Also,
$$V(\hat{\lambda}_B) = V(\bar{Y})\left(\frac{n\beta}{n\beta+1}\right)^2 = \frac{\lambda}{n} \cdot \frac{(n\beta)^2}{(n\beta+1)^2} = \frac{\lambda n \beta^2}{(n\beta+1)^2} \to 0,$$
so $\hat{\lambda}_B \xrightarrow{p} \lambda$ and thus it is consistent.
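The weighted-average (shrinkage) form in part (e) of Ex. 16.11 is easy to confirm numerically. The following is only a sketch: the gamma(2, 3) prior, the Poisson mean, and the simulated data are arbitrary choices, not values from the exercise:

> set.seed(1)
> n <- 25; alpha <- 2; beta <- 3              # hypothetical gamma prior
> y <- rpois(n, lambda=7); u <- sum(y)        # hypothetical Poisson sample
> post.mean <- (u + alpha)*beta/(n*beta + 1)  # Bayes estimator from part (d)
> shrunk <- mean(y)*(n*beta/(n*beta + 1)) + alpha*beta/(n*beta + 1)  # part (e)
> all.equal(post.mean, shrunk)
[1] TRUE

As n grows, the weight nβ/(nβ + 1) on the MLE tends to 1, so the influence of the prior washes out; the same structure appears in Ex. 16.10(e) and Ex. 16.12(e).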
16.12 First, it is given that $W = vU = v\sum_{i=1}^{n}(Y_i - \mu)^2$ is chi-square with n degrees of freedom. Then, the density function for U (conditioned on v) is
$$f_U(u \mid v) = v\, f_W(uv) = v\, \frac{(uv)^{n/2-1} e^{-uv/2}}{\Gamma(n/2)\, 2^{n/2}} = \frac{u^{n/2-1} v^{n/2} e^{-uv/2}}{\Gamma(n/2)\, 2^{n/2}}.$$
a. The joint density of U and v is then
$$f(u, v) = f_U(u \mid v)\, g(v) = \frac{u^{n/2-1} v^{n/2} e^{-uv/2}}{\Gamma(n/2)\, 2^{n/2}} \times \frac{v^{\alpha-1} e^{-v/\beta}}{\Gamma(\alpha)\beta^{\alpha}} = \frac{u^{n/2-1} v^{n/2+\alpha-1}}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}} \exp(-uv/2 - v/\beta) = \frac{u^{n/2-1} v^{n/2+\alpha-1}}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}} \exp\!\left[-v \Big/ \left(\frac{2\beta}{u\beta+2}\right)\right].$$
b. $m(u) = \int_0^{\infty} \frac{u^{n/2-1} v^{n/2+\alpha-1}}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}} \exp\!\left[-v \Big/ \left(\frac{2\beta}{u\beta+2}\right)\right] dv$, but this integrand resembles a gamma density with shape parameter $n/2 + \alpha$ and scale parameter $\frac{2\beta}{u\beta+2}$. Thus, the solution is
$$m(u) = \frac{u^{n/2-1}}{\Gamma(n/2)\Gamma(\alpha)\, 2^{n/2} \beta^{\alpha}}\, \Gamma(n/2+\alpha) \left(\frac{2\beta}{u\beta+2}\right)^{n/2+\alpha}.$$
c. The result follows from parts (a) and (b): the posterior of v is gamma with shape $n/2+\alpha$ and scale $2\beta/(U\beta+2)$.
d. Using the result in Ex. 4.111(e),
$$\hat{\sigma}^2_B = E(\sigma^2 \mid U) = E(1/v \mid U) = \frac{1}{\beta^{*}(\alpha^{*}-1)} = \frac{1}{n/2+\alpha-1}\left(\frac{U\beta+2}{2\beta}\right) = \frac{U\beta+2}{\beta(n+2\alpha-2)}.$$
e. The prior mean for $\sigma^2 = 1/v$ is $\frac{1}{\beta(\alpha-1)}$. From the above,
$$\hat{\sigma}^2_B = \frac{U\beta+2}{\beta(n+2\alpha-2)} = \frac{U}{n}\left(\frac{n}{n+2\alpha-2}\right) + \frac{1}{\beta(\alpha-1)}\left(\frac{2(\alpha-1)}{n+2\alpha-2}\right),$$
which is a weighted average of the MLE U/n and the prior mean.

16.13 a. (.099, .710)
b. Both probabilities are .025.
c. P(.099 < p < .710) = .95.
d.-g. Answers vary.
h. The credible intervals should decrease in width with larger sample sizes.

16.14 a.-b. Answers vary.

16.15 With y = 4, n = 25, and a beta(1, 3) prior, the posterior distribution for p is beta(5, 24). Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,24)
[1] 0.06064291
> qbeta(.975,5,24)
[1] 0.3266527

16.16 With y = 4, n = 25, and a beta(1, 1) prior, the posterior distribution for p is beta(5, 22). Using R, the lower and upper endpoints of the 95% credible interval are given by:
> qbeta(.025,5,22)
[1] 0.06554811
> qbeta(.975,5,22)
[1] 0.3486788
This is a wider interval than the one obtained in Ex. 16.15.

16.17 With y = 1, n = 6, and a beta(10, 5) prior, the posterior distribution for p is beta(11, 10). Using R, the lower and upper endpoints of the 80% credible interval for p are given by:
> qbeta(.10,11,10)
[1] 0.3847514
> qbeta(.90,11,10)
[1] 0.6618291

16.18 With n = 15, $\sum_{i=1}^{15} y_i = 30.27$, and a gamma(2.3, 0.4) prior, the posterior distribution for θ is gamma(17.3, .0305167). Using R, the lower and upper endpoints of the 80% credible interval for θ are given by:
> qgamma(.10,shape=17.3,scale=.0305167)
[1] 0.3731982
> qgamma(.90,shape=17.3,scale=.0305167)
[1] 0.6957321
The 80% credible interval for θ is (.3732, .6957). To create an 80% credible interval for 1/θ, the endpoints of the previous interval can be inverted: .3732 < θ < .6957 implies 1/(.3732) > 1/θ > 1/(.6957). Since 1/(.6957) = 1.4374 and 1/(.3732) = 2.6795, the 80% credible interval for 1/θ is (1.4374, 2.6795); a one-line version of this inversion is sketched below.
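The endpoint inversion at the end of Ex. 16.18 can be done in one R step. Note the rev(): since 1/θ is a decreasing function of θ, the quantile order reverses. (Inverting the unrounded endpoints gives 1.4373 rather than the 1.4374 obtained above from the rounded ones.)

> round(rev(1/qgamma(c(.10, .90), shape=17.3, scale=.0305167)), 4)
[1] 1.4373 2.6795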
16.19 With n = 25, $\sum_{i=1}^{25} y_i = 174$, and a gamma(2, 3) prior, the posterior distribution for λ is gamma(176, .0394739). Using R, the lower and upper endpoints of the 95% credible interval for λ are given by:
> qgamma(.025,shape=176,scale=.0394739)
[1] 5.958895
> qgamma(.975,shape=176,scale=.0394739)
[1] 8.010663

16.20 With n = 8, u = .8579, and a gamma(5, 2) prior, the posterior distribution for v is gamma(9, 1.0764842). Using R, the lower and upper endpoints of the 90% credible interval for v are given by:
> qgamma(.05,shape=9,scale=1.0764842)
[1] 5.054338
> qgamma(.95,shape=9,scale=1.0764842)
[1] 15.53867
The 90% credible interval for v is (5.054, 15.539). Similar to Ex. 16.18, the 90% credible interval for σ² = 1/v is found by inverting the endpoints of the credible interval for v, giving (.0644, .1979).

16.21 From Ex. 16.15, the posterior distribution of p is beta(5, 24). Now, we can find P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,24)
[1] 0.9525731
Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 − .9525731 = .0474269. Since the probability associated with H0 is much larger, our decision is to not reject H0.

16.22 From Ex. 16.16, the posterior distribution of p is beta(5, 22). We can find P*(p ∈ Ω0) = P*(p < .3) by (in R):
> pbeta(.3,5,22)
[1] 0.9266975
Therefore, P*(p ∈ Ωa) = P*(p ≥ .3) = 1 − .9266975 = .0733025. Since the probability associated with H0 is much larger, our decision is to not reject H0.

16.23 From Ex. 16.17, the posterior distribution of p is beta(11, 10). Thus, P*(p ∈ Ω0) = P*(p < .4) is given by (in R):
> pbeta(.4,11,10)
[1] 0.1275212
Therefore, P*(p ∈ Ωa) = P*(p ≥ .4) = 1 − .1275212 = .8724788. Since the probability associated with Ha is much larger, our decision is to reject H0.

16.24 From Ex. 16.18, the posterior distribution for θ is gamma(17.3, .0305). To test H0: θ > .5 vs. Ha: θ ≤ .5, we calculate P*(θ ∈ Ω0) = P*(θ > .5) as:
> 1 - pgamma(.5,shape=17.3,scale=.0305)
[1] 0.5561767
Therefore, P*(θ ∈ Ωa) = P*(θ ≤ .5) = 1 − .5561767 = .4438233. The probability associated with H0 is larger (but only marginally so), so our decision is to not reject H0.

16.25 From Ex. 16.19, the posterior distribution for λ is gamma(176, .0395). Thus, P*(λ ∈ Ω0) = P*(λ > 6) is found by:
> 1 - pgamma(6,shape=176,scale=.0395)
[1] 0.9700498
Therefore, P*(λ ∈ Ωa) = P*(λ ≤ 6) = 1 − .9700498 = .0299502. Since the probability associated with H0 is much larger, our decision is to not reject H0.

16.26 From Ex. 16.20, the posterior distribution for v is gamma(9, 1.0765). To test H0: v < 10 vs. Ha: v ≥ 10, we calculate P*(v ∈ Ω0) = P*(v < 10). Note that the third positional argument of pgamma() is the rate, so the scale parameter must be passed by name (the call pgamma(10, 9, 1.0765), which returns 0.7464786, silently treats 1.0765 as a rate):
> pgamma(10, shape=9, scale=1.0765)
which gives P*(v < 10) ≈ .5818. Therefore, P*(v ∈ Ωa) = P*(v ≥ 10) ≈ 1 − .5818 = .4182. Since the probability associated with H0 is larger, our decision is to not reject H0.
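To tie Ex. 16.20 and Ex. 16.26 together, here is a short sketch that derives the posterior parameters from the data summary instead of typing them in by hand. It uses the values given in Ex. 16.20 (n = 8, u = .8579, gamma(5, 2) prior) and names the scale= argument throughout to avoid the rate/scale pitfall noted above; the commented results are approximate:

> n <- 8; u <- .8579                    # data summary from Ex. 16.20
> alpha <- 5; beta <- 2                 # gamma(5, 2) prior on the precision v
> shape <- n/2 + alpha                  # posterior shape
> scale <- 2*beta/(u*beta + 2)          # posterior scale
> c(shape, scale)
[1] 9.000000 1.076484
> qgamma(c(.05, .95), shape=shape, scale=scale)  # 90% credible interval, about (5.054, 15.539)
> pgamma(10, shape=shape, scale=scale)           # P*(v < 10) for Ex. 16.26, about .5818

Written this way, the interval and the test can be rerun for any other data summary u by changing one line.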