A Course in Mathematical Statistics, Part 9

Chapter 17
Analysis of Variance

The Analysis of Variance techniques discussed in this chapter can be used to study a great variety of problems of practical interest. Below we mention a few such problems.

Crop yields corresponding to different soil treatments.
Crop yields corresponding to different soils and fertilizers.
Comparison of a certain brand of gasoline with and without an additive, by using it in several cars.
Comparison of different brands of gasoline, by using them in several cars.
Comparison of the wearing of different materials.
Comparison of the effect of different types of oil on the wear of several piston rings, etc.
Comparison of the yields of a chemical substance obtained by different catalytic methods.
Comparison of the strengths of certain objects made of different batches of some material.
Identification of the melting point of a metal by using different thermometers.
Comparison of test scores from different schools and different teachers, etc.

Below, we discuss some statistical models which make the comparisons mentioned above possible.

17.1 One-way Layout (or One-way Classification) with the Same Number of Observations Per Cell

The models to be discussed in the present chapter are special cases of the general model which was studied in the previous chapter. In this section we consider what is known as a one-way layout, or one-way classification, which we introduce by means of a couple of examples.

EXAMPLE 1  Consider $I$ machines, manufactured by $I$ different companies but all intended for the same purpose. A purchaser who is interested in acquiring a number of these machines is faced with the question of which brand to choose. His decision is to be based on the productivity of each of the $I$ different machines. To this end, let a worker run each of the $I$ machines for $J$ days each, always under the same conditions, and denote by $Y_{ij}$ his output on the $j$th day of running the $i$th machine. Let $\mu_i$ be the average output of the worker when running the $i$th machine, and let $e_{ij}$ be his "error" (variation) on the $j$th day of running the $i$th machine. Then it is reasonable to assume that the r.v.'s $e_{ij}$ are normally distributed with mean 0 and variance $\sigma^2$; it is further assumed that they are independent. Therefore the $Y_{ij}$'s are themselves r.v.'s, and one has the following model:

$$Y_{ij} = \mu_i + e_{ij}, \quad \text{where the } e_{ij} \text{ are independent } N(0, \sigma^2), \qquad i = 1, 2, \ldots, I\ (\geq 2), \quad j = 1, 2, \ldots, J\ (\geq 2).$$   (1)

EXAMPLE 2  For an agricultural example, consider $I \cdot J$ identical plots arranged in an $I \times J$ orthogonal array. Suppose that the same agricultural commodity (some sort of grain, tomatoes, etc.) is planted in all $I \cdot J$ plots and that the plants in the $i$th row are treated by the $i$th of $I$ available fertilizers. All other conditions being assumed the same, the problem is that of comparing the $I$ different kinds of fertilizers with a view to using the most appropriate one on a large scale. Once again, we denote by $\mu_i$ the average yield of each of the $J$ plots in the $i$th row, and let $e_{ij}$ stand for the variation of the yield from plot to plot in the $i$th row, $i = 1, \ldots, I$. Then it is again reasonable to assume that the r.v.'s $e_{ij}$, $i = 1, \ldots, I$; $j = 1, \ldots, J$ are independent $N(0, \sigma^2)$, so that the yield $Y_{ij}$ of the $j$th plot treated by the $i$th kind of fertilizer is given by (1).
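Model (1) amounts to a simple data-generating recipe: pick a mean for each group and add independent normal noise. The short simulation below is only a sketch of that recipe (numpy is assumed, and the values of $I$, $J$, the $\mu_i$ and $\sigma$ are made up for illustration); the row averages of the simulated array approximate the $\mu_i$, anticipating the least-squares estimates derived next.

```python
import numpy as np

# Sketch: simulate the one-way layout model (1), Y_ij = mu_i + e_ij,
# with e_ij independent N(0, sigma^2).  The choices of I, J, mu and sigma
# below are illustrative only.
rng = np.random.default_rng(0)
I, J, sigma = 3, 5, 2.0
mu = np.array([79.0, 64.0, 74.0])          # hypothetical group means mu_i

e = rng.normal(0.0, sigma, size=(I, J))    # errors e_ij ~ N(0, sigma^2)
Y = mu[:, None] + e                        # Y_ij = mu_i + e_ij, an I x J array

print(Y.mean(axis=1))                      # row averages Y_i. approximate mu_i
```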
One may envision the $I$ objects (machines, fertilizers, etc.) as being represented by the $I$ spaces between $I + 1$ horizontal (straight) lines, and the $J$ objects (days, plots, etc.) as being represented by the $J$ spaces between $J + 1$ vertical (straight) lines. In such a case there are formed $IJ$ rectangles in the resulting rectangular array, which are also referred to as cells (see also Fig. 17.1). The same interpretation and terminology is used in similar situations throughout this chapter.

(Figure 17.1: the $I \times J$ rectangular array of cells; the $(i, j)$th cell lies in the $i$th row and $j$th column.)

In connection with model (1), there are three basic problems we are interested in: estimation of $\mu_i$, $i = 1, \ldots, I$; testing the hypothesis $H: \mu_1 = \cdots = \mu_I$ ($= \mu$, unspecified), that is, that there is no difference between the $I$ machines, or the $I$ kinds of fertilizers; and estimation of $\sigma^2$.

Set

$$\mathbf{Y} = (Y_{11}, \ldots, Y_{1J};\ Y_{21}, \ldots, Y_{2J};\ \ldots;\ Y_{I1}, \ldots, Y_{IJ})', \qquad \mathbf{e} = (e_{11}, \ldots, e_{1J};\ e_{21}, \ldots, e_{2J};\ \ldots;\ e_{I1}, \ldots, e_{IJ})', \qquad \boldsymbol{\beta} = (\mu_1, \ldots, \mu_I)',$$

and let $\mathbf{X}'$ be the $IJ \times I$ matrix whose first $J$ rows are $(1, 0, \ldots, 0)$, whose next $J$ rows are $(0, 1, 0, \ldots, 0)$, and so on, with the last $J$ rows equal to $(0, 0, \ldots, 0, 1)$. Then it is clear that $\mathbf{Y} = \mathbf{X}'\boldsymbol{\beta} + \mathbf{e}$. Thus we have the model described in (6) of Chapter 16 with $n = IJ$ and $p = I$. Next, the $I$ vectors $(1, 0, \ldots, 0)'$, $(0, 1, 0, \ldots, 0)'$, $\ldots$, $(0, 0, \ldots, 0, 1)'$ are clearly independent, and any other row vector of $\mathbf{X}'$ is a linear combination of them. Thus rank $\mathbf{X}' = I\ (= p)$, that is, $\mathbf{X}'$ is of full rank. Then, by Theorem 2 of Chapter 16, the $\mu_i$, $i = 1, \ldots, I$ have uniquely determined LSE's which have all the properties mentioned in Theorem 5 of the same chapter. In order to determine their explicit expression, we observe that

$$\mathbf{S} = \mathbf{X}\mathbf{X}' = \begin{pmatrix} J & 0 & \cdots & 0 \\ 0 & J & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & J \end{pmatrix} = J\,\mathbf{I}_p \qquad \text{and} \qquad \mathbf{X}\mathbf{Y} = \Bigl( \sum_{j=1}^{J} Y_{1j},\ \sum_{j=1}^{J} Y_{2j},\ \ldots,\ \sum_{j=1}^{J} Y_{Ij} \Bigr)',$$

so that, by (9) of Chapter 16,

$$\hat{\boldsymbol{\beta}} = \mathbf{S}^{-1}\mathbf{X}\mathbf{Y} = \Bigl( \frac{1}{J}\sum_{j=1}^{J} Y_{1j},\ \frac{1}{J}\sum_{j=1}^{J} Y_{2j},\ \ldots,\ \frac{1}{J}\sum_{j=1}^{J} Y_{Ij} \Bigr)'.$$

Therefore the LSE's of the $\mu$'s are given by

$$\hat{\mu}_i = Y_{i.}, \quad \text{where } Y_{i.} = \frac{1}{J}\sum_{j=1}^{J} Y_{ij}, \qquad i = 1, \ldots, I.$$   (2)

Next, one has

$$\boldsymbol{\eta} = E\mathbf{Y} = (\mu_1, \ldots, \mu_1;\ \mu_2, \ldots, \mu_2;\ \ldots;\ \mu_I, \ldots, \mu_I)', \quad \text{each } \mu_i \text{ repeated } J \text{ times},$$

so that, under the hypothesis $H: \mu_1 = \cdots = \mu_I$ ($= \mu$, unspecified), $\boldsymbol{\eta} \in V_1$. That is, $r - q = 1$ and hence $q = r - 1 = p - 1 = I - 1$. Therefore, according to (31) in Chapter 16, the $F$ statistic for testing $H$ is given by

$$F = \frac{n - r}{q}\,\frac{S_c - S_C}{S_C} = \frac{I(J-1)}{I-1}\,\frac{S_c - S_C}{S_C}.$$   (3)

Now, under $H$, the model becomes $Y_{ij} = \mu + e_{ij}$, and the LSE of $\mu$ is obtained by differentiating with respect to $\mu$ the expression

$$\|\mathbf{Y} - \boldsymbol{\eta}_c\|^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - \mu)^2.$$

One then has the (unique) solution

$$\hat{\mu} = Y_{..}, \quad \text{where } Y_{..} = \frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J} Y_{ij}.$$   (4)

Therefore relations (28) and (29) in Chapter 16 give

$$S_C = \|\mathbf{Y} - \hat{\boldsymbol{\eta}}_C\|^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - \hat{\mu}_i)^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{i.})^2$$

and

$$S_c = \|\mathbf{Y} - \hat{\boldsymbol{\eta}}_c\|^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - \hat{\mu})^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{..})^2.$$

But for each fixed $i$,

$$\sum_{j=1}^{J} (Y_{ij} - Y_{i.})^2 = \sum_{j=1}^{J} Y_{ij}^2 - J\,Y_{i.}^2,$$

so that

$$S_C = SS_e, \quad \text{where } SS_e = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{i.})^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} Y_{ij}^2 - J\sum_{i=1}^{I} Y_{i.}^2.$$   (5)
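To see the matrix algebra above in concrete terms, the following numerical check is a sketch only (illustrative sizes, random data): it builds the $IJ \times I$ matrix denoted $\mathbf{X}'$ above, confirms that $\mathbf{X}\mathbf{X}' = J\,\mathbf{I}_p$, and verifies that the solution of the normal equations coincides with the vector of group means in (2).

```python
import numpy as np

rng = np.random.default_rng(1)
I, J = 3, 5
Y = rng.normal(70.0, 3.0, size=(I, J))       # any I x J data array will do

Xp = np.kron(np.eye(I), np.ones((J, 1)))     # the IJ x I matrix denoted X' above
y = Y.reshape(-1)                            # Y stacked row by row, as in the text

S = Xp.T @ Xp                                # = X X' in the chapter's notation
print(np.allclose(S, J * np.eye(I)))         # True: S = J * I_p

beta_hat = np.linalg.solve(S, Xp.T @ y)      # solution of the normal equations
print(np.allclose(beta_hat, Y.mean(axis=1))) # True: mu_i-hat = Y_i. as in (2)
```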
Likewise,

$$S_c = SS_T, \quad \text{where } SS_T = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{..})^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} Y_{ij}^2 - IJ\,Y_{..}^2,$$   (6)

so that, by means of (5) and (6), one has

$$S_c - S_C = J\sum_{i=1}^{I} Y_{i.}^2 - IJ\,Y_{..}^2 = J\Bigl(\sum_{i=1}^{I} Y_{i.}^2 - I\,Y_{..}^2\Bigr) = J\sum_{i=1}^{I} (Y_{i.} - Y_{..})^2,$$

since

$$Y_{..} = \frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J} Y_{ij} = \frac{1}{I}\sum_{i=1}^{I} Y_{i.}.$$

That is,

$$S_c - S_C = SS_H,$$   (7)

where

$$SS_H = J\sum_{i=1}^{I} (Y_{i.} - Y_{..})^2 = J\sum_{i=1}^{I} Y_{i.}^2 - IJ\,Y_{..}^2.$$

Therefore the $F$ statistic given in (3) becomes

$$F = \frac{I(J-1)}{I-1}\,\frac{SS_H}{SS_e} = \frac{MS_H}{MS_e},$$   (8)

where

$$MS_H = \frac{SS_H}{I-1}, \qquad MS_e = \frac{SS_e}{I(J-1)},$$

and $SS_H$ and $SS_e$ are given by (7) and (5), respectively. These expressions are also appropriate for actual calculations. Finally, according to Theorem 4 of Chapter 16, the LSE of $\sigma^2$ is given by

$$\tilde{\sigma}^2 = \frac{SS_e}{I(J-1)}.$$   (9)

Table 1  Analysis of Variance for One-Way Layout

  source of variance    sums of squares                                             degrees of freedom    mean squares
  between groups        $SS_H = J\sum_{i=1}^{I} (Y_{i.} - Y_{..})^2$                $I - 1$               $MS_H = SS_H/(I-1)$
  within groups         $SS_e = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{i.})^2$   $I(J-1)$              $MS_e = SS_e/[I(J-1)]$
  total                 $SS_T = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{..})^2$   $IJ - 1$

REMARK 1  From (5), (6) and (7) it follows that $SS_T = SS_H + SS_e$. Also, from (6) it follows that $SS_T$ stands for the sum of squares of the deviations of the $Y_{ij}$'s from the grand (sample) mean $Y_{..}$. Next, from (5) we have that, for each $i$, $\sum_{j=1}^{J}(Y_{ij} - Y_{i.})^2$ is the sum of squares of the deviations of $Y_{ij}$, $j = 1, \ldots, J$ within the $i$th group; for this reason, $SS_e$ is called the sum of squares within groups. On the other hand, from (7) we have that $SS_H$ represents the sum of squares of the deviations of the group means $Y_{i.}$ from the grand mean $Y_{..}$ (up to the factor $J$); for this reason, $SS_H$ is called the sum of squares between groups. Finally, $SS_T$ is called the total sum of squares for obvious reasons, and, as mentioned above, it splits into $SS_H$ and $SS_e$. Actually, the analysis of variance itself derives its name from this split of $SS_T$. Now, as follows from the discussion in Section 5 of Chapter 16, the quantities $SS_H$ and $SS_e$ are independently distributed, under $H$, as $\sigma^2\chi^2_{I-1}$ and $\sigma^2\chi^2_{I(J-1)}$, respectively. Then $SS_T$ is $\sigma^2\chi^2_{IJ-1}$ distributed, under $H$. We may summarize all relevant information in a table (Table 1), which is known as an Analysis of Variance Table.

EXAMPLE 3  For a numerical example, take $I = 3$, $J = 5$ and let

$Y_{11} = 82$   $Y_{21} = 61$   $Y_{31} = 78$
$Y_{12} = 83$   $Y_{22} = 62$   $Y_{32} = 72$
$Y_{13} = 75$   $Y_{23} = 67$   $Y_{33} = 74$
$Y_{14} = 79$   $Y_{24} = 65$   $Y_{34} = 75$
$Y_{15} = 78$   $Y_{25} = 64$   $Y_{35} = 72$

We then have

$$\hat{\mu}_1 = 79.4, \qquad \hat{\mu}_2 = 63.8, \qquad \hat{\mu}_3 = 74.2,$$

and $MS_H = 315.5392$, $MS_e = 7.4$, so that $F = 42.6404$. Thus for $\alpha = 0.05$, $F_{2,12;0.05} = 3.8853$ and the hypothesis $H: \mu_1 = \mu_2 = \mu_3$ is rejected. Of course, $\tilde{\sigma}^2 = MS_e = 7.4$.
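Example 3 can be checked directly from formulas (5), (7) and (8). The sketch below uses plain numpy rather than any canned ANOVA routine; the computed group means, mean squares and $F$ statistic should match the figures quoted above up to the rounding used there, and in particular lead to the same conclusion of rejecting $H$ at level 0.05.

```python
import numpy as np

# Data of Example 3: rows are the I = 3 groups, columns the J = 5 observations.
Y = np.array([[82, 83, 75, 79, 78],
              [61, 62, 67, 65, 64],
              [78, 72, 74, 75, 72]], dtype=float)
I, J = Y.shape

Yi = Y.mean(axis=1)                         # group means Y_i.  (the mu_i-hats)
Ybar = Y.mean()                             # grand mean Y_..

SS_H = J * np.sum((Yi - Ybar) ** 2)         # between-groups sum of squares, (7)
SS_e = np.sum((Y - Yi[:, None]) ** 2)       # within-groups sum of squares, (5)

MS_H = SS_H / (I - 1)
MS_e = SS_e / (I * (J - 1))                 # this is also sigma^2-tilde, see (9)
F = MS_H / MS_e

print(Yi, MS_H, MS_e, F)                    # compare with the values in Example 3
```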
Exercise

17.1.1  Apply the one-way layout analysis of variance to the data given in the table below.

      A       B       C
   10.0     9.1     9.2
   11.5    10.3     8.4
   11.7     9.4     9.4

17.2 Two-way Layout (Classification) with One Observation Per Cell

The model to be employed in this section will be introduced by an appropriate modification of Examples 1 and 2.

EXAMPLE 4  Referring to Example 1, consider the $I$ machines mentioned there and also $J$ workers from a pool of available workers. Each one of the $J$ workers is assigned to each one of the $I$ machines, which he runs for one day. Let $\mu_{ij}$ be the daily output of the $j$th worker when running the $i$th machine, and let $e_{ij}$ be his "error"; his actual daily output is then an r.v. $Y_{ij}$ such that $Y_{ij} = \mu_{ij} + e_{ij}$. At this point it is assumed that each $\mu_{ij}$ is equal to a certain quantity $\mu$, the grand mean, plus a contribution $\alpha_i$ due to the $i$th row ($i$th machine), called the $i$th row effect, plus a contribution $\beta_j$ due to the $j$th worker ($j$th column), called the $j$th column effect. It is further assumed that the $I$ row effects and also the $J$ column effects cancel each other out, in the sense that

$$\sum_{i=1}^{I} \alpha_i = \sum_{j=1}^{J} \beta_j = 0.$$

Finally, it is assumed, as is usually the case, that the errors $e_{ij}$, $i = 1, \ldots, I$; $j = 1, \ldots, J$ are independent $N(0, \sigma^2)$. Thus the assumed model is

$$Y_{ij} = \mu + \alpha_i + \beta_j + e_{ij}, \quad \text{where } \sum_{i=1}^{I} \alpha_i = \sum_{j=1}^{J} \beta_j = 0,$$   (10)

and the $e_{ij}$, $i = 1, \ldots, I\ (\geq 2)$; $j = 1, \ldots, J\ (\geq 2)$ are independent $N(0, \sigma^2)$.

EXAMPLE 5  Consider the $I \cdot J$ identical plots described in Example 2, and suppose that $J$ different varieties of a certain agricultural commodity are planted in each one of the $I$ rows, one variety in each plot. Then all $J$ plots in the $i$th row are treated by the $i$th of $I$ different kinds of fertilizers, and the yield of the $j$th variety of the commodity in question treated by the $i$th fertilizer is an r.v. $Y_{ij}$ which is assumed again to have the structure described in (10). Here the $i$th row effect is the contribution of the $i$th fertilizer and the $j$th column effect is the contribution of the $j$th variety of the commodity in question.

From the preceding two examples it follows that the outcome $Y_{ij}$ is affected by two factors: machines and workers in Example 4, and fertilizers and varieties of an agricultural commodity in Example 5. The $I$ objects (machines or fertilizers) and the $J$ objects (workers or varieties of an agricultural commodity) associated with these factors are also referred to as levels of the factors. The same interpretation and terminology is used in similar situations throughout this chapter.

In connection with model (10), there are the following three problems to be solved: estimation of $\mu$; $\alpha_i$, $i = 1, \ldots, I$; $\beta_j$, $j = 1, \ldots, J$; testing the hypotheses $H_A: \alpha_1 = \cdots = \alpha_I = 0$ (that is, there is no row effect) and $H_B: \beta_1 = \cdots = \beta_J = 0$ (that is, there is no column effect); and estimation of $\sigma^2$.

We first show that model (10) is a special case of the model described in (6) of Chapter 16. For this purpose, we set

$$\mathbf{Y} = (Y_{11}, \ldots, Y_{1J};\ Y_{21}, \ldots, Y_{2J};\ \ldots;\ Y_{I1}, \ldots, Y_{IJ})', \qquad \mathbf{e} = (e_{11}, \ldots, e_{1J};\ e_{21}, \ldots, e_{2J};\ \ldots;\ e_{I1}, \ldots, e_{IJ})', \qquad \boldsymbol{\beta} = (\mu;\ \alpha_1, \ldots, \alpha_I;\ \beta_1, \ldots, \beta_J)',$$

and we take $\mathbf{X}'$ to be the $IJ \times (I + J + 1)$ matrix whose row corresponding to the $(i, j)$th observation has a 1 in the first position (for $\mu$), a 1 in position $1 + i$ (for $\alpha_i$), a 1 in position $1 + I + j$ (for $\beta_j$), and 0's elsewhere. We then have

$$\mathbf{Y} = \mathbf{X}'\boldsymbol{\beta} + \mathbf{e}, \quad \text{with } n = IJ \text{ and } p = I + J + 1.$$

It can be shown (see also Exercise 17.2.1) that $\mathbf{X}'$ is not of full rank, but rank $\mathbf{X}' = r = I + J - 1$. However, because of the two independent restrictions

$$\sum_{i=1}^{I} \alpha_i = \sum_{j=1}^{J} \beta_j = 0$$

imposed on the parameters, the normal equations still have a unique solution, which is found by differentiation.
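The rank statement just made is easy to check numerically. The sketch below (illustrative sizes only, no data needed) assembles the $IJ \times (I + J + 1)$ matrix $\mathbf{X}'$ row by row, exactly as described, and confirms that its rank is $I + J - 1$ rather than $I + J + 1$.

```python
import numpy as np

I, J = 3, 4                                   # illustrative sizes only
p = 1 + I + J                                 # columns: mu, alpha_1..alpha_I, beta_1..beta_J

# Row of X' for observation (i, j): a 1 for mu, a 1 in the column of alpha_i,
# and a 1 in the column of beta_j; all other entries are 0.
rows = []
for i in range(I):
    for j in range(J):
        row = np.zeros(p)
        row[0] = 1.0                          # grand mean mu
        row[1 + i] = 1.0                      # row effect alpha_i
        row[1 + I + j] = 1.0                  # column effect beta_j
        rows.append(row)
Xp = np.array(rows)                           # the IJ x (I + J + 1) matrix X'

print(Xp.shape, np.linalg.matrix_rank(Xp))    # rank is I + J - 1, not I + J + 1
```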
In fact,

$$S(\mathbf{Y}, \boldsymbol{\beta}) = \|\mathbf{Y} - \mathbf{X}'\boldsymbol{\beta}\|^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - \mu - \alpha_i - \beta_j)^2,$$

and $\partial S(\mathbf{Y}, \boldsymbol{\beta})/\partial\mu = 0$ implies $\hat{\mu} = Y_{..}$, where $Y_{..}$ is again given by (4); $\partial S(\mathbf{Y}, \boldsymbol{\beta})/\partial\alpha_i = 0$ implies $\hat{\alpha}_i = Y_{i.} - Y_{..}$, where $Y_{i.}$ is given by (2); and $\partial S(\mathbf{Y}, \boldsymbol{\beta})/\partial\beta_j = 0$ implies $\hat{\beta}_j = Y_{.j} - Y_{..}$, where

$$Y_{.j} = \frac{1}{I}\sum_{i=1}^{I} Y_{ij}.$$

Summarizing these results, we have that the LSE's of $\mu$, $\alpha_i$ and $\beta_j$ are, respectively,

$$\hat{\mu} = Y_{..}, \qquad \hat{\alpha}_i = Y_{i.} - Y_{..}, \quad i = 1, \ldots, I, \qquad \hat{\beta}_j = Y_{.j} - Y_{..}, \quad j = 1, \ldots, J,$$   (11)

where $Y_{i.}$, $i = 1, \ldots, I$ are given by (2), $Y_{..}$ is given by (4) and

$$Y_{.j} = \frac{1}{I}\sum_{i=1}^{I} Y_{ij}, \qquad j = 1, \ldots, J.$$   (12)

Now we turn to the problem of testing hypotheses. We have

$$\boldsymbol{\eta} = E\mathbf{Y} = \mathbf{X}'\boldsymbol{\beta} \in V_r, \quad \text{where } r = I + J - 1 \text{ and } \boldsymbol{\beta} = (\mu;\ \alpha_1, \ldots, \alpha_I;\ \beta_1, \ldots, \beta_J)'.$$

Consider the hypothesis $H_A: \alpha_1 = \cdots = \alpha_I = 0$. Then, under $H_A$, $\boldsymbol{\eta} \in V_{r - q_A}$, where $r - q_A = J$, so that $q_A = I - 1$. Next, under $H_A$ again, $S(\mathbf{Y}, \boldsymbol{\beta})$ becomes

$$\sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - \mu - \beta_j)^2,$$

from which, by differentiation, we determine the LSE's of $\mu$ and $\beta_j$, to be denoted by $\hat{\mu}_A$ and $\hat{\beta}_{j,A}$, respectively. That is, one has

$$\hat{\mu}_A = \hat{\mu} = Y_{..}, \qquad \hat{\beta}_{j,A} = \hat{\beta}_j = Y_{.j} - Y_{..}, \quad j = 1, \ldots, J.$$   (13)

Therefore relations (28) and (29) in Chapter 16 give, by means of (11) and (12),

$$S_C = \|\mathbf{Y} - \hat{\boldsymbol{\eta}}_C\|^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - \hat{\mu} - \hat{\alpha}_i - \hat{\beta}_j)^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{i.} - Y_{.j} + Y_{..})^2$$

and

$$S_{c,A} = \|\mathbf{Y} - \hat{\boldsymbol{\eta}}_{c,A}\|^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - \hat{\mu}_A - \hat{\beta}_{j,A})^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{.j})^2.$$

Now $S_C$ can be rewritten as follows:

$$S_C = SS_e = \sum_{i=1}^{I}\sum_{j=1}^{J} \bigl[(Y_{ij} - Y_{.j}) - (Y_{i.} - Y_{..})\bigr]^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{.j})^2 - J\sum_{i=1}^{I} (Y_{i.} - Y_{..})^2,$$   (14)

because

$$\sum_{i=1}^{I}\sum_{j=1}^{J} (Y_{ij} - Y_{.j})(Y_{i.} - Y_{..}) = \sum_{i=1}^{I} (Y_{i.} - Y_{..})\sum_{j=1}^{J} (Y_{ij} - Y_{.j}) = J\sum_{i=1}^{I} (Y_{i.} - Y_{..})^2.$$

Therefore

$$S_{c,A} - S_C = SS_A, \quad \text{where } SS_A = J\sum_{i=1}^{I} \hat{\alpha}_i^2 = J\sum_{i=1}^{I} (Y_{i.} - Y_{..})^2 = J\sum_{i=1}^{I} Y_{i.}^2 - IJ\,Y_{..}^2.$$   (15)

It follows that for testing $H_A$, the $F$ statistic, to be denoted here by $F_A$, is given by

$$F_A = \frac{(I-1)(J-1)}{I-1}\,\frac{SS_A}{SS_e} = \frac{MS_A}{MS_e},$$   (16)

where

$$MS_A = \frac{SS_A}{I-1}, \qquad MS_e = \frac{SS_e}{(I-1)(J-1)},$$

and $SS_A$, $SS_e$ are given by (15) and (14), respectively. (However, for an expression of $SS_e$ to be used in actual calculations, see (20) below.)
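As in the one-way case, the estimates (11) and the statistic $F_A$ of (16) reduce to simple row and column averages. The following sketch (plain numpy; the data array is a random placeholder, not data from the text) computes $\hat{\mu}$, the $\hat{\alpha}_i$ and $\hat{\beta}_j$, then $SS_e$ and $SS_A$ as in (14) and (15), and finally $F_A$. Interchanging the roles of rows and columns gives the corresponding statistic for $H_B$.

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.normal(10.0, 2.0, size=(3, 4))       # placeholder I x J data, one observation per cell
I, J = Y.shape

Ybar = Y.mean()                              # mu-hat = Y_..               see (11)
alpha = Y.mean(axis=1) - Ybar                # alpha_i-hat = Y_i. - Y_..
beta = Y.mean(axis=0) - Ybar                 # beta_j-hat  = Y_.j - Y_..

resid = Y - Ybar - alpha[:, None] - beta[None, :]
SS_e = np.sum(resid ** 2)                    # (14): sum of (Y_ij - Y_i. - Y_.j + Y_..)^2
SS_A = J * np.sum(alpha ** 2)                # (15)

MS_A = SS_A / (I - 1)
MS_e = SS_e / ((I - 1) * (J - 1))
F_A = MS_A / MS_e                            # (16); compare with F_{I-1,(I-1)(J-1); alpha}
print(F_A)
```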
[...]

Apply the two-way layout with one observation per cell analysis of variance to the data given in the following table (take $\alpha = 0.05$).

  3   7   5   4
 -1   2   0   2
  1   2   4   0

17.3 Two-way Layout (Classification) with K (≥ 2) Observations Per Cell

In order to introduce the model of this section, consider Examples 4 and 5 and suppose that $K\ (\geq 2)$ observations are taken in each one of the [...]

[...] Properties of Multivariate Normal Distributions

In this section we establish some of the basic properties of a Multivariate Normal distribution.

THEOREM 3  Let $\mathbf{X} = (X_1, \ldots, X_k)'$ be $N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ (not necessarily non-singular). Then for any $m \times k$ constant matrix $\mathbf{A} = (\alpha_{ij})$, the random vector $\mathbf{Y}$ defined by $\mathbf{Y} = \mathbf{A}\mathbf{X}$ has the $m$-Variate Normal distribution with mean $\mathbf{A}\boldsymbol{\mu}$ and covariance matrix $\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}'$. In particular, if $m = 1$, [...]

[...] Exercise 17.1.1 and construct confidence intervals for all contrasts of the $\mu$'s (take $1 - \alpha = 0.95$).

Chapter 18
The Multivariate Normal Distribution

18.1 Introduction

In this chapter, we introduce the Multivariate Normal distribution and establish some of its fundamental properties. Also, certain estimation and independence testing problems closely connected with it are discussed. [...]

[...] $SS_A + SS_B + SS_{AB}$, where $SS_e$, $SS_A$, $SS_B$, $SS_{AB}$ and $SS_T$ are given by (27), (28), (31), (33) and (34), respectively.

17.3.4  Apply the two-way layout with two observations per cell analysis of variance to the data given in the table below (take $\alpha = 0.05$).

 110   128    48   123    19
  95   117    60   138    94
 214   183   115   114   129
 217   187   127   156   125
 208   183   130   225   114
 119   195   164   194   109

17.4 A Multicomparison Method

Consider again [...]

[...] since

$$\Bigl|\frac{\boldsymbol{\Sigma}}{1 - 2it}\Bigr| = \frac{|\boldsymbol{\Sigma}|}{(1 - 2it)^{k}}.$$

Now the integrand in the last integral above can be looked upon as the p.d.f. of a $k$-Variate Normal with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}/(1 - 2it)$. Hence the integral is equal to one and we conclude that $\varphi_Q(t) = (1 - 2it)^{-k/2}$, which is the ch.f. of $\chi^2_k$. ▲ Notice that [...]

[...] corollary is that noncorrelation plus normality implies independence, since independence implies noncorrelation in any case. It is also to be noted that noncorrelation without normality need not imply independence, as has been seen elsewhere. (REMARK 3)

Exercises

18.1.1  Use Definition 1 herein in order to conclude that the LSE $\hat{\boldsymbol{\beta}}$ of $\boldsymbol{\beta}$ in (9) of Chapter 16 has the $n$-Variate Normal distribution with mean [...]

[...] means $\mu_{ij}$, $i = 1, \ldots, I$; $j = 1, \ldots, J$ need not be additive any longer. In other words, except for the grand mean $\mu$ and the row and column effects $\alpha_i$ and $\beta_j$, respectively, which in the previous section added up to make $\mu_{ij}$, we may now allow interactions $\gamma_{ij}$ among the various factors involved, such as fertilizers and varieties of agricultural commodities, or workers and machines. It is not unreasonable [...]

[...] is a linear combination of the $X$'s, $Y = \boldsymbol{\alpha}'\mathbf{X}$, say, and $Y$ has the Univariate Normal distribution with mean $\boldsymbol{\alpha}'\boldsymbol{\mu}$ and variance $\boldsymbol{\alpha}'\boldsymbol{\Sigma}\boldsymbol{\alpha}$.

PROOF  For $\mathbf{t} \in \mathbb{R}^m$, we have

$$\varphi_{\mathbf{Y}}(\mathbf{t}) = E\exp(i\mathbf{t}'\mathbf{Y}) = E\exp(i\mathbf{t}'\mathbf{A}\mathbf{X}) = E\exp\bigl[i(\mathbf{A}'\mathbf{t})'\mathbf{X}\bigr] = \varphi_{\mathbf{X}}(\mathbf{A}'\mathbf{t}),$$

so that, by means of (6), we have

$$\varphi_{\mathbf{Y}}(\mathbf{t}) = \exp\Bigl[i(\mathbf{A}'\mathbf{t})'\boldsymbol{\mu} - \tfrac{1}{2}(\mathbf{A}'\mathbf{t})'\boldsymbol{\Sigma}(\mathbf{A}'\mathbf{t})\Bigr] = \exp\Bigl[i\mathbf{t}'(\mathbf{A}\boldsymbol{\mu}) - \tfrac{1}{2}\mathbf{t}'(\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}')\mathbf{t}\Bigr],$$

and this last [...]

[...] $(X_1, \ldots, X_k)'$, which has the $k$-Variate Normal distribution with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$, is given by

$$\varphi_{\mathbf{X}}(\mathbf{t}) = \exp\Bigl(i\mathbf{t}'\boldsymbol{\mu} - \tfrac{1}{2}\mathbf{t}'\boldsymbol{\Sigma}\mathbf{t}\Bigr).$$   (6)

From (6) it follows that $\varphi_{\mathbf{X}}$, and therefore the distribution of $\mathbf{X}$, is completely determined by means of its mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$, a fact analogous to that of a Univariate Normal distribution. This fact justifies the following notation: [...]

[...] have $F_{1,18;0.05} = 4.4139$ and $F_{2,18;0.05} = 3.5546$; we accept $H_A$, reject $H_B$ and accept $H_{AB}$. Finally, we have $\tilde{\sigma}^2 = 183.0230$.

The models analyzed in the previous three sections describe three experimental designs often used in practice. There are many others as well. Some of them are obtained from the ones just described by allowing different numbers of observations per cell, by increasing the number of factors, [...]
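One of the Chapter 18 excerpts above (Theorem 3) states that if $\mathbf{X}$ is $N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, then $\mathbf{Y} = \mathbf{A}\mathbf{X}$ is $m$-Variate Normal with mean $\mathbf{A}\boldsymbol{\mu}$ and covariance matrix $\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}'$. The Monte Carlo sketch below (all of $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}$ and $\mathbf{A}$ are made-up illustrations) makes this concrete by comparing the sample mean and sample covariance of simulated values of $\mathbf{Y}$ with $\mathbf{A}\boldsymbol{\mu}$ and $\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}'$.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([1.0, -2.0, 0.5])                      # illustrative mean of X
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])                  # illustrative covariance of X
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, -1.0]])                     # an m x k matrix, m = 2, k = 3

X = rng.multivariate_normal(mu, Sigma, size=200_000) # rows are draws of X ~ N(mu, Sigma)
Y = X @ A.T                                          # each row is Y = A X

print(Y.mean(axis=0), A @ mu)                        # sample mean of Y vs. A mu
print(np.cov(Y, rowvar=False), A @ Sigma @ A.T)      # sample covariance vs. A Sigma A'
```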
