Modeling of Combustion Systems: A Practical Approach

3 Experimental Design and Analysis

Chapter Overview

How do fuel composition, excess oxygen, and furnace temperature fluctuations affect some important response such as process conversion rate or emissions? What factors affect NOx and CO? Are burners in similar process units behaving differently or alike? What is the statistical uncertainty that the proposed model is correct? To score models for these kinds of investigations, we need some understanding of statistics.

This chapter begins with some elementary statistics and distributions, and progresses to its seminal tool for separating information from noise — the analysis of variance. Next, the chapter covers factorial designs — the foundation of statistically cognizant experiments. We find that simple rules produce fractional factorials and reduce the number of required experiments. We show that by modifying the alias structure, we can clear factors of certain biases. We then discuss the importance of replication in obtaining an independent estimate of statistical error, and we show how blocking can reduce it further. Further discussion shows how orthogonality eliminates mutual factor biases. The chapter moves on to consider how to mute certain adulterating effects, including hysteresis and lurking factors, and how to validate analytical integrity with residuals plots. For looking at many factors with few experiments, we introduce screening designs such as simplex and highly fractionated designs. The reader then learns how random and fixed effects differ, and how they affect the analysis. To show how one may assess curvature in factor space, a discussion of second-order designs follows. The chapter concludes by considering the sequential assembly of the various experimental designs.
© 2006 by Taylor & Francis Group, LLC

3.1 Some Statistics

A statistic is a descriptive measure that summarizes an important property of a collection of data. For example, consider the group of numbers in braces: {1, 10, 100}. Though there are only three data values, we could define an unlimited number of statistics related to them. Here are a few:

• The maximum, 100, is a statistic because it summarizes a property of the data, namely, that all data are equal to or below a certain value, 100.
• The minimum, 1, is also a statistic, defining the magnitude that all data meet or exceed.
• The half range, 49.5, that is, (100 – 1)/2, is a statistic. It is a measure of the dispersion of the data.
• The count, 3, tells us the number of data points. If the data were repeated measures of the same quantity differing only by measurement error, the count would relate to a measure of certainty. Intuitively, we would expect that the more replicates we measure, the more certain we become of the true value.
• The median, 10, is the middle value of an ordered data set. It measures central tendency. Presuming the data comprise replicate observations, one intuitively expects the true value to be closer to the middle than the extremes of the observations.

There are ever more statistics, but let us pause here to answer some interesting questions:

• Can we really describe three observations with five or more statistics? Yes.
• How can we have five statistics for only three observations? Not all of the statistics are independent. In fact, no more than three can be independent if we are deriving our statistics from these particular data — the number of independent statistics cannot exceed the number of data points. The reason for so many statistics is that we have so many questions that we want to ask of our data; for example:
  – What limit represents a safe upper bound for a NOx prediction?
  – How likely are we to exceed this upper limit?
  – What confidence do we have that our predicted value represents the true value?
  – How precisely does our model fit the data?
  – Is a particular point within or beyond the pale of the data?
  – How large a margin should we state if we want 99.9% of all future data to be below it?

For every important question, it seems someone or several have invented one or more statistics. In this chapter, we shall describe important statistics that relate to modeling in general and combustion modeling in particular.

3.1.1 Statistics and Distributions

Suppose we wish to measure a response (y) contaminated by a randomly distributed error term (e). We would like to separate the information (μ) from the noise (e). One option would be to repeatedly measure the response at the same condition and average the results. In summation notation we have

    y = \mu + e, \quad \text{or} \quad \sum y = \sum \mu + \sum e = n\mu + \sum e,

where n is the number of replicate measurements. We may divide by the total number of measurements to give

    \frac{\sum y}{n} = \mu + \frac{\sum e}{n}

But in Chapter 1 we defined this quantity as the arithmetic mean (Equation 1.55). Here we will designate it with an overbar. That is,

    \bar{y} = \frac{\sum y}{n}    (3.1)

Now if e (the error vector) were truly random, we would expect the long-run average to be zero. This occurs when n → ∞. We refer to long-run results as expected values and designate them with the expectation operator, E( ). We will refer to the true value for y as μ. Therefore, E(y) = μ and ȳ is an unbiased estimator for μ. Intuitively, we would expect all the y values to distribute about the true value, differing only by e. Since our best estimate of μ from the data is ȳ, then ȳ is a measure of central tendency — the inclination of the average of repeated measures to converge to the true value.

The mean is an important statistic — one might dare say the most important statistic — but it is insufficient to characterize certain aspects of some populations. For example, suppose the average height of 100 adult male humans is 1.80 m (5.9 ft).
How many will be 0.01 m (<1 in.) tall? How many will be 3.59 m (11.8 ft) tall? We know from experience that there are no adult male humans at either of these extremes. Yet the average of these two numbers is 1.80 m. Therefore, as important as central tendency statistics are, we are also interested in other measures. That is, we would also like some measure of dispersion. Dispersion indicates how values differ from the mean. One statistic that quantifies dispersion is the variance. Let us define the variance (V) of a sample (y) as follows:

    V(y) = \sum \left( y - \bar{y} \right)^2    (3.2)

Then the following equation gives the mean variance:

    \bar{V}(y) = \frac{\sum \left( y - \bar{y} \right)^2}{n}    (3.3)

The long-run average of the mean variance is

    \lim_{n \to \infty} \bar{V}(y) = \lim_{n \to \infty} \frac{\sum \left( y - \bar{y} \right)^2}{n} = \sigma^2    (3.4)

However, if we are using the sample mean derived from a finite data set to estimate the variance, V̄(y) tends to underestimate σ² unless n is large. The reason is that we have already used the data to determine ȳ. Therefore, ȳ plus n – 1 data points exactly determine the nth data value; i.e., V̄(y) is not a completely independent measure of dispersion. So the proper denominator to estimate σ² in Equation 3.3 is n – 1, not n. In other words, we use up (lose) one degree of freedom when we use a finite data set to estimate ȳ. Thus, n – 1 are the degrees of freedom for the estimated variance. We shall use the symbol s² to denote this quantity:

    s^2 = \frac{\sum \left( y - \bar{y} \right)^2}{n - 1}    (3.5a)

This is also called the sample-adjusted variance. Obviously, Equation 3.5a and Equation 3.3 become identical as n → ∞. One problem with Equation 3.5a is that the units differ from y because the variance uses the squared value of the response. For this reason, we define the sample standard deviation as

    s = \sqrt{\frac{\sum \left( y - \bar{y} \right)^2}{n - 1}}    (3.6)

It has the same units as the response, and it is the usual estimator for the true standard deviation, σ.
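The distinction between Equation 3.3 (divide by n) and Equation 3.5a (divide by n – 1) is easy to check numerically. A minimal Python sketch with made-up replicate data (the values are illustrative, not from the text); Python's statistics module follows the same two conventions:

```python
import statistics

# Hypothetical replicate observations of one response (illustrative only)
y = [4.1, 3.8, 4.4, 4.0, 3.7]
n = len(y)

ybar = sum(y) / n                       # Equation 3.1: the sample mean
ss = sum((yi - ybar) ** 2 for yi in y)  # sum of squared deviations

v_biased = ss / n   # Equation 3.3: divides by n; underestimates sigma^2
s2 = ss / (n - 1)   # Equation 3.5a: n - 1 degrees of freedom
s = s2 ** 0.5       # Equation 3.6: sample standard deviation

# Python's statistics module uses the same two denominators:
assert abs(v_biased - statistics.pvariance(y)) < 1e-12
assert abs(s2 - statistics.variance(y)) < 1e-12
assert abs(s - statistics.stdev(y)) < 1e-12
```

statistics.variance and statistics.stdev apply the n – 1 denominator of Equations 3.5a and 3.6, while statistics.pvariance is the divide-by-n form of Equation 3.3.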
Now s will tell us something about the dispersion of possible values about the true mean. To find out what, we need to know something about how values distribute themselves in the long run.

3.1.2 The Normal, Chi-Squared (χ²), F, and t Distributions

To develop the idea of distributions further, let us consider Figure 3.1. Galton's board looks a bit like a pinball machine. It comprises a vertical slate with pegs arranged in a triangle pattern that widens from top to bottom. A ball dropped onto the topmost peg may fall either to the left or to the right, whereupon it strikes the next peg and again may fall to either the left or the right. The ball continues in this fashion until it ultimately drops into one of the bins below. What is the probability that a dropped ball will fill any particular bin? To answer the question, we begin by calculating the distribution of the possibilities.

3.1.2.1 The Normal Distribution

Most of us are familiar with the normal distribution — the so-called bell-shaped curve — perhaps it is more nearly cymbal shaped. At any rate, Equation 3.7 gives the mathematical representation:

    N(y) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2} \left( \frac{y - \mu}{\sigma} \right)^2}    (3.7)

Here, N(y) is the frequency of y and e is Euler's number, e = 2.71828…. The probability of finding y between two limits, –∞ < a and b < +∞, is given by

    P(y) = \int_a^b \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2} \left( \frac{y - \mu}{\sigma} \right)^2} dy; \quad 0 < P(y) < 1    (3.8)

FIGURE 3.1 The Galton board. The Galton board comprises a vertical arrangement of pegs such that a ball may take one of two possible paths at each peg, finally arriving at a bin below. The numbers between the pegs show the number of paths leading through each space. The numbers follow Pascal's triangle (superimposed numbers). The total number of paths for this Galton board sums to 256. Thus, for the ball shown arriving at the bin, the probability is 56/256 = 21.9%.
One of 56 possible paths leading to that bin is shown (dotted line). The distribution approaches the normal probability distribution as the number of rows in the Galton board increases.

The statistics μ and σ² completely characterize the normal distribution. One may standardize the normal distribution using the coding z = (y – μ)/σ:

    N(z) = \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}}    (3.9)

Figure 3.2 depicts the above equation. The normal distribution has the following properties:

• It is symmetrical about its center at z = 0.
• It has an inflection point where the curve changes from concave down to concave up (at z = ±1).
• The area under the curve sums to unity.

3.1.2.2 Probability Distribution for Galton's Board

Galton's board is a very good simulator of random error even though Newtonian physics dictate the ball's motion. Yet, we have no way of predicting what bin the ball will fall into on any given trial because very small variations affect the path of the ball. Such variations include:

• The elasticity and roundness of the ball's surface
• The elasticity, angle, and roundness of each peg
• The mutual interactions among balls

At each peg, the distribution is a binary one: the ball will fall either to the left or to the right. In no way can we consider this a normal distribution. It is an equiprobable binary distribution. Notwithstanding, statistical considerations allow us to do the following:

• Calculate the ultimate distribution of the balls into the slots
• Calculate the probability of any given ball falling into a particular slot
• Show that the ultimate distribution is a normal probability distribution

FIGURE 3.2 The normal distribution.
The so-called bell-shaped curve has a maximum at z = 0, y = 1/√(2π) and points of inflection at z = ±1, y = 1/√(2πe). The area under the curve sums to unity.

To derive the probability distribution for Galton's board, we proceed as follows. First, we count the total number of paths through each space. At the first peg, we have one path to the left and one path to the right. So the possible paths from left to right are distributed as {1, 1}. At the second row of pegs, we may take one path to the left and fall outside the far left peg. But if the ball jumps left and then right, it will fall between the two pegs on the second row. Likewise, if the ball falls to the right of the first peg and to the left of the second peg, it will also fall between the two pegs of the second row; therefore, there are two paths leading between the two pegs of the second row. Finally, if the ball takes a right jump at the first peg and then a right jump at the second peg, it will fall to the right of the right peg. Therefore, the number of paths from left to right at this level is {1, 2, 1}. Now the total number of paths between any two pegs will be the sum of the paths communicating with it. For Galton's board there are two such paths overhead and to the left and right. Thus, the distribution of paths for the next row of pegs is {1, 3, 3, 1}. We may continue in this fashion all the way down the board.

3.1.2.3 Pascal's Triangle

We know the pattern {1}, {1 1}, {1 2 1}, {1 3 3 1} … as Pascal's triangle (Figure 3.3). Pascal's triangle is a numerical triangle having outside edges of 1.

FIGURE 3.3 Pascal's triangle. Each term is calculated by adding the two terms above it. In some versions, the second row of ones (1 1) is omitted, but we include it here for consistency. Horizontal rows (f) are numbered starting with zero at top and incrementing by 1.
Entries (k) in each row are numbered starting from 0 at left and incrementing by 1. Thus, the coordinates (f, k) = (4, 2) correspond to the value 6. f also indicates the number of factors in a factorial design discussed presently. The sum of any horizontal row equals the number of terms in the saturated factorial (2^f). k indicates the overall order of the term. One may calculate the value of entry k in row f directly using the formula f!/[k!(f – k)!], e.g., 4!/[2!(4 – 2)!] = 6.

The sum of the two numbers immediately above forms each lower entry. The sum of the numbers in a horizontal row is always

    n = 2^f    (3.10)

where f is the number of the row starting from top down; all counting for rows and entries begins with zero, i.e., 0, 1, 2, …. Equation 3.11 gives the kth entry in row f directly:

    m = \frac{f!}{k! \left( f - k \right)!}    (3.11)

where m is the number contained in the kth entry of the fth row. For reference, we have superimposed Pascal's triangle onto Galton's board in Figure 3.1. Each number represents the number of possible paths traversing through the interstice. As shown, the board has eight rows of pegs. At the bottom, we have nine slots and the distribution of paths is {1, 8, 28, 56, 70, 56, 28, 8, 1}. The total number of paths is 1 + 8 + 28 + 56 + 70 + 56 + 28 + 8 + 1 = 256 = 2⁸. So, the probabilities for a ball falling into any given slot from left to right are 1/256, 8/256, 28/256, 56/256, 70/256, 56/256, 28/256, 8/256, and 1/256, whose fractions sum to 1. This is a binomial frequency distribution. We may find this directly by the ratio of Equations 3.10 and 3.11:

    B(f, k) = \frac{1}{2^f} \cdot \frac{f!}{k! \left( f - k \right)!}    (3.12)

where B(f, k) is the probability of the ball finding its way to the kth interstice (counting from zero) under the fth peg.
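Equations 3.10 to 3.12 are easy to verify numerically for the eight-row board of Figure 3.1; Python's math.comb computes the f!/[k!(f – k)!] of Equation 3.11 directly:

```python
from math import comb  # comb(f, k) = f! / [k! (f - k)!], Equation 3.11

f = 8  # rows of pegs in the Galton board of Figure 3.1

counts = [comb(f, k) for k in range(f + 1)]
print(counts)       # [1, 8, 28, 56, 70, 56, 28, 8, 1]
print(sum(counts))  # 256 total paths, i.e., 2**8 (Equation 3.10)

# Bin probabilities B(f, k), Equation 3.12
probs = [c / 2 ** f for c in counts]
print(round(probs[3], 4))  # 0.2188, the 56/256 = 21.9% bin of Figure 3.1
assert abs(sum(probs) - 1.0) < 1e-12  # the probabilities sum to unity
```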
For reference, we have superimposed a bar graph in Figure 3.1 for each bin. The bar is proportional to the probability of a ball finding its way to that particular bin. The distribution approaches the normal probability distribution as f → ∞. But even after several rows, the resemblance to Equation 3.7 is unmistakable. In fact,

    \lim_{f \to \infty} \left[ \frac{1}{2^f} \cdot \frac{f!}{k! \left( f - k \right)!} \right] = N(\mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2} \left( \frac{y - \mu}{\sigma} \right)^2}    (3.13)

where μ = f/2, σ² = f/4, and y = k. This is part of an even broader concept known as the central limit theorem: as the number of independent and identically distributed random variables increases, the aggregate distribution approaches the normal probability distribution. With the following substitution,

    z = \frac{y - \mu}{\sigma}    (3.14)

and letting μ = 0 and σ = 1, Equation 3.13 reduces to Equation 3.9:

    N(z) = \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}}    (3.9)

We call Equation 3.9 the probability density function. Then the cumulative probability function for –a < z < a is

    P \left[ N(z) \right] = \int_{-a}^{a} \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}} \, dz    (3.15)

We call the variable, z, the standard unit variate, and when the limits of the integration are from –∞ to +∞, the integral attains unity.

Equation 3.15 implies a two-tail test because we are asking for the probability for z being between –a and a. If we were only interested in P[N(z)] being greater than –a or less than a, we would only be interested in one tail of the distribution. Since the distribution is symmetrical, the probability of the one-tailed test is exactly half that of the two-tailed test. Most computer spreadsheets have functions to calculate this. Excel™ has several related functions. The function normdist(x,m,s,TRUE) evaluates the cumulative form of Equation 3.8 from –∞ to x, where x is a particular value, m the mean, and s the standard deviation. The function normdist(x,m,s,FALSE) evaluates the probability density (Equation 3.7) at x rather than the cumulative probability.
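Outside of spreadsheets, the same quantities come from the standard normal cumulative function Φ(z), which relates to the error function by Φ(z) = ½[1 + erf(z/√2)]. A small Python sketch of the one- and two-tailed probabilities:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative probability from -infinity to z."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_tail(a):
    """Two-tailed probability of Equation 3.15: P(-a < z < a)."""
    return phi(a) - phi(-a)

print(round(phi(1.96), 4))       # 0.975 (one-sided cumulative)
print(round(two_tail(1.96), 4))  # 0.95  (the -a..a integral)
```

The one-tailed and two-tailed results for the same a differ exactly as the text describes: the area outside one tail is half the area outside both.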
The function normsdist(z) — note the s in the middle of this function name — evaluates the cumulative probability of the standard normal distribution from –∞ to z. For example,

    \text{normsdist}(1.96) = \int_{-\infty}^{1.96} \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}} \, dz \approx 0.975

Many statistical tests are strictly valid only for normally distributed errors. However, according to the central limit theorem, even if the parent distribution is not normally distributed, the accumulation of several levels of randomly distributed deviations will tend toward a normal distribution. This was the case with Galton's board. The parent distribution was binary (highly nonnormal), yet the final distribution approached the normal distribution quite closely. So we expect estimates of the mean to distribute around the arithmetic mean, approaching the normal distribution. In fact, according to Equation 3.13, μ and σ completely determine the shape of the normal probability curve. Thus, from these two statistics alone, we can derive any other property or statistic for normally distributed data.

If data do not distribute normally, often one can transform them to such. For example, emissions data such as CO and NOx are always nonnegative, and therefore they do not distribute normally. However, the logarithms of these quantities are normally distributed (e.g., ln(NOx)). As we shall see, this will permit us to estimate:

• How likely we are to exceed some upper limit
• What limit represents a safe upper bound for a prediction
• What confidence we have that our predicted value represents the true value

3.1.2.4 The Chi-Squared Distribution

Another important distribution is the distribution of variance. The variance will never be negative because variance is a squared quantity. Thus, variance cannot distribute normally. In fact, it distributes as a chi-squared distribution.
Knowing something about this distribution will allow us to develop a list of additional statistics related to the model, such as:

• The goodness of fit
• The confidence that a particular factor belongs in the model
• The probability that we can accurately predict future values

The chi-squared distribution has the following form:

    \chi^2(n, z) = \frac{z^{\frac{n-2}{2}} e^{-\frac{z}{2}}}{2^{\frac{n}{2}} \, \Gamma \left( \frac{n}{2} \right)}    (3.16)

where n is the degrees of freedom (an integer); z is the standard variate, defined in Equation 3.14; and Γ(n) is the gamma function, defined as

    \Gamma(n) = \int_0^\infty t^{n-1} e^{-t} \, dt

Excel has several related functions. The function gammaln(z) will return the natural log of the gamma function for positive arguments of z. To obtain the gamma function itself, one uses exp(gammaln(z)). Equation 3.17 gives the cumulative probability function for the chi-squared distribution:

    P \left[ \chi^2(z, n) \right] = \int_0^a \frac{z^{\frac{n-2}{2}} e^{-\frac{z}{2}}}{2^{\frac{n}{2}} \, \Gamma \left( \frac{n}{2} \right)} \, dz    (3.17)

[…] hierarchical strategies may allow us to reduce the search,* this will require dedicated software. […]

Example 3.5 Results for 2³ Factorial Data Set

Problem statement: Calculate the value of the coefficients for the factorial data set given in Table 3.8. The data show the dependence of NOx on excess oxygen (O2), air preheat temperature (APH), and bridgewall temperature (BWT) for a particular burner and furnace. […]

[…] one-factor-at-a-time approach, with better statistical properties (such as better estimates for the factor's effect on the response). For example, let us contrast classical and factorial designs for two factors with some hypothetical data. Table 3.7 gives the factor patterns.

TABLE 3.7 Contrast of Classical and Factorial Designs. (Columns: Classical Design ξ1, ξ2, y; Factorial Design x1, x2, y, with the factorial factors at coded – and + levels. The numeric entries of the table did not survive extraction.)
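Table 3.7's entries are lost here, but the property the factorial pattern relies on, orthogonality of the coded columns, can be sketched for two factors. The runs below are the standard 2² pattern, not the book's table values:

```python
# Standard 2^2 coded factorial: every combination of low (-1) and
# high (+1) for both factors. Illustrative of the pattern only;
# Table 3.7's actual entries are not in this excerpt.
factorial = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]

# Orthogonality: the x1 and x2 columns have zero dot product, so each
# factor's effect is estimated free of bias from the other.
dot_x1_x2 = sum(x1 * x2 for x1, x2 in factorial)
assert dot_x1_x2 == 0

# The x1*x2 interaction column is likewise orthogonal to both factors.
x12 = [x1 * x2 for x1, x2 in factorial]
assert sum(x12[i] * factorial[i][0] for i in range(4)) == 0
assert sum(x12[i] * factorial[i][1] for i in range(4)) == 0

# By contrast, a classical one-factor-at-a-time plan such as
# (0,0), (1,0), (0,1) never visits the (1,1) corner, so it cannot
# detect an x1*x2 interaction at all.
classical = [(0, 0), (1, 0), (0, 1)]
```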
[…] factorial approach. The investigator could not come to the proper conclusions because his experimental strategy was flawed. His analysis of the data is correct as far as it goes, but the distorted factor space compromised the results.

3.3.4.1 Statistical Properties of Classical Experimentation

Generally, classical experimental strategies have poor statistical properties for the following reasons:

• The information is concentrated along a few axes rather than spread over the entire factor space. We will always have more certainty of information near the design points or for interpolations among them, compared to distant and extrapolated regions. Box and Draper present an information function for gauging the certainty of estimating a […]

[…] function TDIST(z,n,2) gives the two-tailed test of Equation 3.26.

3.2 The Analysis of Variance (ANOVA)

The F distribution allows us to estimate probabilities for ratios of variances. We use it in an important technique known as the analysis of variance (ANOVA). ANOVA is one of the most important concepts in statistical experimental design (SED). It is based on an amazing identity:

    \sum \left( y - \bar{y} \right)^2 \equiv \sum \left( \hat{y} - \bar{y} \right)^2 + \sum \left( y - \hat{y} \right)^2, \quad \text{i.e., } SST \equiv SSM + SSR

[…] effect. Statistical programs will give the P value directly and obviate the need for the table. Spreadsheets will also do the same. In Excel, the command FDIST(F,m,n) gives the P value for F₁₋ₚ. Let us consider an example.

Example 3.1 ANOVA for a Single-Factor Investigation

Problem statement: Derive the ANOVA for […]

3.3.4.2 How Factorial Designs Estimate Coefficients

At this point, it may seem like a minor miracle that one can vary several factors at once and come to any conclusions, let alone sound ones. If all the factors vary at the same time, how can one know which factor or factors have changed the response?
To see, let us compare the classical and factorial designs and note their similarities rather than their […]

Because the coded factorial columns are orthogonal, XᵀX is diagonal (here 8I for the eight-run 2³ design), so the normal equations

    \begin{pmatrix} 8 & & & \\ & 8 & & \\ & & 8 & \\ & & & 8 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix} = X^T y

solve immediately to

    \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 3.83 \\ 1.26 \\ 0.33 \\ 0.63 \end{pmatrix}

From the ANOVA, we may also derive a statistic to measure overall goodness of fit. We shall call it the coefficient of determination and represent it with the symbol r². It has the following […]

[…] (overall) term (k = 2). Since the factorial has only high and low values of each factor, no term may contain factors having an individual order above 1. Therefore, factorial terms that overall are second order are of the form xⱼxₖ, terms that overall are third order have the form xₕxⱼxₖ, and so forth. If we want to know the number of third-order terms for the 2⁵ factorial design, we can use Equation 3.11 […]

[…] the variance over and above the mean; the residual variance (SSR) is the total variance (SST) minus the model variance (SSR = SST – SSM). If we add these two contributions (SSM + SSR), we obtain the total variance (SST = SSM + SSR) — the variance of the actual data over and above the mean. Perhaps vector notation is more straightforward and easier to remember:

    SSM = \hat{y}^T \hat{y} - \bar{y}^T \bar{y} \quad \text{(model – mean)}    (3.30)
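The normal-equation solution above is easy to reproduce. A minimal sketch for the 2³ design of Example 3.5, with responses generated from the quoted coefficients themselves (hypothetical stand-ins, since Table 3.8's data are not in this excerpt):

```python
import itertools

# 2^3 coded factorial: one row per run, columns 1 (intercept),
# x1 (O2), x2 (APH), x3 (BWT), each factor at -1/+1.
rows = [(1, x1, x2, x3)
        for x1, x2, x3 in itertools.product((-1, 1), repeat=3)]

# Coefficients quoted in the solution above (a0, a1, a2, a3).
a = (3.83, 1.26, 0.33, 0.63)

# Hypothetical noise-free responses generated from the model itself,
# purely to illustrate the estimation step.
y = [sum(c * x for c, x in zip(a, row)) for row in rows]

# Orthogonal coded columns give X^T X = 8 I, so the normal equations
# reduce to a_j = (column j of X) . y / 8.
a_hat = [sum(row[j] * yi for row, yi in zip(rows, y)) / 8
         for j in range(4)]
print(a_hat)  # ≈ [3.83, 1.26, 0.33, 0.63]
```

Because XᵀX is diagonal, each coefficient comes from a single dot product; no factor's estimate is contaminated by the others, which is the point of varying all factors at once in an orthogonal design.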

Contents

• Table of Contents
• Chapter 3: Experimental Design and Analysis
  • Chapter Overview
  • 3.1 Some Statistics
    • 3.1.1 Statistics and Distributions
    • 3.1.2 The Normal, Chi-Squared (χ²), F, and t Distributions
      • 3.1.2.1 The Normal Distribution
      • 3.1.2.2 Probability Distribution for Galton's Board
      • 3.1.2.3 Pascal's Triangle
      • 3.1.2.4 The Chi-Squared Distribution
      • 3.1.2.5 The F Distribution
      • 3.1.2.6 The t Distribution
  • 3.2 The Analysis of Variance (ANOVA)
    • 3.2.1 Use of the F Distribution
  • 3.3 Two-Level Factorial Designs
    • 3.3.1 ANOVA for Several Model Effects
    • 3.3.2 General Features of Factorial Designs
    • 3.3.3 Construction Details of the Two-Level Factorial
    • 3.3.4 Contrast of Factorial and Classical Experimentation
      • 3.3.4.1 Statistical Properties of Classical Experimentation
      • 3.3.4.2 How Factorial Designs Estimate Coefficients
      • 3.3.4.3 The Sneaky Farmer
    • 3.3.5 Interpretation of the Coefficients
    • 3.3.6 Using Higher-Order Effects to Estimate Experimental Error
      • 3.3.6.1 Normal Probability Plots for Estimating Residual Effects
  • 3.4 Correspondence of Factor Space and Equation Form
  • 3.5 Fractional Factorials
    • 3.5.1 The Half Fraction
Tài liệu cùng người dùng

  • Đang cập nhật ...

Tài liệu liên quan