Prediction of genetic merit from data on binary and quantitative variates with an application to calving difficulty, birth weight and pelvic opening

J.L. FOULLEY, D. GIANOLA*, R. THOMPSON**

I.N.R.A., Station de Génétique quantitative et appliquée, Centre de Recherches zootechniques, F 78350 Jouy-en-Josas, France
* Department of Animal Science, University of Illinois, Urbana, Illinois 61801, U.S.A.
** A.R.C. Unit of Statistics, University of Edinburgh, Mayfield Road, Edinburgh EH9 3JZ, Scotland

Summary

A method of prediction of genetic merit from jointly distributed quantal and quantitative responses is described. The probability of response in one of two mutually exclusive and exhaustive categories is modeled as a non-linear function of classification and « risk » variables. Inferences are made from the mode of a posterior distribution resulting from the combination of a multivariate normal density, a priori, and a product binomial likelihood function. Parameter estimates are obtained with the Newton-Raphson algorithm, which yields a system similar to the mixed model equations. « Nested » Gauss-Seidel and conjugate gradient procedures are suggested to proceed from one iterate to the next in large problems. A possible method for estimating multivariate variance (covariance) components involving, jointly, the categorical and quantitative variates is presented. The method was applied to prediction of calving difficulty as a binary variable, with birth weight and pelvic opening as « risk » variables, in a Blonde d'Aquitaine population.

Key-words: sire evaluation, categorical data, non-linear models, prediction, Bayesian methods.

Résumé

Genetic prediction from binary and continuous data: application to calving difficulty, birth weight and pelvic opening.

This article presents a method for predicting genetic merit from quantitative and qualitative observations. The probability of response in one of the two mutually exclusive and exhaustive categories considered is expressed as a non-linear function of effects of classification factors and of risk variables. Statistical inference is based on the mode of the posterior distribution, which combines a multivariate normal prior density and a product-binomial likelihood function. Estimates are computed with the Newton-Raphson algorithm, which leads to a system of equations similar to the mixed model equations. For large data files, iterative solution methods such as Gauss-Seidel and conjugate gradient are suggested. A method is also proposed for estimating the variance and covariance components relating the discrete and continuous variables. Finally, the methodology is illustrated by a numerical application to the prediction of calving difficulty in the Blonde d'Aquitaine breed, using the all-or-none scoring of the trait on one hand, and the calf's birth weight and the dam's pelvic opening as risk variables on the other.

Mots-clés: sire evaluation, discrete data, non-linear model, prediction, Bayesian method.

I. Introduction

In many animal breeding applications, the data comprise observations on one or more quantitative variates and on categorical responses.
The probability of a « successful » outcome of the discrete variate, e.g., survival, may be a non-linear function of genetic and non-genetic variables (sire, breed, herd-year) and may also depend on quantitative response variates. A possible course of action in the analysis of this type of data would be to carry out a multiple-trait evaluation regarding the discrete trait as if it were continuous, and then to apply the available linear methodology (HENDERSON, 1973). Further, the model for the discrete trait should allow for the effects of the quantitative variates. In addition to the problems of describing discrete variation with linear models (COX, 1970; THOMPSON, 1979; GIANOLA, 1980), the presence of stochastic « regressors » in the model introduces a complexity which animal breeding theory has not addressed.

This paper describes a method of analysis for this type of data based on a Bayesian approach; hence, the distinction between « fixed » and « random » variables is circumvented. General aspects of the method of inference are described in detail to facilitate comprehension of subsequent developments. An estimation algorithm is developed, and some approximations for posterior inference and fit of the model are considered. A method is proposed to estimate jointly the components of variance and covariance involving the quantitative and the categorical variates. Finally, the procedures are illustrated with a data set pertaining to calving difficulty (categorical), birth weight and pelvic opening.

II. Method of inference: general aspects

Suppose the available data pertain to three random variables: two quantitative (e.g., calf's birth weight and dam's pelvic opening) and one binary (e.g., easy vs. difficult calving). Let the data for birth weight and dam's pelvic opening be represented by the vectors y_1 and y_2, respectively. Those for calving difficulty are represented by a set Y of indicator variables describing the configuration of an s × 2 contingency table, where the s rows indicate conditions affecting individual or grouped records. The two categories of response are mutually exclusive and exhaustive, and the number of observations in each row, n_i (> 0), is assumed fixed. The random count n_i1 (or, conversely, n_i − n_i1) can be null, so contingency tables where n_i = 1, for i = 1, ..., s, are allowed. The data can be represented symbolically by the vector Y' = (Y_1, Y_2, ..., Y_s), with y_i = Σ_{r=1}^{n_i} Y_ir, where Y_ir is an indicator variable equal to 1 if a response occurs and zero otherwise.

The data Y, y_1 and y_2, and a parameter vector θ, are assumed to have a joint density

f(Y, y_1, y_2, θ) = f_2(Y, y_1, y_2 | θ) f_1(θ),   (1)

where f_1(θ) is the marginal or a priori density of θ. From (1),

f_4(θ | Y, y_1, y_2) = f_2(Y, y_1, y_2 | θ) f_1(θ) / f_3(Y, y_1, y_2),   (2)

where f_3(Y, y_1, y_2) is the marginal density of the data, i.e., with θ integrated out, and f_4(θ | Y, y_1, y_2) is the a posteriori density of θ. As f_3(Y, y_1, y_2) does not depend on θ, one can write (2) as

f_4(θ | Y, y_1, y_2) ∝ f_2(Y, y_1, y_2 | θ) f_1(θ),   (3)

which is Bayes theorem in the context of our setting. Equation (3) states that inferences can be made a posteriori by combining prior information with data, the latter entering the posterior density via the likelihood function f_2(Y, y_1, y_2 | θ). The dispersion of θ reflects the relative uncertainty about θ a priori, based on the results of previous data or experiments. If a new experiment is conducted, the new data are combined with the prior density to yield the posterior; in turn, this posterior becomes the a priori density for further experiments. In this form, continued iteration with (3) illustrates the process of knowledge accumulation (CORNFIELD, 1969).
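To make the updating cycle in (3) concrete, the following minimal sketch (an illustration added here, not part of the paper) combines a normal prior with a binomial likelihood through a probit link on a one-dimensional grid, and then reuses the resulting posterior as the prior for a second batch of data; the single-parameter setting and all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm, binom

# Hypothetical one-parameter illustration: a single location parameter mu on a grid.
mu = np.linspace(-3.0, 3.0, 601)

def update(prior, n, k):
    """Eq. (3): posterior proportional to likelihood times prior, probit link as in eq. (9)."""
    p = norm.cdf(mu)              # probability of response for each grid value of mu
    like = binom.pmf(k, n, p)     # binomial likelihood for k responses out of n
    post = like * prior
    return post / post.sum()      # normalize over the grid

prior = norm.pdf(mu)              # N(0, 1) prior density (made-up choice)
post1 = update(prior, n=10, k=7)  # first batch of data (made-up counts)
post2 = update(post1, n=20, k=9)  # posterior from batch 1 acts as the prior for batch 2
print("posterior means:", (mu * post1).sum(), (mu * post2).sum())
```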
Comprehensive discussions of the merits, philosophy and limitations of Bayesian inference have been presented by CORNFIELD (1969) and LINDLEY & SMITH (1972). The latter argued, in the context of linear models, that (3) leads to estimates which may be substantially improved over those arising from the method of least squares. Equation (3) is taken in this paper as a point of departure for a method of estimation similar to the one used in early developments of mixed model prediction (HENDERSON et al., 1959). Best linear unbiased predictors could also be derived following Bayesian considerations (RÖNNINGEN, 1971; DEMPFLE, 1977).

The Bayes estimator of θ is the vector θ̂ minimizing the expected a posteriori risk E[l(θ̂, θ) | Y, y_1, y_2], where l(θ̂, θ) is a loss function (MOOD & GRAYBILL, 1963). If the loss is quadratic, equating (6) to zero yields θ̂ = E(θ | Y, y_1, y_2); differentiating (6) once more with respect to θ̂ gives a positive quantity, so θ̂ indeed minimizes the expected posterior risk, and it is identical to the best predictor of θ in the squared-error sense of HENDERSON (1973). Unfortunately, calculating θ̂ requires deriving the conditional density of θ given Y, y_1 and y_2, and then computing the conditional expectation. In practice, this is difficult or impossible to execute, as discussed by HENDERSON (1973). In view of these difficulties, LINDLEY & SMITH (1972) suggested approximating the posterior mean by the mode of the posterior density; if the posterior is unimodal and approximately symmetric, its mode will be close to its mean. HARVILLE (1977) pointed out that if an improper prior is used in place of the « true » prior, the posterior mode has the advantage over the posterior mean of being less sensitive to the tails of the posterior density.

In (3), it is convenient to write f_2(Y, y_1, y_2 | θ) = f_6(Y | y_1, y_2, θ) f_5(y_1, y_2 | θ), so the log of the posterior density can be written as

ln f_4(θ | Y, y_1, y_2) = ln f_6(Y | y_1, y_2, θ) + ln f_5(y_1, y_2 | θ) + ln f_1(θ) + const.   (8)

III. Model

A. Categorical variate

The probability of response (e.g., easy calving) for the i-th row of the contingency table can be written as some cumulative distribution function with an argument peculiar to this row. Possibilities (GIANOLA & FOULLEY, 1983) are the standard normal and logistic distribution functions. In the first case, the probability of response is

P_i1 = Φ(μ_i),   (9)

where φ(·) and Φ(·) are the density and distribution functions of a standard normal variate, respectively, and μ_i is a location variable. In the logistic case,

P_i1 = [1 + exp(−μ_i*)]^{-1},   (10)

where μ_i* is the corresponding location variable on the logistic scale. The justification of (9) and (10) is that they provide a liaison with the classical threshold model (DEMPSTER & LERNER, 1950; GIANOLA, 1982). If an easy calving occurs whenever the realized value of an underlying normal variable, distributed N(θ_i, 1), is less than a fixed threshold value t, we can write for the i-th row

P_i1 = Φ(t − θ_i).

Letting μ_i = t − θ_i, μ_i + 5 is the probit transformation used in dose-response relationships (FINNEY, 1952); defining μ_i* = μ_i π/√3, then

Φ(μ_i) ≈ [1 + exp(−μ_i π/√3)]^{-1}.   (11b)

For −5 < μ_i < 5, the difference between the left and right hand sides of (11b) does not exceed .022, which is negligible from a practical point of view.

Suppose that a normal distribution function is chosen to describe the probability of response. Let y_i3 be the underlying variable, which under the conditions of the i-th row of the contingency table is modeled as

y_i3 = x'_i3 β_3 + z'_i3 u_3 + e_i3,   (12a)

where x'_i3 and z'_i3 are known row vectors, β_3 and u_3 are unknown vectors, and e_i3 is a residual.
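The closeness of the probit and rescaled logistic functions used in (11b) is easy to check numerically. The short sketch below (an illustration added here, not taken from the paper) evaluates both functions over −5 < μ < 5 and prints the maximum absolute difference.

```python
import numpy as np
from scipy.stats import norm

# Compare Phi(mu) with the logistic function evaluated at mu * pi / sqrt(3), as in (11b).
mu = np.linspace(-5.0, 5.0, 100001)
probit = norm.cdf(mu)
logistic = 1.0 / (1.0 + np.exp(-mu * np.pi / np.sqrt(3.0)))

# The maximum difference is a little over 0.02, negligible for practical purposes.
print("max |Phi(mu) - logistic(mu*pi/sqrt(3))| =", np.max(np.abs(probit - logistic)))
```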
Likewise, the models for birth weight and pelvic opening are

y_i1 = x'_i1 β_1 + z'_i1 u_1 + e_i1,   (12b)
y_i2 = x'_i2 β_2 + z'_i2 u_2 + e_i2.   (12c)

Define μ_i in (9) as in (13), which holds if e_i3 is correlated only with e_i1 and e_i2. In a multivariate normal setting, (14) holds, where the ρ_ij's and the σ_ij's are residual correlations and residual standard deviations, respectively; similarly, ρ²_{3·12} denotes the fraction of the residual variance of the underlying variable explained by a linear relationship with e_i1 and e_i2, and the regression coefficients b_1 and b_2 are given by (16) and (17). Since the unit of measurement in the conditional distribution of the underlying variate, given β_1, β_2, u_1, u_2, β_3, u_3, y_i1 and y_i2, is the standard deviation of that distribution, (14) can be written in standardized form. Hence, (13) can be written in matrix notation as (19), where X_1, X_2, Z_1 and Z_2 are known matrices arising from writing (12b) and (12c) as vectors.

Now, suppose for simplicity that X_3 is a matrix such that all factors and levels in X_1 and X_2 are represented in X_3, and let Z_1 = Z_2 = Z_3. Write X_1 = X_3 Q_1 and X_2 = X_3 Q_2, where Q_1 and Q_2 are matrices of operators obtained by deleting columns of identity matrices of appropriate order. Thus, (19) can be written as (20). Letting τ = β_3 − Σ_{i=1}^{2} b_i Q_i β_i and v = u_3 − Σ_{i=1}^{2} b_i u_i, (20) can be expressed as (21). Note that if b_1 = b_2 = 0, then τ = β_3, v = u_3, and (21) is equal to the expectation of (12a).

Given θ, the indicator variables Y are assumed to be conditionally independent, and the likelihood function is taken as product binomial, so

f_6(Y | y_1, y_2, θ*) = Π_{i=1}^{s} P_i1^{n_i1} (1 − P_i1)^{n_i − n_i1},   (22)

where θ*' = [β'_1, β'_2, β'_3, u'_1, u'_2, u'_3, b_1, b_2]. Also, (23) and (24) follow. Letting θ' = [β'_1, β'_2, τ', u'_1, u'_2, v', b_1, b_2], then from (23) and (24) one obtains (25).

B. Conditional density of « risk » variables

The conditional density of y_1 and y_2 given θ is assumed to be multivariate normal, with location and dispersion following from (12b) and (12c), where (27) is a non-singular, known covariance matrix. Letting R^11, R^12, R^21 and R^22 be the respective partitions of the inverse of (27), one can write (28).

C. Prior density

In this paper we assume that the residual covariance matrix is known. From (16) and (17), this implies that b_1 and b_2 are also known. Therefore, the vector of unknowns becomes θ' = [β'_1, β'_2, τ', u'_1, u'_2, v']. The random effects are assigned a multivariate normal prior distribution with Cov(u_i, u_j) = G_ij (i, j = 1, ..., 3). Note that the resulting covariance matrix G_c depends on b_1 and b_2; when b_1 = b_2 = 0, it follows from (30) that G_c = {G_ij}. The log of the prior density then follows, with G_c^{-1} = {G_c^{ij}} (i, j = 1, ..., 3). Prior knowledge about β is assumed to be vague, so its prior precision matrix tends to a null matrix; therefore, the prior contribution to (8) reduces to the term in (33).

IV. Estimation

The terms of the log-posterior density in (8) are given in equations (22), (28) and (33). To obtain the mode of the posterior density, the derivatives of (8) with respect to θ are equated to zero. The resulting system of equations is not linear in θ, and an iterative solution is required. Letting L(θ) be the log of the posterior density, the Newton-Raphson algorithm (DAHLQUIST & BJÖRCK, 1974) consists of iterating with

θ^[i] = θ^[i-1] − {∂²L(θ)/∂θ∂θ'}^{-1} ∂L(θ)/∂θ,   (34)

with the derivatives evaluated at θ = θ^[i-1]. Note that the inverse of the matrix of second partial derivatives exists provided β can be uniquely defined, e.g., with X_i having full column rank, i = 1, ..., 3. It is convenient to write (34) as (35).

A. First derivatives

Differentiating (8) with respect to the elements of θ yields equations (36) through (41). The derivatives of L(θ) with respect to τ and v are slightly different; in these, x'_i3 is the i-th row of X_3. Now, let v be an s × 1 vector with elements v_j = n_j1 v_j1 + n_j2 v_j2, where v_j1 = −φ(μ_j)/P_j1 and v_j2 = φ(μ_j)/(1 − P_j1); note that v_j is the opposite of the sum of the normal scores for the j-th row. The derivatives with respect to τ and v can then be expressed compactly in terms of this vector.
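As an illustration of the quantities entering the first derivatives for the binary part of the model, the sketch below assembles, for each row of the contingency table, the element v_j = n_j1 v_j1 + n_j2 v_j2 described above, together with a diagonal weight of the usual probit expected-information form. The exact form of the weight is an assumption made for the illustration only, since the text states just that W is diagonal and that « scoring » simplifies it; the numerical rows are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def row_scores(mu, n1, n2):
    """Per-row quantities for the binary part of the model under a probit link.

    v_j = n_j1 * (-phi(mu_j)/P_j1) + n_j2 * (phi(mu_j)/(1 - P_j1)),
    i.e. the opposite of the sum of normal scores for row j, and
    w_j = n_j * phi(mu_j)**2 / (P_j1 * (1 - P_j1)): an assumed expected-information
    weight of the kind used when scoring replaces Newton-Raphson.
    """
    p = norm.cdf(mu)
    phi = norm.pdf(mu)
    v = n1 * (-phi / p) + n2 * (phi / (1.0 - p))
    w = (n1 + n2) * phi**2 / (p * (1.0 - p))
    return v, w

# Hypothetical contingency-table rows: (mu_j, n_j1 responses, n_j2 non-responses).
mu = np.array([0.5, -0.2, 1.1])
n1 = np.array([8, 3, 12])
n2 = np.array([2, 4, 1])
v, w = row_scores(mu, n1, n2)
print("v =", v)
print("diag(W) =", w)
```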
B. Second derivatives

The symmetric matrix of second partial derivatives can be deduced from equations (36) through (41); it is given explicitly in (42a) through (42k). In (42i) through (42k), W is an s × s diagonal matrix; the form of its elements indicates that calculations become somewhat simpler if « scoring » is used instead of Newton-Raphson.

C. Equations

Using the first and second derivatives in (36)-(41) and (42a)-(42k), respectively, equations (35) can be written, after algebra, as (45). In (45), β_1, β_2, u_1 and u_2 appear as solutions at the current iterate, while the Δ's are corrections pertaining to the parameters affecting the probability of response, e.g., Δτ^[i] = τ^[i] − τ^[i-1]. Iteration proceeds by first taking a guess for τ and v, calculating W^[0] and v^[0], amending the right-hand sides, and then solving for the unknowns. The cycle is repeated until the solutions stabilize. Equations (45) can also be written as in (46). The similarity between (46) and the « mixed model equations » (HENDERSON, 1973) should be noted. The coefficient matrix and the « working » vector y_3^[i-1] change at every iteration; y_3^[i-1] is constructed from X_3 τ^[i-1], Z_3 v^[i-1] and (W^[i-1])^{-1} times the score vector v^[i-1].

V. Solving the equations

In animal breeding practice, solving (45) or (46) poses a formidable numerical problem. The order of the coefficient matrix can be in the tens of thousands, and this difficulty arises at every iterate. As β_1, β_2, u_1 and u_2 are « nuisance » variables in this problem, the first step is to eliminate them from the system, if this is feasible. The order of the remaining equations is still very large in most animal breeding problems, so direct inversion is not possible. At the i-th iterate, the remaining equations can be written as (47). Next, decompose P^[i-1] into the sum of three matrices L^[i-1], D^[i-1] and U^[i-1], which are lower triangular, diagonal and upper triangular, respectively. Then, for each iterate i, sub-iterate with (48) for j = 0, 1, ...; the sub-iteration can start from a null vector. This is a « nested » Gauss-Seidel iteration, which converges because P^[i-1] is symmetric and positive definite (VAN NORTON, 1960). One then returns to (47) and to the back solution, and works with (48) again. The cycle finishes when the solutions stabilize.
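A rough sketch of a Gauss-Seidel sweep of the kind used in the « nested » iteration is given below for a generic symmetric positive-definite system; it is an illustration added here, not the paper's system (45)-(46), and the matrix, right-hand side and the name gamma for the solution vector are hypothetical.

```python
import numpy as np

def gauss_seidel(P, rhs, n_iter=200, tol=1e-10):
    """Solve P @ gamma = rhs by Gauss-Seidel sweeps, with P split as L + D + U
    (strictly lower triangular, diagonal, strictly upper triangular). Convergence
    holds for symmetric positive-definite P (VAN NORTON, 1960)."""
    L = np.tril(P, k=-1)
    D = np.diag(np.diag(P))
    U = np.triu(P, k=1)
    gamma = np.zeros_like(rhs)            # the sub-iteration can start from zero
    for _ in range(n_iter):
        # (L + D) gamma_new = rhs - U gamma_old; a generic solver is used for brevity
        gamma_new = np.linalg.solve(L + D, rhs - U @ gamma)
        if np.max(np.abs(gamma_new - gamma)) < tol:
            return gamma_new
        gamma = gamma_new
    return gamma

# Small made-up symmetric positive-definite system standing in for one iterate.
P = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.7],
              [0.5, 0.7, 2.0]])
rhs = np.array([1.0, 2.0, 0.5])
print(gauss_seidel(P, rhs))
print(np.linalg.solve(P, rhs))            # direct solution, for comparison
```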
[...] non-linear models is an open area of potential importance.

VII. Numerical application

Data were obtained from 47 Blonde d'Aquitaine heifers mated to the same bull and assembled to calve at the Casteljaloux Station, France. Each calving record included information on the following: region of origin and sire of the heifer, pelvic opening and season of calving, sex and birth weight of the calf, and calving difficulty. [...]

[...] evaluation based on raw frequencies can be seriously misleading. Progeny group sizes were small (Table 2) and none of the evaluations calculated with (63) could be considered different from zero (Table 5). However, the [...]

VIII. Conclusions

This paper presents a solution to the problem of estimating the genetic merit of candidates for selection when both quantal and continuous information is available in a set of [...]

[...] For a discussion of Bayes estimation of variance components, see HILL (1965), TIAO & TAN (1965), TIAO & BOX (1967), LINDLEY & SMITH (1972) and HARVILLE (1977). LEONARD (1972) considered estimation of variance components with binomial data for a one-way model. Equations (46) suggest methods for estimating variance and covariance components in this joint quantitative-categorical setting. Write [...]

[...] in the following section [...] = −.0184 [...] Note that as μ_ijklm increases, so does the probability of difficult calving; also, μ_ijklm increases with increased birth weight and decreases with increased pelvic opening. The vectors τ and v were then [...]

B. Conditional covariance

Given θ, the variance-covariance matrix of birth weight and pelvic opening is [...], where ⊗ denotes the Kronecker product. The values [...]

[...] method can be used to estimate the additive genetic covariance between the quantitative traits and the hypothetical underlying variate with binary [...] Expressions in (53) and (54) suggest that some of the methods for estimating variance and covariance components in linear models could be used to estimate the covariance structure in (54). One possibility would be to [...]

[...] adapted to the situation where the probability of « response » is a function of continuous « risk » variables. Also, consideration is given to the assumption that candidates for selection are sampled from a distribution with second moments known a priori. The method can be extended to multiple ordered or unordered categories of response along the lines presented by GIANOLA & FOULLEY (1983). The method is non-linear [...]

[...] by « Henderson's Mixed Model Method ». Z. Tierz. Züchtbiol., 88, 186-193.
SCHAEFFER L.R., WILTON J.W., THOMPSON R., 1978. Simultaneous estimation of variance and covariance components from multitrait mixed model equations. Biometrics, 34, 199-208.
THOMPSON R., 1979. Sire evaluation. Biometrics, 35, 339-353.
TIAO G.C., TAN W.Y., 1965. Bayesian analysis of random effects models in the analysis of variance. I. Posterior [...]

[...] calvings between sexes and maternal grandsires.

A. Models

Birth weight was modeled as [...], where D_i is the effect of the i-th region of origin of the heifer (i = 1, 2), T_j is the effect of the j-th season of calving (j = 1, 2), L_k is the effect of the k-th sex of calf (k = 1: male, 2: female), S_l is the effect of the l-th sire of the heifer (l = 1, ..., 6), and e_ijklm is a residual. The vectors β and u were defined [...]

[...] Prior knowledge about β_1, β_2 and τ was assumed to be vague. The prior covariance matrix of u_1, u_2 and v was taken as in (31), where G_c is a 3 × 3 matrix calculated [...] The unconditional prior covariance [...], where ρ_ij is the genetic correlation between traits i and j in the underlying scale; the genetic correlations used were (MENISSIER & S., personal communication) ρ_13 = .70 and ρ_23 = −.50. The standard deviations were [...]

[...] approximation. In each of the two cases (b_1 ≠ 0 and b_2 ≠ 0, and b_1 = b_2 = 0), computations were also conducted using the logistic approximation in (11b). Since the residual variance in the logistic scale is π²/3, the prior covariance matrices G and G_c discussed in the previous section were rescaled accordingly, where L is a 3 × 3 diagonal matrix with elements 1, 1 and π/√3. Solutions to (45) and (46) obtained with [...]
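A minimal sketch of the rescaling just described, assuming that the prior covariance matrix is pre- and post-multiplied by the diagonal matrix L with elements 1, 1 and π/√3; that assumed form, and the numerical entries of G, are made up for illustration.

```python
import numpy as np

# Hypothetical 3x3 prior covariance among the two continuous traits and the
# underlying variate (values invented for illustration).
G = np.array([[1.00, 0.30, 0.40],
              [0.30, 1.00, -0.20],
              [0.40, -0.20, 1.00]])

# The residual variance on the logistic scale is pi^2/3 instead of 1, so the row
# and column of G referring to the underlying variate are rescaled by pi/sqrt(3)
# (assumed form L G L).
L = np.diag([1.0, 1.0, np.pi / np.sqrt(3.0)])
G_logistic = L @ G @ L
print(G_logistic)
```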

Ngày đăng: 09/08/2014, 22:23

Tài liệu cùng người dùng

Tài liệu liên quan