Randomness and Optimal Estimation in Data Sampling


M. Khoshnevisan, S. Saxena, H. P. Singh, S. Singh, F. Smarandache

RANDOMNESS AND OPTIMAL ESTIMATION IN DATA SAMPLING (second edition)

[Cover figure: percent relative efficiency (PRE) and ARB×1000 of the MMSE estimator plotted against Δ, with the cut-off point marked.]

American Research Press, Rehoboth, 2002

Dr. Mohammad Khoshnevisan, Griffith University, School of Accounting and Finance, Qld., Australia
Dr. Housila P. Singh and S. Saxena, School of Statistics, Vikram University, Ujjain 456010, India
Dr. Sarjinder Singh, Department of Mathematics and Statistics, University of Saskatchewan, Canada
Dr. Florentin Smarandache, Department of Mathematics, UNM, USA

This book can be ordered in microfilm format from:
ProQuest Information & Learning (University of Microfilm International)
300 N. Zeeb Road, P.O. Box 1346, Ann Arbor, MI 48106-1346, USA
Tel.: 1-800-521-0600 (Customer Service)
http://wwwlib.umi.com/bod/ (Books on Demand)

Copyright 2002 by American Research Press & Authors, Rehoboth, Box 141, NM 87322, USA

Many books can be downloaded from our E-Library of Science:
http://www.gallup.unm.edu/~smarandache/eBooks-otherformats.htm

This book has been peer reviewed and recommended for publication by:
Dr. V. Seleacu, Department of Mathematics / Probability and Statistics, University of Craiova, Romania;
Dr. Sabin Tabirca, University College Cork, Department of Computer Science and Mathematics, Ireland;
Dr. Vasantha Kandasamy, Department of Mathematics, Indian Institute of Technology, Madras, Chennai 600 036, India.

ISBN: 1-931233-68-3
Standard Address Number 297-5092
Printed in the United States of America

Foreword

The purpose of this book is to postulate some theories and test them numerically. Estimation is often a difficult task, and it has wide application in the social sciences and financial markets. In order to obtain the optimum efficiency for
some classes of estimators, we have divided this book into three specialized sections.

Part 1. In this section we study a class of shrinkage estimators for the shape parameter beta in failure-censored samples from the two-parameter Weibull distribution, when some a priori or guessed interval containing the parameter beta is available in addition to the sample information, and analyse their properties. Some estimators are generated from the proposed class and compared with the minimum mean squared error (MMSE) estimator. Numerical computations in terms of percent relative efficiency and absolute relative bias indicate that certain of these estimators substantially improve upon the MMSE estimator in some guessed interval of the parameter space of beta, especially for censored samples of small size. Subsequently, a modified class of shrinkage estimators is proposed, together with its properties.

Part 2. In this section we analyze two classes of estimators for the population median MY of the study character Y, using information on two auxiliary characters X and Z in double sampling. We show that the suggested classes of estimators are more efficient than the one suggested by Singh et al. (2001). Estimators based on estimated optimum values are also considered, with their properties. The optimum values of the first-phase and second-phase sample sizes are also obtained for a fixed cost of survey.

Part 3. In this section we investigate the impact of measurement errors on a family of estimators of the population mean using multiauxiliary information. This error minimization is vital in financial modeling, where the objective function rests upon minimizing over-shooting and under-shooting.

This book has been designed for graduate students and researchers who are active in the area of estimation and data sampling applied in financial survey modeling and applied statistics. In our future research, we will address the computational aspects of the algorithms developed in this
book.

The Authors

Estimation of Weibull Shape Parameter by Shrinkage Towards An Interval Under Failure Censored Sampling

Housila P. Singh (1), Sharad Saxena (1), Mohammad Khoshnevisan (2), Sarjinder Singh (3), Florentin Smarandache (4)
(1) School of Studies in Statistics, Vikram University, Ujjain 456 010 (M.P.), India
(2) School of Accounting and Finance, Griffith University, Australia
(3) Department of Mathematics and Statistics, University of Saskatchewan, Canada
(4) Department of Mathematics, University of New Mexico, USA

Abstract

This paper proposes a class of shrinkage estimators for the shape parameter β in failure-censored samples from the two-parameter Weibull distribution, when some 'a priori' or guessed interval containing the parameter β is available in addition to the sample information, and analyses their properties. Some estimators are generated from the proposed class and compared with the minimum mean squared error (MMSE) estimator. Numerical computations in terms of percent relative efficiency and absolute relative bias indicate that certain of these estimators substantially improve upon the MMSE estimator in some guessed interval of the parameter space of β, especially for censored samples of small size. Subsequently, a modified class of shrinkage estimators is proposed, together with its properties.

Key Words & Phrases: Two-parameter Weibull distribution, Shape parameter, Guessed interval, Shrinkage estimation technique, Absolute relative bias, Relative mean square error, Percent relative efficiency.

2000 MSC: 62E17

1. INTRODUCTION

Identical rudiments subjected to identical environmental conditions will fail at different and unpredictable times. The 'time of failure' or 'life length' of a component, measured from some specified time until it fails, is represented by the continuous random variable X. One distribution that has been used extensively in recent years to deal with such problems of reliability and life-testing is the Weibull distribution, introduced by Weibull (1939), who proposed it in
connection with his studies on the strength of materials. The Weibull distribution includes the exponential and the Rayleigh distributions as special cases. The use of the distribution in reliability and quality control work was advocated by many authors following Weibull (1951), Lieblin and Zelen (1956), Kao (1958, 1959), Berrettoni (1964) and Mann (1968 A). Weibull (1951) showed that the distribution is useful in describing 'wear-out' or fatigue failures. Kao (1959) used it as a model for vacuum tube failures, while Lieblin and Zelen (1956) used it as a model for ball bearing failures. Mann (1968 A) gives a variety of situations in which the distribution is used for other types of failure data. The distribution often becomes suitable where the conditions for "strict randomness" of the exponential distribution are not satisfied, with the shape parameter β having a characteristic or predictable value depending upon the fundamental nature of the problem being considered.

1.1 The Model

Let x1, x2, ..., xn be a random sample of size n from a two-parameter Weibull distribution, whose probability density function is given by

f(x; α, β) = β α^(−β) x^(β−1) exp{−(x/α)^β} ;  x > 0, α > 0, β > 0,   (1.1)

where α, the characteristic life, acts as a scale parameter and β is the shape parameter. The variable Y = ln x follows an extreme value distribution, sometimes called the log-Weibull distribution [e.g. White (1969)], whose cumulative distribution function is given by

F(y) = 1 − exp{−exp[(y − u)/b]} ;  −∞ < y < ∞, −∞ < u < ∞, b > 0,   (1.2)

where b = 1/β and u = ln α are respectively the scale and location parameters. The inferential procedures for the above model are quite complex. Mann (1967 A, B, 1968 B) suggested the generalised least squares estimator using the variances and covariances of the ordered observations, for which tables are available up to n = 25 only.

1.2 Classical Estimators

Suppose x1, x2, ..., xm are the m smallest ordered observations in a sample of size n from a Weibull distribution.
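The special cases and the log-Weibull link can be verified numerically. The sketch below (plain Python; the function names are ours, not from the text) checks that β = 1 reduces (1.1) to the exponential density, and that Y = ln X has the cdf (1.2) with u = ln α and b = 1/β.

```python
import math

def weibull_pdf(x, alpha, beta):
    """Two-parameter Weibull density f(x; alpha, beta) of equation (1.1)."""
    return beta * alpha ** (-beta) * x ** (beta - 1) * math.exp(-(x / alpha) ** beta)

def log_weibull_cdf(y, u, b):
    """Extreme-value (log-Weibull) cdf F(y) of equation (1.2)."""
    return 1.0 - math.exp(-math.exp((y - u) / b))

x, alpha, beta = 2.0, 3.0, 2.5

# beta = 1 recovers the exponential density (1/alpha) exp(-x/alpha)
assert abs(weibull_pdf(x, alpha, 1.0) - (1 / alpha) * math.exp(-x / alpha)) < 1e-12

# If X is Weibull(alpha, beta), then P(X <= x) = 1 - exp(-(x/alpha)^beta),
# which must equal F(ln x) with u = ln(alpha) and b = 1/beta
weibull_cdf = 1.0 - math.exp(-(x / alpha) ** beta)
assert abs(weibull_cdf - log_weibull_cdf(math.log(x), math.log(alpha), 1.0 / beta)) < 1e-12
print("special-case checks pass")
```

The same change of variable explains why inference on b = 1/β in the extreme value model carries over directly to the shape parameter β.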
Bain (1972) defined an unbiased estimator for b as

b̂_u = − Σ_{i=1}^{m−1} (y_i − y_m) / (n K_(m,n)),   (1.3)

where

K_(m,n) = −(1/n) E[ Σ_{i=1}^{m−1} (v_i − v_m) ],   (1.4)

and the v_i = (y_i − u)/b are ordered variables from the extreme value distribution with u = 0 and b = 1. The estimator b̂_u is found to have high relative efficiency for heavily censored cases. Contrary to this, the asymptotic relative efficiency of b̂_u is zero for complete samples. Engelhardt and Bain (1973) suggested a general form of the estimator as

b̂_g = − Σ_{i=1}^{m} (y_i − y_m) / (n K_(g,m,n)),   (1.5)

where g is a constant chosen so that the variance of b̂_g is least, and K_(g,m,n) is an unbiasing constant. The statistic h b̂_g / b has been shown to follow approximately a χ² distribution with h degrees of freedom, where h = 2 / Var(b̂_g / b). Therefore, we have

E[(β̂/β)^(−jp)] = ((h − 2)/2)^(−jp) Γ[(h/2) + jp] / Γ(h/2) ;  j = 1, 2,   (1.6)

where β̂ = (h − 2)/t is an unbiased estimator of β with Var(β̂) = 2β²/(h − 4), and t = h b̂_g has density

f(t) = [1/Γ(h/2)] (β/2)^(h/2) t^((h/2)−1) exp(−βt/2) ;  t > 0.

The MMSE estimator of β, among the class of estimators of the form C β̂ (C being a constant for which the mean square error (MSE) of C β̂ is minimum), is

β̂_M = (h − 4)/t,   (1.7)

having absolute relative bias and relative mean squared error

ARB(β̂_M) = 2/(h − 2),   (1.8)

and

RMSE(β̂_M) = 2/(h − 2),   (1.9)

respectively.

1.3 Shrinkage Technique of Estimation

A considerable amount of work dealing with shrinkage estimation methods for the parameters of the Weibull distribution has been done since 1970. An experimenter involved in life-testing experiments becomes quite familiar with failure data and hence may often develop knowledge about some parameters of the distribution. In the case of the Weibull distribution, for example, knowledge of the shape parameter β can be utilised to develop improved inference for the other parameters.
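The relations (1.7)-(1.9) can be checked by a quick Monte Carlo sketch, assuming, as the density above states, that t is distributed as 1/β times a χ² variate with h degrees of freedom; the sample sizes and seed below are arbitrary choices of ours.

```python
import random

# Monte Carlo sketch of (1.7)-(1.9): t = h * b_g is taken, per the density
# above, to be (1/beta) times a chi-square variate with h degrees of freedom.
random.seed(42)
beta, h, n_sims = 2.0, 12, 100_000

def chi2(df):
    # chi-square draw as a sum of squared standard normals
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

ts = [chi2(h) / beta for _ in range(n_sims)]

beta_hat  = [(h - 2) / t for t in ts]   # unbiased estimator of beta
beta_mmse = [(h - 4) / t for t in ts]   # MMSE estimator (1.7)

mean_hat  = sum(beta_hat) / n_sims      # should be close to beta
mean_mmse = sum(beta_mmse) / n_sims

arb = abs(mean_mmse - beta) / beta      # should be close to 2/(h - 2)
print(mean_hat, arb)
```

With h = 12 the theoretical ARB is 2/10 = 0.2, and the simulated value lands within Monte Carlo error of it, while the empirical mean of β̂ stays close to β.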
Thompson (1968 A, B) considered the problem of shrinking an unbiased estimator ξ̂ of the parameter ξ either towards a natural origin ξ_0 or towards an interval (ξ_1, ξ_2), and suggested the shrunken estimators

h ξ̂ + (1 − h) ξ_0  and  h ξ̂ + (1 − h) [(ξ_1 + ξ_2)/2],

where 0 < h < 1 is a constant. The relevance of such shrunken estimators lies in the fact that, though perhaps biased, they have smaller MSE than ξ̂ for ξ in some interval around ξ_0 or (ξ_1 + ξ_2)/2, as the case may be. This type of shrinkage estimation of the Weibull parameters has been discussed by various authors, including Singh and Bhatkulikar (1978), Pandey (1983), Pandey and Upadhyay (1985, 1986) and Singh and Shukla (2000). For example, Singh and Bhatkulikar (1978) suggested performing a significance test of the validity of the prior value of β (which they took as 1). Pandey (1983) also suggested a similar preliminary test shrunken estimator for β.

In the present investigation, it is desired to estimate β in the presence of prior information available in the form of an interval (β_1, β_2) and the sample information contained in β̂. Consequently, this article is an attempt in the direction of obtaining an efficient class of shrunken estimators for the shape parameter β. The properties of the suggested class of estimators are discussed theoretically and empirically. The proposed class of shrunken estimators is furthermore modified, with its properties.

2. THE PROPOSED CLASS OF SHRINKAGE ESTIMATORS

Consider a class of estimators β*_(p,q) for β in model (1.1) defined by

β*_(p,q) = [(β_1 + β_2)/2] { q + w [ (β_1 + β_2) / (2 β̂) ]^p },   (2.1)

where p and q are real numbers such that p ≠ 0 and q > 0, and w is a stochastic variable (which may in particular be a scalar), to be chosen such that the MSE of β*_(p,q) is minimum. Assuming w a scalar and using result (1.6), the MSE of β*_(p,q) is given by
h−2 2p Γ[(h / 2) + p ] Γ(h / 2)   Γ[(h / 2) + p ] + {q∆ − 1}w∆( p +1)    Γ(h / 2)  h−2  p (2.2)  β + β2 ∆=  2β  where     Minimising (2.2) with respect to w and replacing β by its unbiased estimator ∧ β , we get   β1 + β  ∧  ∧ p − q   − ββ ∧     w= w( p) ( p +1)  β1 + β      (2.3) p  h −  Γ[( h / 2) + p] w( p ) =  ,    Γ[(h / 2) + p] where (2.4) lies between and 1, {i.e., < w(p) ≤ 1} provided gamma functions exist, i.e., p > ( − h / 2) Substituting (2.3) in (2.1) yields a class of shrinkage estimators for β in a more feasible form as  β + β2  h−2 ˆ β ( p ,q ) =   { − w( p )}  w( p ) + q  t    (2.5) 2.1 Non-negativity ˆ µg (7 ) ˆ µg (8 ) x = y ∏ i  i =1  µ i p ωi   ,   p ∑ω i =1 −1  p ωµ  = y ∑ i i  ,  i =1 xi  i = 1, [Tuteja and Bahl (1991)] p ∑ω i =1 i p  µ ˆ (9 ) µ g = y ω p +1 + ∑ ω i  i x i =1  i   ,   p  x ˆ (10 ) µ g = y ω p +1 + ∑ ω i  i µ i =1  i  = 1, [Tuteja and Bahl (1991)]  ,   ˆ µg (11) q µ = y ∑ ω i  i   i =1  xi p ˆ  x + ∑  i  µ  i =q +1  i p +1 ∑ω i =1 i p +1 = ∑ω i   ;   p  q   ∑ ω i + ∑ ω i  ; [Srivastava (1965) and Rao   i = q +1  i =1  i =1 = =1 and Mudhalkar (1967)] ˆ µg (12 ) ˆ µg (13 ) x = y ∏ i  i =1  µ i p αi   (α i ' s are suitably constants ) [Srivastava (1967)]    x  = y ∏ 2 −  i µ i =1    i p p ˆ (14 ) µg = y ∏ i =1     αi    [Sahai and Rey (1980)]   xi [Walsh (1970)] {µ i + α i ( xi − µ i )}  p  ˆ (15 ) µ g = y exp ∑θ i log u i  [Srivastava (1971)]  i =1   p  ˆ g (16 ) = y exp ∑θ i (u i − 1) [Srivastava (1971)] µ  i =1  p ˆ (17 ) µ g = y ∑ ω i exp {(θ i / ω i ) log u i }; i =1 p ∑ω i =1 i = 1, [Srivastava (1971)] p ˆ (18 ) µ g = y + ∑ α i ( xi − µ i ) i =1 49 ˆ etc may be identified as particular members of the suggested family of estimators µ g The MSE of these estimators can be obtained from (2.4) It is well known that ( )( V ( y ) = µ / n C 02 + C (20 ) ) (2.8) It 
It follows from (2.6) and (2.8) that the minimum MSE of μ̂_g is no larger than the variance of the conventional unbiased estimator ȳ.

On substituting σ_(0)² = 0 and σ_(i)² = 0 for all i = 1, 2, ..., p in equation (2.4), we obtain the no-measurement-error case. In that case the MSE of μ̂_g is given by

MSE(μ̂_g) = (1/n) [ μ_0² C_0² + 2 μ_0 b^T g*^(1)(μ_0, e^T) + g*^(1)(μ_0, e^T)^T A* g*^(1)(μ_0, e^T) ] = MSE(μ̂_g*),   (2.9)

where

μ̂_g* = g*( Ȳ, X̄_1/μ_1, X̄_2/μ_2, ..., X̄_p/μ_p ) = g*( Ȳ, U^T ),   (2.10)

and Ȳ and X̄_i (i = 1, 2, ..., p) are the sample means of the characteristics Y and X_i based on true measurements (Y_j, X_ij, i = 1, 2, ..., p; j = 1, 2, ..., n). The family of estimators μ̂_g* at (2.10) is a generalized version of Srivastava (1971, 80).

The MSE of μ̂_g* is minimized for

g*^(1)(μ_0, e^T) = − A*^(−1) b μ_0.   (2.11)

Thus the resulting minimum MSE of μ̂_g* is given by

min.MSE(μ̂_g*) = (μ_0²/n) [ C_0² − b^T A*^(−1) b ] = (σ_0²/n)(1 − R²),   (2.12)

where A* = [a*_ij] is a p×p matrix with a*_ij = ρ_ij C_i C_j, and R stands for the multiple correlation coefficient of Y on X_1, X_2, ..., X_p.

From (2.6) and (2.12) the increase in the minimum MSE of μ̂_g due to measurement errors is obtained as

min.MSE(μ̂_g) − min.MSE(μ̂_g*) = (μ_0²/n) [ C_(0)² + b^T A*^(−1) b − b^T A^(−1) b ] > 0.

This is due to the fact that the measurement errors introduce the variances of the fallible measurements of the study variate Y and the auxiliary variates X_i. Hence there is a need to take the contribution of measurement errors into account.

3. BIASES AND MEAN SQUARE ERRORS OF SOME PARTICULAR ESTIMATORS IN THE PRESENCE OF MEASUREMENT ERRORS

To obtain the bias of the estimator μ̂_g, we further assume that the third partial derivatives of g(ȳ, u^T) also exist and are continuous and bounded. Then expanding g(ȳ, u^T) about the point (ȳ, u^T) = (μ_0, e^T) in a third-order Taylor's series we obtain

μ̂_g = g(μ_0, e^T) + (ȳ − μ_0) ∂g(·)/∂ȳ|_(μ_0, e^T) + (u − e)^T g^(1)(μ_0, e^T)
      + (1/2) { (ȳ − μ_0)² ∂²g(·)/∂ȳ²|_(μ_0, e^T) + 2 (ȳ − μ_0)(u − e)^T g^(12)(μ_0, e^T) + (u − e)^T g^(2)(μ_0, e^T)(u − e) }
      + (1/6) { (ȳ − μ_0) ∂/∂ȳ + (u − e)^T ∂/∂u }³ g(ȳ*, u*^T),   (3.1)

where g^(12)(μ_0, e^T) denotes the vector of second-order partial derivatives of g(ȳ, u^T) with respect to ȳ and u, and g^(2)(μ_0, e^T) the matrix of second partial derivatives with respect to u, both evaluated at the point (ȳ, u^T) = (μ_0, e^T). Noting that

g(μ_0, e^T) = μ_0,  ∂g(·)/∂ȳ|_(μ_0, e^T) = 1,  ∂²g(·)/∂ȳ²|_(μ_0, e^T) = 0,

and taking expectation, we obtain the bias of the family of estimators μ̂_g to the first degree of approximation:

B(μ̂_g) = (1/2) [ E{(u − e)^T g^(2)(μ_0, e^T)(u − e)} + (2 μ_0 / n) b^T g^(12)(μ_0, e^T) ],   (3.2)

where b^T = (b_1, b_2, ..., b_p) with b_i = ρ_0i C_0 C_i (i = 1, 2, ..., p). Thus we see that the bias of μ̂_g depends also upon the second-order partial derivatives of the function g(ȳ, u^T) at the point (μ_0, e^T), and hence will be different for the different optimum estimators of the family.

The biases and mean square errors of the estimators μ̂_g^(i), i = 1 to 18, up to terms of order n^(−1), along with the values of g^(1)(μ_0, e^T), g^(2)(μ_0, e^T) and g^(12)(μ_0, e^T), are given in Table 3.1.

[Table 3.1, Biases and mean squared errors of various estimators of μ_0, is not reproduced here.]

4. ESTIMATORS BASED ON ESTIMATED OPTIMUM VALUES

It may be noted that the minimum MSE (2.6) is obtained only when the optimum values of the constants involved in the estimator, which are functions of the unknown population parameters μ_0, b and A, are known quite accurately. To use such estimators in practice, one has to use some guessed values of the parameters μ_0, b and A, obtained either through past experience or through a pilot sample survey. Das and Tripathi (1978, sec. 3) have illustrated that even if the values of the parameters used in the estimator are not exactly equal to their optimum values as given by (2.5) but are close enough, the resulting estimator will be better than the conventional unbiased estimator ȳ. For further discussion on this issue, the reader is referred to Murthy (1967), Reddy (1973), Srivenkataramana and Tracy (1984) and Sahai and Sahai (1985).

On the other hand, if the experimenter is unable to guess the values of the population parameters due to lack of experience, it is advisable to replace the unknown population parameters by their consistent estimators. Let φ̂ be a consistent estimator of φ = A^(−1) b.
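As a numerical illustration of the two minimum-MSE expressions, the sketch below evaluates (2.12) and its measurement-error counterpart for p = 2. All parameter values are invented, and the form of A (adding the error coefficients C_(i)² to the diagonal of A*) is our assumption, consistent with the substitution σ_(i)² = 0 giving back the error-free case.

```python
# Minimum-MSE comparison with and without measurement error, p = 2 auxiliaries.
mu0, n = 50.0, 200
C0, C = 0.30, [0.25, 0.40]           # true coefficients of variation
C0e, Ce = 0.10, [0.08, 0.12]         # measurement-error counterparts C_(0), C_(i)
rho01, rho02, rho12 = 0.7, 0.5, 0.3  # correlations of Y with X1, X2, and X1 with X2

b = [rho01 * C0 * C[0], rho02 * C0 * C[1]]

def quad_form_inv(a11, a12, a22, v):
    """v^T A^{-1} v for a symmetric 2x2 matrix A = [[a11, a12], [a12, a22]]."""
    det = a11 * a22 - a12 * a12
    inv = [[a22 / det, -a12 / det], [-a12 / det, a11 / det]]
    return sum(v[i] * inv[i][j] * v[j] for i in range(2) for j in range(2))

# A* (error-free) and A (with the assumed measurement-error terms on the diagonal)
a_star = (C[0] ** 2, rho12 * C[0] * C[1], C[1] ** 2)
a_full = (C[0] ** 2 + Ce[0] ** 2, rho12 * C[0] * C[1], C[1] ** 2 + Ce[1] ** 2)

mse_no_error   = (mu0 ** 2 / n) * (C0 ** 2 - quad_form_inv(*a_star, b))
mse_with_error = (mu0 ** 2 / n) * (C0 ** 2 + C0e ** 2 - quad_form_inv(*a_full, b))

assert mse_with_error > mse_no_error  # measurement errors inflate the minimum MSE
print(mse_no_error, mse_with_error)
```

The inflation has the two sources visible in the formula: the extra term C_(0)² from the fallible study variate, and the weakened regression adjustment b^T A^(−1) b < b^T A*^(−1) b from the fallible auxiliaries.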
We then replace φ by φ̂, and also μ_0 by ȳ if necessary, in the optimum μ̂_g, resulting in the estimator μ̂_g(est), say, which will now be a function of ȳ, u and φ̂. Thus we define a family of estimators (based on estimated optimum values) of μ_0 as

μ̂_g(est) = g**( ȳ, u^T, φ̂^T ),   (4.1)

where g**(·) is a function of (ȳ, u^T, φ̂^T) such that

g**(μ_0, e^T, φ^T) = μ_0 for all φ,  ⇒  ∂g**(·)/∂ȳ|_(μ_0, e^T, φ^T) = 1,

∂g**(·)/∂u|_(μ_0, e^T, φ^T) = ∂g(·)/∂u|_(μ_0, e^T) = − μ_0 A^(−1) b = − μ_0 φ,   (4.2)

and

∂g**(·)/∂φ̂|_(μ_0, e^T, φ^T) = 0.

With these conditions and following Srivastava and Jhajj (1983), it can be shown to the first degree of approximation that

MSE(μ̂_g(est)) = min.MSE(μ̂_g) = (μ_0²/n) [ C_0² + C_(0)² − b^T A^(−1) b ].

Thus if the optimum values of the constants involved in the estimator are replaced by their consistent estimators, and conditions (4.2) hold true, the resulting estimator μ̂_g(est) will have the same asymptotic mean square error as the optimum μ̂_g. Our work needs to be extended, and future research will explore the computational aspects of the proposed algorithm.

REFERENCES

Biermer, P.P., Groves, R.M., Lyberg, L.E., Mathiowetz, N.A. and Sudman, S. (1991): Measurement Errors in Surveys. Wiley, New York.
Cochran, W.G. (1963): Sampling Techniques. John Wiley, New York.
Cochran, W.G. (1968): Errors of measurement in statistics. Technometrics, 10(4), 637-666.
Das, A.K. and Tripathi, T.P. (1978): Use of auxiliary information in estimating the finite population variance. Sankhya, C, 40, 139-148.
Fuller, W.A. (1995): Estimation in the presence of measurement error. International Statistical Review, 63, 2, 121-147.
John, S. (1969): On multivariate ratio and product estimators. Biometrika, 533-536.
Manisha and Singh, R.K. (2001): An estimation of population mean in the presence of measurement errors. Jour. Ind. Soc. Agri. Statist., 54(1), 13-18.
Mohanty, S. and Pattanaik, L.M. (1984): Alternative multivariate ratio estimators using geometric and
harmonic means. Jour. Ind. Soc. Agri. Statist., 36, 110-118.
Murthy, M.N. (1967): Sampling Theory and Methods. Statistical Publishing Society, Calcutta.
Olkin, I. (1958): Multivariate ratio estimation for finite population. Biometrika, 45, 154-165.
Rao, P.S.R.S. and Mudholkar, G.S. (1967): Generalized multivariate estimators for the mean of a finite population. Jour. Amer. Statist. Assoc., 62, 1009-1012.
Reddy, V.N. and Rao, T.J. (1977): Modified PPS method of estimation. Sankhya, C, 39, 185-197.
Reddy, V.N. (1973): On ratio and product methods of estimation. Sankhya, B, 35, 307-316.
Salabh (1997): Ratio method of estimation in the presence of measurement error. Jour. Ind. Soc. Agri. Statist., 52, 150-155.
Sahai, A. and Ray, S.K. (1980): An efficient estimator using auxiliary information. Metrika, 27, 271-275.
Sahai, A., Chander, R. and Mathur, A.K. (1980): An alternative multivariate product estimator. Jour. Ind. Soc. Agril. Statist., 32, 2, 6-12.
Sahai, A. and Sahai, A. (1985): On efficient use of auxiliary information. Jour. Statist. Plann. Inference, 12, 203-212.
Shukla, G.K. (1966): An alternative multivariate ratio estimate for finite population. Cal. Statist. Assoc. Bull., 15, 127-134.
Singh, M.P. (1967): Multivariate product method of estimation for finite population. Jour. Ind. Soc. Agri. Statist., 19(2), 1-10.
Srivastava, S.K. (1965): An estimator of the mean of a finite population using several auxiliary characters. Jour. Ind. Statist. Assoc., 3, 189-194.
Srivastava, S.K. (1967): An estimator using auxiliary information in sample surveys. Cal. Statist. Assoc. Bull., 16, 121-132.
Srivastava, S.K. (1971): A generalized estimator for the mean of a finite population using multiauxiliary information. Jour. Amer. Statist. Assoc., 66, 404-407.
Srivastava, S.K. (1980): A class of estimators using auxiliary information in sample surveys. Canad. Jour. Statist., 8, 253-254.
Srivastava, S.K. and Jhajj, H.S. (1983): A class of estimators of the population mean using multi-auxiliary information. Cal. Statist. Assoc. Bull., 32, 47-56.
Srivenkataramana, T. and
Tracy, D.S. (1984): Positive and negative valued auxiliary variates in surveys. Metron, xxx(3-4), 3-13.
Sud, U.C. and Srivastava, S.K. (2000): Estimation of population mean in repeat surveys in the presence of measurement errors. Jour. Ind. Soc. Agri. Statist., 53(2), 125-133.
Sukhatme, P.V., Sukhatme, B.V., Sukhatme, S. and Ashok, C. (1984): Sampling Theory of Surveys with Applications. Iowa State University Press, USA.
Tuteja, R.K. and Bahl, Shashi (1991): Multivariate product estimators. Cal. Statist. Assoc. Bull., 42, 109-115.
Tankou, V. and Dharmadlikari, S. (1989): Improvement of ratio-type estimators. Biom. Jour., 31(7), 795-802.
Walsh, J.E. (1970): Generalization of ratio estimate for population total. Sankhya, A, 32, 99-106.

CONTENTS

Foreword ……………………………………………………………………………… 4
Estimation of Weibull Shape Parameter by Shrinkage Towards An Interval Under Failure Censored Sampling, by Housila P. Singh, Sharad Saxena, Mohammad Khoshnevisan, Sarjinder Singh, Florentin Smarandache ………………………………………………………
A General Class of Estimators of Population Median Using Two Auxiliary Variables in Double Sampling, by Mohammad Khoshnevisan, Housila P. Singh, Sarjinder Singh, Florentin Smarandache ……………………………………………………… 26
A Family of Estimators of Population Mean Using Multiauxiliary Information in Presence of Measurement Errors, by Mohammad Khoshnevisan, Housila P. Singh, Florentin Smarandache ……………………………………………………… 44
