Handbook of Mathematics for Engineers and Scientists, Part 155


20.2.2-6. Characteristic functions. Semi-invariants.

1°. The characteristic function of a random variable X is the expectation of the random variable e^{itX}, i.e.,

f(t) = E\{e^{itX}\} = \int_{-\infty}^{+\infty} e^{itx}\,dF(x) =
\begin{cases}
\sum_j e^{itx_j} p_j & \text{in the discrete case},\\
\int_{-\infty}^{+\infty} e^{itx} p(x)\,dx & \text{in the continuous case},
\end{cases}
\qquad (20.2.2.11)

where t is a real variable ranging from -\infty to +\infty and i is the imaginary unit, i^2 = -1.

Properties of characteristic functions:
1. The cumulative distribution function is uniquely determined by the characteristic function.
2. The characteristic function is uniformly continuous on the entire real line.
3. |f(t)| \le f(0) = 1.
4. f(-t) = \overline{f(t)}.
5. f(t) is a real function if and only if the random variable X is symmetric.
6. The characteristic function of the sum of two independent random variables is equal to the product of their characteristic functions.
7. If a random variable X has a kth absolute moment, then the characteristic function of X is k times differentiable and the relation f^{(m)}(0) = i^m E\{X^m\} holds for m \le k.
8. If x_1 and x_2 are points of continuity of the cumulative distribution function F(x), then

F(x_2) - F(x_1) = \frac{1}{2\pi} \lim_{T\to\infty} \int_{-T}^{T} \frac{e^{-itx_1} - e^{-itx_2}}{it}\, f(t)\,dt. \qquad (20.2.2.12)

9. If \int_{-\infty}^{+\infty} |f(t)|\,dt < \infty, then the cumulative distribution function F(x) has a probability density function p(x), which is given by the formula

p(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{-itx} f(t)\,dt. \qquad (20.2.2.13)

2°. If the probability distribution has a kth moment \alpha_k, then there exist semi-invariants (cumulants) \tau_1, \dots, \tau_k determined by the relation

\ln f(t) = \sum_{l=1}^{k} \tau_l \frac{(it)^l}{l!} + o(t^k). \qquad (20.2.2.14)

The semi-invariants \tau_1, \dots, \tau_k can be calculated by the formulas

\tau_l = i^{-l} \left. \frac{\partial^l \ln f(t)}{\partial t^l} \right|_{t=0}.

20.2.2-7. Generating functions.

The generating function of a numerical sequence a_0, a_1, \dots is defined as the power series

\varphi(z) = \sum_{n=0}^{\infty} a_n z^n, \qquad (20.2.2.15)

where z is either a formal variable or a complex or real number.

If X is a random variable whose absolute moments of any order are finite, then the series

\sum_{n=0}^{\infty} E\{X^n\} \frac{z^n}{n!} \qquad (20.2.2.16)

is called the moment-generating function of the random variable X.

If X is a nonnegative random variable taking integer values, then the formula

\varphi_X(z) = E\{z^X\} = \sum_{n=0}^{\infty} P(X = n)\, z^n \qquad (20.2.2.17)

defines the probability-generating function, or simply the generating function, of the random variable X.

The generating function of a random variable X is related to its characteristic function f(t) by the formula

f(t) = \varphi_X(e^{it}). \qquad (20.2.2.18)

20.2.3. Main Discrete Distributions

20.2.3-1. Binomial distribution.

A random variable X has the binomial distribution with parameters (n, p) (see Fig. 20.2) if

P(X = k) = C_n^k\, p^k (1 - p)^{n-k}, \quad k = 0, 1, \dots, n, \qquad (20.2.3.1)

where 0 < p < 1, n \ge 1.

Figure 20.2. Binomial distribution for p = 0.55, n = 6.

The cumulative distribution function, the probability-generating function, and the characteristic function have the form

F(x) =
\begin{cases}
0 & \text{for } x < 0,\\
\sum_{k=0}^{m} C_n^k\, p^k (1 - p)^{n-k} & \text{for } m \le x < m + 1 \ (m = 0, 1, \dots, n - 1),\\
1 & \text{for } x \ge n,
\end{cases}
\qquad
\varphi_X(z) = (1 - p + pz)^n, \quad f(t) = (1 - p + p e^{it})^n, \qquad (20.2.3.2)

and the numerical characteristics are given by the formulas

E\{X\} = np, \quad \operatorname{Var}\{X\} = np(1 - p), \quad \gamma_1 = \frac{1 - 2p}{\sqrt{np(1 - p)}}, \quad \gamma_2 = \frac{1 - 6p(1 - p)}{np(1 - p)}.

The binomial distribution is a model of random experiments consisting of n independent identical Bernoulli trials.
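The following short Python sketch (not part of the handbook text; the parameters n = 6 and p = 0.55 are taken from Fig. 20.2) numerically checks formulas (20.2.3.1) and (20.2.3.2): the probabilities sum to one, the mean and variance agree with np and np(1 - p), and the derivative of the characteristic function at t = 0 reproduces i E{X}, in accordance with property 7 of Paragraph 20.2.2-6.

import cmath
from math import comb

n, p = 6, 0.55                       # parameters of Fig. 20.2

# pmf of the binomial distribution, formula (20.2.3.1)
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))

def f(t):
    # characteristic function (1 - p + p e^{it})^n from (20.2.3.2)
    return (1 - p + p * cmath.exp(1j * t)) ** n

h = 1e-6
deriv = (f(h) - f(-h)) / (2 * h)     # central difference approximating f'(0)

print(sum(pmf))                      # 1.0 up to rounding
print(mean, n * p)                   # both equal 3.3
print(var, n * p * (1 - p))          # both equal 1.485
print(deriv.imag, n * p)             # f'(0) = i E{X}, so the imaginary part is np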
If X_1, \dots, X_n are independent random variables, each of which takes only the two values 1 and 0 with probabilities p and q = 1 - p, respectively, then the random variable X = \sum_{k=1}^{n} X_k has the binomial distribution with parameters (n, p).

The binomial distribution is asymptotically normal with parameters (np, np(1 - p)) as n \to \infty (the de Moivre–Laplace limit theorem, which is a special case of the central limit theorem; see Paragraph 20.3.2-2); specifically,

P(X = k) = C_n^k\, p^k (1 - p)^{n-k} \approx \frac{1}{\sqrt{np(1 - p)}}\, \varphi\!\left(\frac{k - np}{\sqrt{np(1 - p)}}\right) \quad \text{as } \frac{(k - np)^3}{[np(1 - p)]^2} \to 0,

P(k_1 \le X \le k_2) \approx \Phi\!\left(\frac{k_2 - np}{\sqrt{np(1 - p)}}\right) - \Phi\!\left(\frac{k_1 - np}{\sqrt{np(1 - p)}}\right) \quad \text{as } \frac{(k_{1,2} - np)^3}{[np(1 - p)]^2} \to 0,

where \varphi(x) and \Phi(x) are the probability density function and the cumulative distribution function of the standard normal distribution (see Paragraph 20.2.4-3).

20.2.3-2. Geometric distribution.

A random variable X has the geometric distribution with parameter p (0 < p < 1) (see Fig. 20.3) if

P(X = k) = p(1 - p)^k, \quad k = 0, 1, 2, \dots \qquad (20.2.3.3)

Figure 20.3. Geometric distribution for p = 0.55.

The probability-generating function and the characteristic function have the form

\varphi_X(z) = p[1 - (1 - p)z]^{-1}, \quad f(t) = p[1 - (1 - p)e^{it}]^{-1},

and the numerical characteristics can be calculated by the formulas

E\{X\} = \frac{1 - p}{p}, \quad \alpha_2 = \frac{(1 - p)(2 - p)}{p^2}, \quad \operatorname{Var}\{X\} = \frac{1 - p}{p^2}, \quad \gamma_1 = \frac{2 - p}{\sqrt{1 - p}}, \quad \gamma_2 = 6 + \frac{p^2}{1 - p}.

The geometric distribution describes a random variable X equal to the number of failures before the first success in a sequence of Bernoulli trials with probability p of success in each trial. The geometric distribution is the only discrete distribution that is memoryless, i.e., satisfies the relation P(X > s + t \mid X > t) = P(X > s) for all s, t > 0. This property permits one to view the geometric distribution as the discrete analog of the exponential distribution.

20.2.3-3. Hypergeometric distribution.

A random variable X has the hypergeometric distribution with parameters (N, p, n) (see Fig. 20.4) if

P(X = k) = \frac{C_{Np}^k\, C_{N(1-p)}^{n-k}}{C_N^n}, \quad k = 0, 1, \dots, n, \qquad (20.2.3.4)

where 0 < p < 1, 0 \le n \le N, N > 0.

Figure 20.4. Hypergeometric distribution for p = 0.5, N = 10, n = 4.

The numerical characteristics are given by the formulas

E\{X\} = np, \quad \operatorname{Var}\{X\} = \frac{N - n}{N - 1}\, np(1 - p).

A typical scheme in which the hypergeometric distribution arises is as follows: n elements are randomly drawn without replacement from a population of N elements containing exactly Np elements of type I and N(1 - p) elements of type II. The number of elements of type I in the sample is described by the hypergeometric distribution. If n \ll N (in practice, n < 0.1N), then

\frac{C_{Np}^k\, C_{N(1-p)}^{n-k}}{C_N^n} \approx C_n^k\, p^k (1 - p)^{n-k};

i.e., the hypergeometric distribution tends to the binomial distribution.

20.2.3-4. Poisson distribution.

A random variable X has the Poisson distribution with parameter \lambda (\lambda > 0) (see Fig. 20.5) if

P(X = k) = \frac{\lambda^k}{k!}\, e^{-\lambda}, \quad k = 0, 1, 2, \dots \qquad (20.2.3.5)

Figure 20.5. Poisson distribution for \lambda = 2.

The cumulative distribution function of the Poisson distribution at the points k = 0, 1, 2, \dots is given by the formula

F(k) = \frac{1}{k!} \int_{\lambda}^{\infty} y^k e^{-y}\,dy = 1 - S_{k+1}(\lambda),

where S_{k+1}(\lambda) is the value at the point \lambda of the cumulative distribution function of the gamma distribution with parameter k + 1. In particular, P(X = k) = S_k(\lambda) - S_{k+1}(\lambda).
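As a numerical illustration of the last identity for F(k) (a sketch added here, not from the handbook; λ = 2 as in Fig. 20.5, and the quadrature bounds are chosen for convenience), the cumulative Poisson probabilities can be compared with a direct numerical evaluation of the incomplete gamma integral:

from math import exp, factorial

lam = 2.0                                        # parameter of Fig. 20.5

def poisson_cdf(k):
    # F(k) = sum_{j<=k} lambda^j e^{-lambda} / j!, the cumulative sum of (20.2.3.5)
    return sum(lam**j * exp(-lam) / factorial(j) for j in range(k + 1))

def tail_integral(k, upper=60.0, steps=200_000):
    # (1/k!) * integral from lambda to `upper` of y^k e^{-y} dy, trapezoid rule;
    # the tail beyond `upper` is negligible for these parameters
    h = (upper - lam) / steps
    ys = [lam + i * h for i in range(steps + 1)]
    vals = [y**k * exp(-y) for y in ys]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1]) / factorial(k)

for k in range(6):
    print(k, round(poisson_cdf(k), 6), round(tail_integral(k), 6))   # columns agree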
The sum of independent random variables X_1, \dots, X_n obeying Poisson distributions with parameters \lambda_1, \dots, \lambda_n, respectively, has the Poisson distribution with parameter \lambda_1 + \dots + \lambda_n.

The probability-generating function and the characteristic function have the form

\varphi_X(z) = e^{\lambda(z - 1)}, \quad f(t) = e^{\lambda(e^{it} - 1)},

and the numerical characteristics are given by the expressions

E\{X\} = \lambda, \quad \operatorname{Var}\{X\} = \lambda, \quad \alpha_2 = \lambda^2 + \lambda, \quad \alpha_3 = \lambda(\lambda^2 + 3\lambda + 1), \quad \alpha_4 = \lambda(\lambda^3 + 6\lambda^2 + 7\lambda + 1),
\mu_3 = \lambda, \quad \mu_4 = 3\lambda^2 + \lambda, \quad \gamma_1 = \lambda^{-1/2}, \quad \gamma_2 = \lambda^{-1}.

The Poisson distribution is the limit distribution for many discrete distributions, such as the hypergeometric distribution, the binomial distribution, the negative binomial distribution, distributions arising in problems of arrangement of particles in cells, etc. The Poisson distribution is an acceptable model for describing the random number of occurrences of certain events on a given time interval or in a given domain in space.

20.2.3-5. Negative binomial distribution.

A random variable X has the negative binomial distribution with parameters (r, p) (see Fig. 20.6) if

P(X = k) = C_{r+k-1}^{r-1}\, p^r (1 - p)^k, \quad k = 0, 1, 2, \dots, \qquad (20.2.3.6)

where 0 < p < 1, r > 0.

Figure 20.6. Negative binomial distribution for p = 0.8, r = 6.

The probability-generating function and the characteristic function have the form

\varphi_X(z) = \left[\frac{p}{1 - (1 - p)z}\right]^r, \quad f(t) = \left[\frac{p}{1 - (1 - p)e^{it}}\right]^r,

and the numerical characteristics can be calculated by the formulas

E\{X\} = \frac{r(1 - p)}{p}, \quad \operatorname{Var}\{X\} = \frac{r(1 - p)}{p^2}, \quad \gamma_1 = \frac{2 - p}{\sqrt{r(1 - p)}}, \quad \gamma_2 = \frac{6}{r} + \frac{p^2}{r(1 - p)}.

The negative binomial distribution describes the number X of failures before the rth success in a Bernoulli process with probability p of success on each trial. For r = 1, the negative binomial distribution coincides with the geometric distribution.

20.2.4. Continuous Distributions

20.2.4-1. Uniform distribution.

A random variable X is uniformly distributed on the interval [a, b] (Fig. 20.7a) if

p(x) = \frac{1}{b - a} \quad \text{for } x \in [a, b]. \qquad (20.2.4.1)

Figure 20.7. Probability density (a) and cumulative distribution (b) functions of the uniform distribution.

The cumulative distribution function (see Fig. 20.7b) and the characteristic function have the form

F(x) =
\begin{cases}
0 & \text{for } x \le a,\\
\dfrac{x - a}{b - a} & \text{for } a < x \le b,\\
1 & \text{for } x > b,
\end{cases}
\qquad
f(t) = \frac{e^{itb} - e^{ita}}{t(b - a)}, \qquad (20.2.4.2)

and the numerical characteristics are given by the expressions

E\{X\} = \frac{a + b}{2}, \quad \operatorname{Var}\{X\} = \frac{(b - a)^2}{12}, \quad \gamma_1 = 0, \quad \gamma_2 = -1.2, \quad \operatorname{Med}\{X\} = \frac{a + b}{2}.

The uniform distribution does not have a mode.

20.2.4-2. Exponential distribution.

A random variable X has the exponential distribution with parameter \lambda > 0 (Fig. 20.8a) if

p(x) = \lambda e^{-\lambda x}, \quad x > 0. \qquad (20.2.4.3)

Figure 20.8. Probability density (a) and cumulative distribution (b) functions of the exponential distribution for \lambda = 2.

The cumulative distribution function (see Fig. 20.8b) and the characteristic function have the form

F(x) =
\begin{cases}
1 - e^{-\lambda x} & \text{for } x > 0,\\
0 & \text{for } x \le 0,
\end{cases}
\qquad
f(t) = \left(1 - \frac{it}{\lambda}\right)^{-1}, \qquad (20.2.4.4)

and the numerical characteristics are given by the formulas

E\{X\} = \frac{1}{\lambda}, \quad \alpha_2 = \frac{2}{\lambda^2}, \quad \operatorname{Med}\{X\} = \frac{\ln 2}{\lambda}, \quad \operatorname{Var}\{X\} = \frac{1}{\lambda^2}, \quad \gamma_1 = 2, \quad \gamma_2 = 6.

The exponential distribution is the continuous analog of the geometric distribution and is memoryless: P(X > t + s \mid X > s) = P(X > t).
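A short Monte Carlo sketch of the memoryless property (illustrative only, not from the handbook; the values of λ, s, and t are assumed): both the conditional and the unconditional tail probabilities should be close to e^{-λt}.

import random
from math import exp

random.seed(1)
lam, s, t = 2.0, 0.5, 0.7                         # assumed illustrative values
samples = [random.expovariate(lam) for _ in range(500_000)]

survivors = [x for x in samples if x > s]         # condition on the event {X > s}
conditional = sum(x > s + t for x in survivors) / len(survivors)
unconditional = sum(x > t for x in samples) / len(samples)

print(conditional, unconditional, exp(-lam * t))  # all three are close to 0.2466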
The exponential distribution is closely related to Poisson processes: if a flow of events is described by a Poisson process, then the time intervals between successive events are independent random variables obeying the exponential distribution. The exponential distribution is used in queuing theory and in reliability theory.
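The following simulation sketch (illustrative, not from the handbook; λ and T are assumed) builds event times from independent exponential gaps and checks that the number of events in [0, T] is approximately Poisson with parameter λT.

import random
from math import exp, factorial

random.seed(2)
lam, T, trials = 1.5, 4.0, 50_000                # assumed illustrative values

def count_events():
    # accumulate Exp(lambda) gaps until time T is exceeded; return the event count
    time, n = 0.0, 0
    while True:
        time += random.expovariate(lam)
        if time > T:
            return n
        n += 1

counts = [count_events() for _ in range(trials)]
print(sum(counts) / trials, lam * T)             # empirical mean vs. lambda*T = 6.0

for k in range(4, 9):
    empirical = counts.count(k) / trials
    poisson_pmf = (lam * T) ** k * exp(-lam * T) / factorial(k)
    print(k, round(empirical, 4), round(poisson_pmf, 4))   # frequencies match the pmf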
