Modern Mathematical Statistics with Applications (2nd Edition) by Devore




Document information

Springer Texts in Statistics
Series Editors: G. Casella, S. Fienberg, I. Olkin
For further volumes: http://www.springer.com/series/417

Modern Mathematical Statistics with Applications, Second Edition

Jay L. Devore, Statistics Department, California Polytechnic State University, San Luis Obispo, California, USA (jdevore@calpoly.edu)
Kenneth N. Berk, Department of Mathematics, Illinois State University, Normal, Illinois, USA (kberk@ilstu.edu)

ISBN 978-1-4614-0390-6
e-ISBN 978-1-4614-0391-3
DOI 10.1007/978-1-4614-0391-3
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011936004
© Springer Science+Business Media, LLC 2012

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com).

To my wife Carol, whose continuing support of my writing efforts over the years has made all the difference.

To my wife Laura, who, as a successful author, is my mentor and role model.

About the Authors

Jay L. Devore received a B.S. in Engineering Science from the University of California, Berkeley, and a Ph.D. in Statistics from Stanford University. He previously taught at the University of Florida and Oberlin College, and has had visiting positions at Stanford, Harvard, the University of Washington, New York University, and Columbia. He has been at California Polytechnic State University, San Luis Obispo, since 1977, where he was chair of the Department of Statistics for a number of years and recently achieved the exalted status of Professor Emeritus. Jay has previously authored or coauthored five other books, including Probability and Statistics for Engineering and the Sciences, which won a McGuffey Longevity Award from the Text and Academic Authors Association for demonstrated excellence over time. He is a Fellow of the American Statistical Association, has been an associate editor for both the Journal of the American Statistical Association and The American Statistician, and received the Distinguished Teaching Award from Cal Poly in 1991. His recreational interests include reading, playing tennis, traveling, and cooking and eating good food.

Kenneth N. Berk has a B.S. in Physics from Carnegie Tech (now Carnegie Mellon) and a Ph.D. in Mathematics from the University of Minnesota. He is Professor Emeritus of Mathematics at Illinois State University and a Fellow of the American Statistical Association. He founded the Software Reviews section of The American Statistician and edited it for a number of years. He served as secretary/treasurer, program chair, and chair of the Statistical Computing Section of the American Statistical Association, and he twice co-chaired the Interface Symposium, the main annual meeting in statistical computing. His published work includes papers on time series, statistical computing, regression analysis, and statistical graphics, as well as the book Data Analysis with Microsoft Excel (with Patrick Carey).

Contents

Preface x

1 Overview and Descriptive Statistics
  Introduction
  1.1 Populations and Samples
  1.2 Pictorial and Tabular Methods in Descriptive Statistics
  1.3 Measures of Location 24
  1.4 Measures of Variability 32

2 Probability 50
  Introduction 50
  2.1 Sample Spaces and Events 51
  2.2 Axioms, Interpretations, and Properties of Probability 56
  2.3 Counting Techniques 66
  2.4 Conditional Probability 74
  2.5 Independence 84

3 Discrete Random Variables and Probability Distributions 96
  Introduction 96
  3.1 Random Variables 97
  3.2 Probability Distributions for Discrete Random Variables 101
  3.3 Expected Values of Discrete Random Variables 112
  3.4 Moments and Moment Generating Functions 121
  3.5 The Binomial Probability Distribution 128
  3.6 Hypergeometric and Negative Binomial Distributions 138
  3.7 The Poisson Probability Distribution 146

4 Continuous Random Variables and Probability Distributions 158
  Introduction 158
  4.1 Probability Density Functions and Cumulative Distribution Functions 159
  4.2 Expected Values and Moment Generating Functions 171
  4.3 The Normal Distribution 179
  4.4 The Gamma Distribution and Its Relatives 194
  4.5 Other Continuous Distributions 202
  4.6 Probability Plots 210
  4.7 Transformations of a Random Variable 220

5 Joint Probability Distributions 232
  Introduction 232
  5.1 Jointly Distributed Random Variables 233
  5.2 Expected Values, Covariance, and Correlation 245
  5.3 Conditional Distributions 253
  5.4 Transformations of Random Variables 265
  5.5 Order Statistics 271

6 Statistics and Sampling Distributions 284
  Introduction 284
  6.1 Statistics and Their Distributions 285
  6.2 The Distribution of the Sample Mean 296
  6.3 The Mean, Variance, and MGF for Several Variables 306
  6.4 Distributions Based on a Normal Random Sample 315
  Appendix: Proof of the Central Limit Theorem 329

7 Point Estimation 331
  Introduction 331
  7.1 General Concepts and Criteria 332
  7.2 Methods of Point Estimation 350
  7.3 Sufficiency 361
  7.4 Information and Efficiency 371

8 Statistical Intervals Based on a Single Sample 382
  Introduction 382
  8.1 Basic Properties of Confidence Intervals 383
  8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion 391
  8.3 Intervals Based on a Normal Population Distribution 401
  8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population 409
  8.5 Bootstrap Confidence Intervals 411

9 Tests of Hypotheses Based on a Single Sample 425
  Introduction 425
  9.1 Hypotheses and Test Procedures 426
  9.2 Tests About a Population Mean 436
  9.3 Tests Concerning a Population Proportion 450
  9.4 P-Values 456
  9.5 Some Comments on Selecting a Test Procedure 467

10 Inferences Based on Two Samples 484
  Introduction 484
  10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means 485
  10.2 The Two-Sample t Test and Confidence Interval 499
  10.3 Analysis of Paired Data 509
  10.4 Inferences About Two Population Proportions 519
  10.5 Inferences About Two Population Variances 527
  10.6 Comparisons Using the Bootstrap and Permutation Methods 532

11 The Analysis of Variance 552
  Introduction 552
  11.1 Single-Factor ANOVA 553
  11.2 Multiple Comparisons in ANOVA 564
  11.3 More on Single-Factor ANOVA 572
  11.4 Two-Factor ANOVA with Kij = 1 582
  11.5 Two-Factor ANOVA with Kij > 1 597

12 Regression and Correlation 613
  Introduction 613
  12.1 The Simple Linear and Logistic Regression Models 614
  12.2 Estimating Model Parameters 624
  12.3 Inferences About the Regression Coefficient β1 640
  12.4 Inferences Concerning μY·x* and the Prediction of Future Y Values 654
  12.5 Correlation 662
  12.6 Assessing Model Adequacy 674
  12.7 Multiple Regression Analysis 682
  12.8 Regression with Matrices 705

13 Goodness-of-Fit Tests and Categorical Data Analysis 723
  Introduction 723
  13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified 724
  13.2 Goodness-of-Fit Tests for Composite Hypotheses 732
  13.3 Two-Way Contingency Tables 744

14 Alternative Approaches to Inference 758
  Introduction 758
  14.1 The Wilcoxon Signed-Rank Test 759
  14.2 The Wilcoxon Rank-Sum Test 766
  14.3 Distribution-Free Confidence Intervals 771
  14.4 Bayesian Methods 776

Appendix Tables 787
  A.1 Cumulative Binomial Probabilities 788
  A.2 Cumulative Poisson Probabilities 790
  A.3 Standard Normal Curve Areas 792
  A.4 The Incomplete Gamma Function 794
  A.5 Critical Values for t Distributions 795
  A.6 Critical Values for Chi-Squared Distributions 796
  A.7 t Curve Tail Areas 797
  A.8 Critical Values for F Distributions 799
  A.9 Critical Values for Studentized Range Distributions 805
  A.10 Chi-Squared Curve Tail Areas 806
  A.11 Critical Values for the Ryan–Joiner Test of Normality 808
  A.12 Critical Values for the Wilcoxon Signed-Rank Test 809
  A.13 Critical Values for the Wilcoxon Rank-Sum Test 810
  A.14 Critical Values for the Wilcoxon Signed-Rank Interval 811
  A.15 Critical Values for the Wilcoxon Rank-Sum Interval 812
  A.16 β Curves for t Tests 813

Answers to Odd-Numbered Exercises 814
Index 835

Preface (excerpt)

… audience and level.

Mathematical Level

The challenge for students at this level should lie with mastery of statistical concepts as well as with mathematical wizardry. Consequently, the mathematical prerequisites …

Date posted: 08/08/2018, 16:51


Table of contents

  • Cover & Table of Contents - Modern Mathematical Statistics with Applications (2nd Edition)

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s²

          • A Computing Formula for s²

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes' Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for σ²

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF's

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known σ

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown σ

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • β and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for μ1 − μ2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for μD

            • Paired Data and Two-Sample t Procedures

            • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for σ1/σ2

            • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey's Procedure

          • The Interpretation of α in Tukey's Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • β for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij = 1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij > 1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ² and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ² and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

    • fulltext(12)

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ² When the pi's Are Functions of Other Parameters

          • χ² When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ² When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

    • fulltext(13)

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

  • Chapter 1 Overview and Descriptive Statistics

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance, and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDFs

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known σ

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown σ

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • β and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for μ1 - μ2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for μD

            • Paired Data and Two-Sample t Procedures

            • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for σ1/σ2

            • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey's Procedure

          • The Interpretation of α in Tukey's Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • β for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij=1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij > 1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

    • fulltext(12)

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ² When the pi's Are Functions of Other Parameters

          • χ² When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ² When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

    • fulltext(13)

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables



          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known sigma

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown sigma

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • beta and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for M1-M2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for MD

            • Paired Data and Two-Sample t Procedures

            • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for s

            • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey´s Procedure

          • The Interpretation of α in Tukey´s Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • beta for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij=1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient rho and Inferences About Correlation

          • Other Inferences Concerning rho

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

    • fulltext(12)

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • 2 When the pi´s Are Functions of Other Parameters

          • 2 When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • 2 When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

    • fulltext(13)

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

    • fulltext

    • fulltext(1)

    • fulltext(2)

    • fulltext(3)

    • fulltext(4)

    • fulltext(5)

    • fulltext(6)

    • fulltext(7)

    • fulltext(8)

    • fulltext(9)

    • fulltext(10)

    • fulltext(11)

    • fulltext(12)

    • fulltext(13)

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

    • Appendix Tables

  • Chapter 4 Continuous Random Variables and Probability Distributions

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known sigma

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown sigma

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • beta and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for M1-M2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for MD

            • Paired Data and Two-Sample t Procedures

            • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for s

            • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey´s Procedure

          • The Interpretation of α in Tukey´s Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • beta for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij=1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ2 When the pi's Are Functions of Other Parameters

          • χ2 When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ2 When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

    • fulltext

    • fulltext(1)

    • fulltext(2)

    • fulltext(3)

    • fulltext(4)

    • fulltext(5)

    • fulltext(6)

    • fulltext(7)

    • fulltext(8)

    • fulltext(9)

    • fulltext(10)

    • fulltext(11)

    • fulltext(12)

    • fulltext(13)

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

    • Appendix Tables

  • Chapter 5 Joint Probability Distributions

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known sigma

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown sigma

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • beta and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for M1-M2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for MD

            • Paired Data and Two-Sample t Procedures

            • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for s

            • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey´s Procedure

          • The Interpretation of α in Tukey´s Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • beta for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij=1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient rho and Inferences About Correlation

          • Other Inferences Concerning rho

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

    • fulltext(12)

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • 2 When the pi´s Are Functions of Other Parameters

          • 2 When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • 2 When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

    • fulltext(13)

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

    • fulltext

    • fulltext(1)

    • fulltext(2)

    • fulltext(3)

    • fulltext(4)

    • fulltext(5)

    • fulltext(6)

    • fulltext(7)

    • fulltext(8)

    • fulltext(9)

    • fulltext(10)

    • fulltext(11)

    • fulltext(12)

    • fulltext(13)

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

    • Appendix Tables

  • Chapter 6 Statistics and Sampling Distributions

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known sigma

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown sigma

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • β and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for μ1 - μ2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for μD

          • Paired Data and Two-Sample t Procedures

          • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for σ1/σ2

        • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey's Procedure

          • The Interpretation of α in Tukey's Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • β for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij = 1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij > 1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY·x* and the Prediction of Future Y Values

          • Inferences Concerning μY·x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ2 When the pi's Are Functions of Other Parameters

          • χ2 When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ2 When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes' Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for σ2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance, and MGF of X

          • The Poisson Process

        • Bibliography

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDFs

        • Bibliography

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography


          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known σ

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown σ

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • β and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for μ1 - μ2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for μD

          • Paired Data and Two-Sample t Procedures

          • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for σ1/σ2

        • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey's Procedure

          • The Interpretation of α in Tukey's Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • β for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij = 1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij > 1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ² and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ² and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

    • fulltext(12)

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ² When the pi's Are Functions of Other Parameters

          • χ² When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ² When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

    • fulltext(13)

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables



  • Chapter 10 Inferences Based on Two Samples

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDFs

        • Bibliography

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known σ

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown σ

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • β and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for μ1 - μ2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for μD

          • Paired Data and Two-Sample t Procedures

          • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for σ1/σ2

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

        • Bibliography

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey's Procedure

          • The Interpretation of α in Tukey's Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • β for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij=1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij>1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ2 When the pi's Are Functions of Other Parameters

          • χ2 When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ2 When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

  • Chapter 13 Goodness-of-Fit Tests and Categorical Data Analysis

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known sigma

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown sigma

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • beta and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for M1-M2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for MD

            • Paired Data and Two-Sample t Procedures

            • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for s

            • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey´s Procedure

          • The Interpretation of α in Tukey´s Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • beta for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij=1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

    • fulltext(12)

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ2 When the pi's Are Functions of Other Parameters

          • χ2 When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ2 When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

    • fulltext(13)

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

  • Chapter 14 Alternative Approaches to Inference

  • Index - Modern Mathematical Statistics with Applications (2nd Edition)

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known sigma

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown sigma

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • β and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for μ1 − μ2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for μD

          • Paired Data and Two-Sample t Procedures

          • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for σ1/σ2

          • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey's Procedure

          • The Interpretation of α in Tukey's Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • β for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij = 1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij > 1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ² and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ² and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ² When the pi's Are Functions of Other Parameters

          • χ² When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ² When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

    • fulltext

    • fulltext(1)

    • fulltext(2)

    • fulltext(3)

    • fulltext(4)

    • fulltext(5)

    • fulltext(6)

    • fulltext(7)

    • fulltext(8)

    • fulltext(9)

    • fulltext(10)

    • fulltext(11)

    • fulltext(12)

    • fulltext(13)

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

    • Appendix Tables

  • Answers to Odd-Numbered Exercises - Modern Mathematical Statistics with Applications (2nd Edition)

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

    • fulltext(6)

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

    • fulltext(7)

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

    • fulltext(8)

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known sigma

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown sigma

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

    • fulltext(9)

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • beta and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for M1-M2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for MD

            • Paired Data and Two-Sample t Procedures

            • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for s

            • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

    • fulltext(10)

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey´s Procedure

          • The Interpretation of α in Tukey´s Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • beta for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij=1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

    • fulltext(11)

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ2 and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient rho and Inferences About Correlation

          • Other Inferences Concerning rho

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ2 and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

    • fulltext(12)

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • 2 When the pi´s Are Functions of Other Parameters

          • 2 When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • 2 When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

    • fulltext(13)

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

    • fulltext

    • fulltext(1)

    • fulltext(2)

    • fulltext(3)

    • fulltext(4)

    • fulltext(5)

    • fulltext(6)

    • fulltext(7)

    • fulltext(8)

    • fulltext(9)

    • fulltext(10)

    • fulltext(11)

    • fulltext(12)

    • fulltext(13)

    • back-matter

    • Index

    • Answers to Odd-Numbered Exercises

    • Appendix Tables

  • Appendix Tables - Modern Mathematical Statistics with Applications (2nd Edition)

    • GetFullPageImage

    • front-matter

      • Modern Mathematical Statistics with Applications

        • About the Authors

        • Contents

        • Preface

          • Purpose

          • Content

          • Mathematical Level

          • Recommended Coverage

          • Acknowledgments

          • A Final Thought

    • fulltext

      • Chapter 1: Overview and Descriptive Statistics

        • 1.1 Populations and Samples

          • Branches of Statistics

          • Collecting Data

        • 1.2 Pictorial and Tabular Methods in Descriptive Statistics

          • Notation

          • Stem-and-Leaf Displays

          • Dotplots

          • Histograms

          • Histogram Shapes

          • Qualitative Data

          • Multivariate Data

        • 1.3 Measures of Location

          • The Mean

          • The Median

          • Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

          • Categorical Data and Sample Proportions

        • 1.4 Measures of Variability

          • Measures of Variability for Sample Data

          • Motivation for s2

          • A Computing Formula for s2

          • Boxplots

          • Boxplots That Show Outliers

          • Comparative Boxplots

        • Bibliography

    • fulltext(1)

      • Chapter 2: Probability

        • 2.1 Sample Spaces and Events

          • The Sample Space of an Experiment

          • Events

          • Some Relations from Set Theory

        • 2.2 Axioms, Interpretations, and Properties of Probability

          • Interpreting Probability

          • More Probability Properties

          • Determining Probabilities Systematically

          • Equally Likely Outcomes

        • 2.3 Counting Techniques

          • The Product Rule for Ordered Pairs

          • Tree Diagrams

          • A More General Product Rule

          • Permutations

          • Combinations

        • 2.4 Conditional Probability

          • The Definition of Conditional Probability

          • The Multiplication Rule for P(A ∩ B)

          • Bayes´ Theorem

        • 2.5 Independence

          • P(A ∩ B) When Events Are Independent

          • Independence of More Than Two Events

        • Bibliography

    • fulltext(2)

      • Chapter 3: Discrete Random Variables and Probability Distributions

        • 3.1 Random Variables

          • Two Types of Random Variables

        • 3.2 Probability Distributions for Discrete Random Variables

          • A Parameter of a Probability Distribution

          • The Cumulative Distribution Function

          • Another View of Probability Mass Functions

        • 3.3 Expected Values of Discrete Random Variables

          • The Expected Value of X

          • The Expected Value of a Function

          • The Variance of X

          • A Shortcut Formula for sigma2

          • Rules of Variance

        • 3.4 Moments and Moment Generating Functions

        • 3.5 The Binomial Probability Distribution

          • The Binomial Random Variable and Distribution

          • Using Binomial Tables

          • The Mean and Variance of X

          • The Moment Generating Function of X

        • 3.6 Hypergeometric and Negative Binomial Distributions

          • The Hypergeometric Distribution

          • The Negative Binomial Distribution

        • 3.7 The Poisson Probability Distribution

          • The Poisson Distribution as a Limit

          • The Mean, Variance and MGF of X

          • The Poisson Process

        • Bibliography

    • fulltext(3)

      • Chapter 4: Continuous Random Variables and Probability Distributions

        • 4.1 Probability Density Functions and Cumulative Distribution Functions

          • Probability Distributions for Continuous Variables

          • The Cumulative Distribution Function

          • Using F(x) to Compute Probabilities

          • Obtaining f(x) from F(x)

          • Percentiles of a Continuous Distribution

        • 4.2 Expected Values and Moment Generating Functions

          • Expected Values

          • The Variance and Standard Deviation

          • Approximating the Mean Value and Standard Deviation

          • Moment Generating Functions

        • 4.3 The Normal Distribution

          • The Standard Normal Distribution

          • Percentiles of the Standard Normal Distribution

            • zα Notation

          • Nonstandard Normal Distributions

            • Percentiles of an Arbitrary Normal Distribution

          • The Normal Distribution and Discrete Populations

          • Approximating the Binomial Distribution

          • The Normal Moment Generating Function

        • 4.4 The Gamma Distribution and Its Relatives

          • The Family of Gamma Distributions

          • The Exponential Distribution

          • The Chi-Squared Distribution

        • 4.5 Other Continuous Distributions

          • The Weibull Distribution

          • The Lognormal Distribution

          • The Beta Distribution

        • 4.6 Probability Plots

          • Sample Percentiles

          • A Probability Plot

          • Beyond Normality

        • 4.7 Transformations of a Random Variable

        • Bibliography

    • fulltext(4)

      • Chapter 5: Joint Probability Distributions

        • 5.1 Jointly Distributed Random Variables

          • The Joint Probability Mass Function for Two Discrete Random Variables

          • The Joint Probability Density Function for Two Continuous Random Variables

          • Independent Random Variables

          • More than Two Random Variables

        • 5.2 Expected Values, Covariance, and Correlation

          • Covariance

          • Correlation

        • 5.3 Conditional Distributions

          • Independence

          • The Bivariate Normal Distribution

          • Regression to the Mean

          • The Mean and Variance Via the Conditional Mean and Variance

        • 5.4 Transformations of Random Variables

          • The Joint Distribution of Two New Random Variables

          • The Joint Distribution of More than Two New Variables

        • 5.5 Order Statistics

          • The Distributions of Yn and Y1

          • The Joint Distribution of the n Order Statistics

          • The Distribution of a Single Order Statistic

          • The Joint Distribution of Two Order Statistics

          • An Intuitive Derivation of Order Statistic PDF´s

        • Bibliography

    • fulltext(5)

      • Chapter 6: Statistics and Sampling Distributions

        • 6.1 Statistics and Their Distributions

          • Random Samples

          • Deriving the Sampling Distribution of a Statistic

          • Simulation Experiments

        • 6.2 The Distribution of the Sample Mean

          • The Case of a Normal Population Distribution

          • The Central Limit Theorem

          • Other Applications of the Central Limit Theorem

          • The Law of Large Numbers

        • 6.3 The Mean, Variance, and MGF for Several Variables

          • The Difference Between Two Random Variables

          • The Case of Normal Random Variables

          • Moment Generating Functions for Linear Combinations

        • 6.4 Distributions Based on a Normal Random Sample

          • The Chi-Squared Distribution

          • The t Distribution

          • The F Distribution

          • Summary of Relationships

        • Bibliography

        • Appendix: Proof of the Central Limit Theorem

      • Chapter 7: Point Estimation

        • 7.1 General Concepts and Criteria

          • Mean Squared Error

          • Unbiased Estimators

          • Estimators with Minimum Variance

          • More Complications

          • Reporting a Point Estimate: The Standard Error

          • The Bootstrap

        • 7.2 Methods of Point Estimation

          • The Method of Moments

          • Maximum Likelihood Estimation

          • Some Properties of MLEs

          • Large-Sample Behavior of the MLE

          • Some Complications

        • 7.3 Sufficiency

          • The Factorization Theorem

          • Jointly Sufficient Statistics

          • Minimal Sufficiency

          • Improving an Estimator

          • Further Comments

        • 7.4 Information and Efficiency

          • Information in a Random Sample

          • The Cramér-Rao Inequality

          • Large Sample Properties of the MLE

        • Bibliography

      • Chapter 8: Statistical Intervals Based on a Single Sample

        • 8.1 Basic Properties of Confidence Intervals

          • Interpreting a Confidence Level

          • Other Levels of Confidence

          • Confidence Level, Precision, and Choice of Sample Size

          • Deriving a Confidence Interval

        • 8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

          • A Large-Sample Interval for μ

          • A General Large-Sample Confidence Interval

          • A Confidence Interval for a Population Proportion

          • One-Sided Confidence Intervals (Confidence Bounds)

        • 8.3 Intervals Based on a Normal Population Distribution

          • Properties of t Distributions

          • The One-Sample t Confidence Interval

          • A Prediction Interval for a Single Future Value

          • Tolerance Intervals

          • Intervals Based on Nonnormal Population Distributions

        • 8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population

        • 8.5 Bootstrap Confidence Intervals

          • The Percentile Interval

          • A Refined Interval

          • Bootstrapping the Median

          • The Mean Versus the Median

        • Bibliography

      • Chapter 9: Tests of Hypotheses Based on a Single Sample

        • 9.1 Hypotheses and Test Procedures

          • Test Procedures

          • Errors in Hypothesis Testing

        • 9.2 Tests About a Population Mean

          • Case I: A Normal Population with Known σ

          • Case II: Large-Sample Tests

          • Case III: A Normal Population Distribution with Unknown σ

        • 9.3 Tests Concerning a Population Proportion

          • Large-Sample Tests

          • Small-Sample Tests

        • 9.4 P-Values

          • P-Values for z Tests

          • P-Values for t Tests

          • More on Interpreting P-Values

        • 9.5 Some Comments on Selecting a Test Procedure

          • Statistical Versus Practical Significance

          • Best Tests for Simple Hypotheses

          • Power and Uniformly Most Powerful Tests

          • Likelihood Ratio Tests

        • Bibliography

      • Chapter 10: Inferences Based on Two Samples

        • 10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

          • Test Procedures for Normal Populations with Known Variances

          • Using a Comparison to Identify Causality

          • β and the Choice of Sample Size

          • Large-Sample Tests

          • Confidence Intervals for μ1 - μ2

        • 10.2 The Two-Sample t Test and Confidence Interval

          • Pooled t Procedures

          • Type II Error Probabilities

        • 10.3 Analysis of Paired Data

          • The Paired t Test

          • A Confidence Interval for μD

          • Paired Data and Two-Sample t Procedures

          • Paired Versus Unpaired Experiments

        • 10.4 Inferences About Two Population Proportions

          • A Large-Sample Test Procedure

          • Type II Error Probabilities and Sample Sizes

          • A Large-Sample Confidence Interval for p1 - p2

          • Small-Sample Inferences

        • 10.5 Inferences About Two Population Variances

          • Testing Hypotheses

          • P-Values for F Tests

          • A Confidence Interval for σ1/σ2

        • Bibliography

        • 10.6 Comparisons Using the Bootstrap and Permutation Methods

          • The Bootstrap for Two Samples

          • Permutation Tests

          • Inferences About Variability

          • The Analysis of Paired Data

      • Chapter 11: The Analysis of Variance

        • 11.1 Single-Factor ANOVA

          • Notation and Assumptions

          • Sums of Squares and Mean Squares

          • The F Test

          • Computational Formulas

          • Testing for the Assumption of Equal Variances

        • 11.2 Multiple Comparisons in ANOVA

          • Tukey's Procedure

          • The Interpretation of α in Tukey's Procedure

          • Confidence Intervals for Other Parametric Functions

        • 11.3 More on Single-Factor ANOVA

          • An Alternative Description of the ANOVA Model

          • β for the F Test

          • Relationship of the F Test to the t Test

          • Single-Factor ANOVA When Sample Sizes Are Unequal

          • Multiple Comparisons When Sample Sizes Are Unequal

          • Data Transformation

          • A Random Effects Model

        • 11.4 Two-Factor ANOVA with Kij = 1

          • The Model

          • Test Procedures

          • Expected Mean Squares

          • Multiple Comparisons

          • Randomized Block Experiments

          • Models for Random Effects

        • 11.5 Two-Factor ANOVA with Kij > 1

          • Parameters for the Fixed Effects Model with Interaction

          • Notation, Model, and Analysis

          • Multiple Comparisons

          • Models with Mixed and Random Effects

        • Bibliography

      • Chapter 12: Regression and Correlation

        • 12.1 The Simple Linear and Logistic Regression Models

          • A Linear Probabilistic Model

          • The Logistic Regression Model

        • 12.2 Estimating Model Parameters

          • Estimating σ² and σ

          • The Coefficient of Determination

          • Terminology and Scope of Regression Analysis

        • 12.3 Inferences About the Regression Coefficient β1

          • A Confidence Interval for β1

          • Hypothesis-Testing Procedures

          • Regression and ANOVA

          • Fitting the Logistic Regression Model

        • 12.4 Inferences Concerning μY.x* and the Prediction of Future Y Values

          • Inferences Concerning μY.x*

          • A Prediction Interval for a Future Value of Y

        • 12.5 Correlation

          • The Sample Correlation Coefficient r

          • Properties of r

          • The Population Correlation Coefficient ρ and Inferences About Correlation

          • Other Inferences Concerning ρ

        • 12.6 Assessing Model Adequacy

          • Residuals and Standardized Residuals

          • Diagnostic Plots

          • Difficulties and Remedies

        • 12.7 Multiple Regression Analysis

          • Estimating Parameters

          • σ² and the Coefficient of Multiple Determination

          • A Model Utility Test

          • Inferences in Multiple Regression

          • Assessing Model Adequacy

          • Multiple Regression Models

          • Models with Predictors for Categorical Variables

        • 12.8 Regression with Matrices

          • The Normal Equations

          • Residuals, ANOVA, F, and R-Squared

          • Covariance Matrices

          • The Hat Matrix

        • Bibliography

      • Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis

        • 13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

          • P-Values for Chi-Squared Tests

          • χ² When the pi's Are Functions of Other Parameters

          • χ² When the Underlying Distribution Is Continuous

        • 13.2 Goodness-of-Fit Tests for Composite Hypotheses

          • χ² When Parameters Are Estimated

          • Goodness of Fit for Discrete Distributions

          • Goodness of Fit for Continuous Distributions

          • A Special Test for Normality

        • 13.3 Two-Way Contingency Tables

          • Testing for Homogeneity

          • Testing for Independence

          • Ordinal Factors and Logistic Regression

        • Bibliography

      • Chapter 14: Alternative Approaches to Inference

        • 14.1 The Wilcoxon Signed-Rank Test

          • A General Description of the Wilcoxon Signed-Rank Test

          • Paired Observations

          • Efficiency of the Wilcoxon Signed-Rank Test

        • 14.2 The Wilcoxon Rank-Sum Test

          • Development of the Test When m=3, n=4

          • General Description of the Rank-Sum Test

          • Efficiency of the Wilcoxon Rank-Sum Test

        • 14.3 Distribution-Free Confidence Intervals

          • The Wilcoxon Signed-Rank Interval

          • The Wilcoxon Rank-Sum Interval

        • 14.4 Bayesian Methods

        • Bibliography

    • Index

    • Answers to Odd-Numbered Exercises

      • Chapter 1

      • Chapter 2

      • Chapter 3

      • Chapter 4

      • Chapter 5

      • Chapter 6

      • Chapter 7

      • Chapter 8

      • Chapter 9

      • Chapter 10

      • Chapter 11

      • Chapter 12

      • Chapter 13

      • Chapter 14

    • Appendix Tables
