Statistics: Informed Decisions Using Data, Global 5th Edition (Pearson, ISBN 0134133536)

Sullivan’s Guide to Putting It Together

Putting It Together Sections | Objective | Page(s)
5.6 Putting It Together: Which Method Do I Use? | ❶ Determine the appropriate probability rule to use ❷ Determine the appropriate counting technique to use | 330–331; 331–333
9.5 Putting It Together: Which Method Do I Use? | ❶ Determine the appropriate confidence interval to construct | 483–484
10.6 Putting It Together: Which Method Do I Use? | ❶ Determine the appropriate hypothesis test to perform (one sample) | 538
11.5 Putting It Together: Which Method Do I Use? | ❶ Determine the appropriate hypothesis test to perform (two samples) | 595–596

Putting It Together Exercises | Skills Utilized | Section(s) Covered | Page(s)
1.2.24 Passive Smoke | Variables, observational studies, designed experiments | 1.1, 1.2 | 49
1.4.37 Comparing Sampling Methods | Simple random sampling and other sampling techniques | 1.3, 1.4 | 64
1.4.38 Thinking about Randomness | Random sampling | 1.3, 1.4 | 64
2.1.29 Online Homework | Variables, designed experiments, bar graphs | 1.1, 1.2, 1.6, 2.1 | 103
2.2.47 Time Viewing a Webpage | Graphing data | 2.2 | 121
2.2.48 Which Graphical Summary? | Choosing the best graphical summary | 2.1, 2.2 | 121
2.3.19 Rates of Return on Stocks | Relative frequency distributions, relative frequency histograms, relative frequency polygons, ogives | 2.2, 2.3 | 128
2.3.20 Shark! | Graphing data | 2.3 | 128
3.1.41 Shape, Mean, and Median | Discrete vs continuous data, histograms, shape of a distribution, mean, median, mode, bias | 1.1, 1.4, 2.2, 3.1 | 158
3.5.17 Earthquakes | Mean, median, range, standard deviation, relative frequency histogram, boxplots, outliers | 2.2, 3.1, 3.2, 3.4, 3.5 | 199
3.5.18 Paternal Smoking | Observational studies, designed experiments, lurking variables, mean, median, standard deviation, quartiles, boxplots | 1.2, 1.6, 3.1, 3.2, 3.4, 3.5 | 199–200
4.2.29 Housing Prices | Scatter diagrams, correlation, linear regression | 4.1, 4.2 | 237
4.2.30 Smoking and Birth Weight | Observational study vs designed experiment, prospective studies, scatter diagrams, linear regression, correlation vs causation, lurking variables | 1.2, 4.1, 4.2 | 237–238
4.3.31 A Tornado Model | Explanatory and response variables, scatter diagrams, correlation, least-squares regression, interpreting slope, coefficient of determination, residual plots, residual analysis | 4.1, 4.2, 4.3 | 252
4.3.32 Exam Scores | Building a linear model | 4.1, 4.2, 4.3 | 252
5.1.54 Drug Side Effects | Variables, graphical summaries of data, experiments, probability | 1.1, 1.6, 2.1, 5.1 | 289
5.2.44 Speeding Tickets | Contingency tables, marginal distributions, empirical probabilities | 4.4, 5.1 | 300
5.2.45 Red Light Cameras | Variables, relative frequency distributions, bar graphs, mean, standard deviation, probability, Simpson’s Paradox | 1.1, 2.1, 3.1, 3.2, 4.4, 5.1, 5.2 | 300–301
6.1.35 Sullivan Statistics Survey I | Mean, standard deviation, probability, probability distributions | 3.1, 3.2, 5.1, 6.1 | 355
6.2.55 Beating the Stock Market | Expected value, binomial probabilities | 6.1, 6.2 | 370
7.2.52 Birth Weights | Relative frequency distribution, histograms, mean and standard deviation from grouped data, normal probabilities | 2.1, 2.2, 3.3, 7.2 | 405
7.3.13 Demon Roller Coaster | Histograms, distribution shape, normal probability plots | 2.2, 7.3 | 410
8.1.33 Playing Roulette | Probability distributions, mean and standard deviation of a random variable, sampling distributions | 6.1, 8.1 | 434–435
9.1.47 Hand Washing | Observational studies, bias, confidence intervals | 1.2, 1.5, 9.1 | 462
9.2.49 Smoking Cessation Study | Experimental design, confidence intervals | 1.6, 9.1, 9.2 | 476
10.2.38 Lupus | Observational studies, retrospective vs prospective studies, bar graphs, confidence intervals, hypothesis testing | 1.2, 2.1, 9.1, 10.2 | 521
10.2.39 Naughty or Nice? | Experimental design, determining null and alternative hypotheses, binomial probabilities, interpreting P-values | 1.6, 6.2, 10.1, 10.2 | 521
11.1.36 Salk Vaccine | Completely randomized design, hypothesis testing | 1.6, 11.1 | 564
11.2.18 Glide Testing | Matched-pairs design, hypothesis testing | 1.6, 11.2 | 574
11.3.23 Online Homework | Completely randomized design, confounding, hypothesis testing | 1.6, 11.3 | 585–586
12.1.27 The V-2 Rocket in London | Mean of discrete data, expected value, Poisson probability distribution, goodness-of-fit | 6.1, 6.3, 12.1 | 619
12.1.28 Weldon’s Dice | Addition Rule for Disjoint Events, classical probability, goodness-of-fit | 5.1, 5.2, 12.1 | 619
12.2.21 Women, Aspirin, and Heart Attacks | Population, sample, variables, observational study vs designed experiment, experimental design, compare two proportions, chi-square test of homogeneity | 1.1, 1.2, 1.6, 11.1, 12.2 | 634
13.1.27 Psychological Profiles | Standard deviation, sampling methods, two-sample t-test, Central Limit Theorem, one-way analysis of variance | 1.4, 3.2, 8.1, 11.2, 13.1 | 662
13.2.17 Time to Complete a Degree | Observational studies, sample mean, sample standard deviation, confidence intervals for a mean, one-way analysis of variance, Tukey’s test | 1.2, 3.1, 3.2, 9.2, 13.1, 13.2 | 671
13.4.22 Students at Ease | Population, designed experiments versus observational studies, sample means, sample standard deviation, two-sample t-tests, one-way ANOVA, interaction effects, non-sampling error | 1.1, 1.2, 3.1, 3.2, 11.3, 13.1, 13.4 | 693–694
14.6.8 Purchasing Diamonds | Level of measurement, correlation matrix, multiple regression, confidence and prediction intervals | 1.1, 14.3, 14.4, 14.6 | 763

Updated for this edition is the Student Activity Workbook. The Activity Workbook includes many tactile activities for the classroom. In addition, the workbook includes activities based on statistical applets. Below is a list of the applet activities.

Applet | Section | Activity Name
Mean versus Median | 3.1 | Understanding Measures of Center
Standard Deviation | 3.2 | Exploring Standard Deviation
Correlation by Eye | 4.1 | Exploring Properties of the Linear Correlation Coefficient
Regression by Eye | 4.2 | Minimizing the Sum of the Squared Residuals
Regression Influence | 4.3 | Understanding Influential Observations
Rolling a Single Die | 5.1 | Demonstrating the Law of Large Numbers
Binomial Distribution | 6.2 | Exploring a Binomial Distribution from Multiple Perspectives
Baseball Applet | 6.2 | Using Binomial Probabilities in Baseball
Sampling Distributions | 8.1 | Sampling from Normal and Non-normal Populations
Sampling Distributions Binary | 8.2 | Describing the Distribution of the Sample Proportion
Confidence Intervals for a Proportion | 9.1 | Exploring the Effects of Confidence Level, Sample Size, and Shape I
Confidence Intervals for a Mean | 9.2 | Exploring the Effects of Confidence Level, Sample Size, and Shape II
Political Poll Applet | 10.2 | The Logic of Hypothesis Testing
Hypothesis Tests for a Proportion | 10.2 | Understanding Type I Error Rates
Cola Applet | 10.2 | Testing Cola Preferences
Hypothesis Tests for a Mean | 10.3 | Understanding Type I Error Rates
Randomization Test Warts | 11.1 | Making an Inference about Two Proportions
Randomization Test Basketball | 11.2 | Predicting Basketball Game Outcomes
Randomization Test Sentence | 11.2 | Considering the Effects of Grammar
Randomization Test Kiss | 11.2 | Analyzing Kiss Data
Randomization Test Algebra | 11.3 | Using a Randomization Test for Independent Means
Randomization Test Market | 11.3 | Comparing Bull and Bear Markets
Randomization Test Zillow | 14.1 | Using a Randomization Test for Correlation
Randomization Test Brain Size | 14.1 | Using a Randomization Test for Correlation

STATISTICS: INFORMED DECISIONS USING DATA, Fifth Edition, Global Edition
Michael Sullivan, III, Joliet Junior College
Harlow, England • London • New York • Boston • San Francisco • Toronto • Sydney • Dubai • Singapore • Hong Kong • Tokyo • Seoul • Taipei • New Delhi • Cape Town • Sao Paulo • Mexico City • Madrid • Amsterdam • Munich • Paris • Milan

Editorial Director: Chris Hoag; Editor in Chief: Deirdre Lynch; Acquisitions Editor: Patrick Barbera; Editorial Assistant: Justin Billing; Acquisitions Editor, Global Edition: Aditee Agarwal; Program Team Lead: Karen Wernholm; Program Manager: Danielle Simbajon; Project Team Lead: Peter Silvia; Project Manager: Tamela Ambush; Project Editors, Global Edition: Radhika Raheja and K.K. Neelakantan; Senior Media Producer: Vicki Dreyfus; Media Producer: Jean Choe; Media Production Manager, Global Edition: Vikram Kumar; QA Manager, Assessment Content: Marty Wright; Senior Content Developer: John Flanagan; MathXL Senior Project Manager: Bob Carroll; Field Marketing Manager: Andrew Noble; Product Marketing Manager: Tiffany Bitzel; Marketing Assistant: Jennifer Myers; Senior Technical Art Specialist: Joe Vetere; Manager, Rights and Permissions: Gina M. Cheselka; Procurement Specialist: Carol Melville; Senior Manufacturing Controller, Global Edition: Trudy Kimber; Associate Director, Art/Design: Andrea Nix; Senior Design Specialist: Heather Scott; Composition: Lumina Datamatics, Inc.; Cover Design: Lumina Datamatics, Inc.; Cover Image: Science Photo Library/Shutterstock

Microsoft and/or its respective suppliers make no representations about the suitability of the information contained in the documents and related graphics published as part of the services for any purpose. All such documents and related graphics are provided "as is" without warranty of any kind. Microsoft and/or its respective suppliers hereby disclaim all warranties and conditions with regard to this information, including all warranties and conditions of merchantability, whether express, implied or statutory, fitness for a particular purpose, title and non-infringement. In no event shall Microsoft and/or its respective suppliers be liable for any special, indirect or consequential damages or any damages whatsoever resulting from loss of use, data or profits, whether in an action of contract, negligence or other tortious action, arising out of or in connection with the use or performance of information available from the services. The documents and related graphics contained herein could include technical inaccuracies or typographical errors. Changes are periodically added to the information herein. Microsoft and/or its respective suppliers may make improvements and/or changes in the product(s) and/or the program(s) described herein at any time. Partial screen shots may be viewed in full within the software version specified. Microsoft® Windows® and Microsoft Office® are registered trademarks of the Microsoft Corporation in the U.S.A. and other countries. This book is not sponsored or endorsed by or affiliated with the Microsoft Corporation.

Acknowledgements of third-party content appear on page PC-1, which constitutes an extension of this copyright page. PEARSON, ALWAYS LEARNING, and
MYSTATLAB are exclusive trademarks owned by Pearson Education, Inc. or its affiliates in the United States and/or other countries.

Pearson Education Limited, Edinburgh Gate, Harlow, Essex CM20 2JE, England, and Associated Companies throughout the world
Visit us on the World Wide Web at: www.pearsonglobaleditions.com
© Pearson Education Limited 2018

The right of Michael Sullivan, III to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988. Authorized adaptation from the United States edition, entitled Statistics: Informed Decisions Using Data, 5th Edition, ISBN 978-0-13-413353-9, by Michael Sullivan, III, published by Pearson Education © 2017. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a license permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS. All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners.

British Library Cataloguing-in-Publication Data: A catalogue record for this book is available from the British Library.

ISBN 10: 1-292-15711-9
ISBN 13: 978-1-292-15711-5

Typeset by Lumina Datamatics. Printed and bound in Malaysia.

To My Wife Yolanda and My Children Michael, Kevin, and Marissa

Contents
Preface to the Instructor 13
Resources for Success 18
Applications Index 23

Part 1 Getting the Information You Need 29
1 Data Collection 30
1.1 Introduction to the Practice of Statistics 31
1.2 Observational Studies versus Designed Experiments 42
1.3 Simple Random Sampling 49
1.4 Other Effective Sampling Methods 56
1.5 Bias in Sampling 64
1.6 The Design of Experiments 70
Chapter Review 82
Chapter Test 85
Making an Informed Decision: What College Should I Attend? 87
Case Study: Chrysalises for Cash 87

Part 2 Descriptive Statistics 89
2 Organizing and Summarizing Data 90
2.1 Organizing Qualitative Data 91
2.2 Organizing Quantitative Data: The Popular Displays 104
2.3 Additional Displays of Quantitative Data 122
2.4 Graphical Misrepresentations of Data 129
Chapter Review 137
Chapter Test 141
Making an Informed Decision: Tables or Graphs? 143
Case Study: The Day the Sky Roared 143

3 Numerically Summarizing Data 145
3.1 Measures of Central Tendency 146
3.2 Measures of Dispersion 159
3.3 Measures of Central Tendency and Dispersion from Grouped Data 175
3.4 Measures of Position and Outliers 182
3.5 The Five-Number Summary and Boxplots 192
Chapter Review 200
Chapter Test 204
Making an Informed Decision: What Car Should I Buy? 206
Case Study: Who Was “A Mourner”? 207
4 Describing the Relation between Two Variables 208
4.1 Scatter Diagrams and Correlation 209
4.2 Least-Squares Regression 225
4.3 Diagnostics on the Least-Squares Regression Line 239
4.4 Contingency Tables and Association 253
4.5 Nonlinear Regression: Transformations (online) 4-1
Chapter Review 264
Chapter Test 270
Making an Informed Decision: Relationships among Variables on a World Scale 271
Case Study: Thomas Malthus, Population, and Subsistence 272

Part 3 Probability and Probability Distributions 273
5 Probability 274
5.1 Probability Rules 275
5.2 The Addition Rule and Complements 290
5.3 Independence and the Multiplication Rule 301
5.4 Conditional Probability and the General Multiplication Rule 307
5.5 Counting Techniques 317
5.6 Putting It Together: Which Method Do I Use? 330
5.7 Bayes’s Rule (online) 5-1
Chapter Review 335
Chapter Test 339
Making an Informed Decision: The Effects of Drinking and Driving 340
Case Study: The Case of the Body in the Bag 341

6 Discrete Probability Distributions 343
6.1 Discrete Random Variables 344
6.2 The Binomial Probability Distribution 355
6.3 The Poisson Probability Distribution 371
6.4 The Hypergeometric Probability Distribution (online) 6-1
Chapter Review 377
Chapter Test 380
Making an Informed Decision: Should We Convict? 381
Case Study: The Voyage of the St Andrew 382

7 The Normal Probability Distribution 383
7.1 Properties of the Normal Distribution 384
7.2 Applications of the Normal Distribution 394
7.3 Assessing Normality 405
7.4 The Normal Approximation to the Binomial Probability Distribution 410
Chapter Review 415
Chapter Test 418
Making an Informed Decision: Stock Picking 419
Case Study: A Tale of Blood Chemistry 419

Part 4 Inference: From Samples to Population 421
8 Sampling Distributions 422
8.1 Distribution of the Sample Mean 423
8.2 Distribution of the Sample Proportion 436
Chapter Review 443
Chapter Test 445
Making an Informed Decision: How Much Time Do You Spend in a Day … ? 446
Case Study: Sampling Distribution of the Median 446

9 Estimating the Value of a Parameter 448
9.1 Estimating a Population Proportion 449
9.2 Estimating a Population Mean 463
9.3 Estimating a Population Standard Deviation 477
9.4 Putting It Together: Which Procedure Do I Use? 483
9.5 Estimating with Bootstrapping 486
Chapter Review 493
Chapter Test 497
Making an Informed Decision: How Much Should I Spend for this House? 498
Case Study: Fire-Safe Cigarettes 499

10 Hypothesis Tests Regarding a Parameter 500
10.1 The Language of Hypothesis Testing 501
10.2 Hypothesis Tests for a Population Proportion 508
10.3 Hypothesis Tests for a Population Mean 522
10.4 Hypothesis Tests for a Population Standard Deviation 532
10.5 Putting It Together: Which Method Do I Use? 538
10.6 The Probability of a Type II Error and the Power of the Test 540
Chapter 10 Review 545
Chapter Test 549
Making an Informed Decision: Selecting a Mutual Fund 550
Case Study: How Old Is Stonehenge? 550

11 Inferences on Two Samples 552
11.1 Inference about Two Population Proportions 553
11.2 Inference about Two Means: Dependent Samples 564
11.3 Inference about Two Means: Independent Samples 575
11.4 Inference about Two Population Standard Deviations 586
11.5 Putting It Together: Which Method Do I Use? 595
Chapter 11 Review 600
Chapter Test 603
Making an Informed Decision: Which Car Should I Buy?
605 Case Study: Control in the Design of an Experiment  605 I-8 Index Percentile(s), 184–187 interpreting, 184 kth, 184 quartiles, 184–187 ranks by, 396 value of normal random variable corresponding to, 398–400 Permutations, 320–322, 326–327 computing, 321 using technology, 321, 327 definition of, 320 of distinct items, 325 with nondistinct items, 324–325 Phone-in polling, 60 Pie charts, 96–98 constructing, 96–97 drawing, 97–98 technology to draw, 115 three-dimensional, 133 using technology, 115 Placebo, 71–72 Planetary Motion, Kepler’s Law of, 251, 716 Point estimate definition of, 449 of population mean, 463 of population proportion, 449 of two population means, 569 Points, problem of, 284 Point-slope form of line, 225, B3 Poisson, Siméon Denis, 3444 Poisson probability distribution, 371–377, 422 mean and standard deviation of Poisson random variable, 373–374 Poisson probability distribution function, 371–372, 373 probabilities of Poisson random variable, 371–372 probability experiment following Poisson process, 371–374 using technology, 374 Poisson process, 371–374 computing probabilities of, 371–372 Polar area diagram, 108 Politics, statistics in, 30 Polling, phone-in, 60 Polling data, 34 Polygons, frequency, 122–123 technology to draw, 126 Polynomial regression, 745–750 finding quadratic regression equation, 745–748 using technology, 748 Pooled estimate, 555 Pooled t-statistic, 581 Pooling, 581 Population, 33 mean of (μ), 146–148 proportion of, 389 Population growth, 272 Population mean, 146–148, 176 bootstrapping to estimate, 486–493 confidence interval about, 569–570 confidence interval for, 450, 464–465, 467–469 technology for, 470 forming hypothesis about, 503–504 hypothesis testing about, 522–532 classical approach to, 522–523, 524–525, 526, 527 with large sample, 523–524, 527 P-value approach to, 522–523, 524–525, 526, 527 with small sample, 525–526 point estimate of, 463 sample size to estimate, within given margin of error, 469–470 Population proportion(s), 389 bootstrapping to estimate, 490 confidence interval for, 449–457 constructing and interpreting, 449–457 point estimate for population proportion, 449 technology for, 456, 458–459 difference between two confidence intervals, 558–559 McNemar’s Test to compare two proportions from matched-pairs data, 635–638 sample size requirements for, 560–561 using technology, 561 forming hypothesis about, 503 hypothesis testing for, 508–521 binomial probability distribution for, 516–517 classical approach using, 509–510, 511–512, 513–514, 514–515 left-tailed, 512–514 logic of, 509–511 P-value approach using, 510–512, 513–514, 514–515 technology in, 517–518 two-tailed, 514–516 using confidence interval, 516 hypothesis testing regarding two (independent samples), 554–558 classical approach to, 557 P-value approach to, 557–558 point estimate for, 449 pooled estimate of, 555 sample size determination, within specified margin of error, 457–458 sampling distribution of, 450–451 Population size, sample size and, 438–439 Population standard deviation(s), 162–163, 178 confidence intervals for constructing and interpreting, 479 critical values for chi-square distribution, 482 using technology, 481 difference between two, 586–589 critical values of F-distribution, 586–589 notation used for, 587 robustness of, 590 testing, 589–593 using technology, 591, 592–593 estimating, 477–481 confidence intervals for, 479–481 critical values for the chi-square distribution, 477–479 forming hypothesis about, 504 hypothesis testing about, 532–537 chi-square 
distribution and, 533 classical approach to, 534, 535 left-tailed test, 534–535 P-value approach to, 534, 535 using technology, 535–536 inference about two, 586–595 using technology, 593 least-squares regression model and, 704 Population variance, 166 confidence intervals for, 479–481 hypothesis testing about, 533–534 in one-way ANOVA, 647, 648 Population z-score, 183 Position, measures of, 182–192 outliers, 145, 187–188 percentiles, 184–187 quartiles, 184–187 z-scores, 183–184 Positively associated variables, 210 Power curve, 543 Power of a test, 543, 773 Practical significance definition of, 527 statistical significance vs., 526–527 Prediction interval(s) definition of, 717 for an individual response, 718–720 for multiple linear regression model, 730–732 Predictor (independent or explanatory) variable, 209 significance of, 730 Pretest-posttest (before-after) experiments, 74 Probability(ies), 273–342 Addition Rule with contingency tables, 294–295 General, 293–295 Addition Rule for Disjoint Events, 290–292 Benford’s Law and, 291–292 area as, 385 area under normal curve as, 389 at-least, 304–305 classical, 279–281, 343 Complement Rule, 295–297, 304 conditional, 307–317 definition of, 308 independence and, 313–314 using the General Multiplication Rule, 310–314 confidence interval and, 452 counting problems, 317–329 combinations for, 322–324, 326–327, 327 Multiplication Rule for, 317–320 permutations for, 320–322, 324–325, 327 without repetition, 319 defined, 275 Empirical Method to approximate, 278–279, 281–283 events and the sample space of probability experiment, 276 to identify unusual events, 277 Multiplication Rule for Independent Events, 302–304, 314 relative frequency to approximate, 278 rules of, 275–289, 305 determining appropriate, 330–331 simulation to obtain, 283–284 technology in, 285 subjective, 284 value of normal random variable corresponding to, 398, 400 Probability density function (pdf), 384–385, 389 normal, 389 Probability distribution, 422 See also Normal probability distribution binomial, 422, 516–517 cumulative, A7–A10 table, A3–A6 exponential, 476 geometric, 370 negative binomial, 370 Poisson, 422 testing claims regarding See Contingency (two-way) table(s); Goodness-of-fit test Probability experiment, 276, 278, 279, 343 binomial See Binomial experiment design of, control in, 605–606 following Poisson process, 371 Probability histograms of discrete probability distributions, 346–347 binomial, 364–366 Probability model, 277–278, 343 for random variables See Discrete probability distributions from survey data, 279 “Proof and Measurement of Association between Two Things, The” (Spearman), 807 Proportion(s) See also Population proportion(s); Sample proportion area under normal curve as, 389 homogeneity of, 627–630 definition of, 627 steps in, 628–630 value of normal random variable corresponding to, 398, 400 Prospective cohort studies, 216 Prospective studies, 46 Prothrombin time, 795 P-value approach to hypothesis testing, 510–512, 513–514, 514–515 in chi-square test Index for homogeneity of proportions, 629 for independence, 624, 625 definition of, 511 of difference between two means using independent samples, 576–577, 578–579 of difference between two population proportions from independent samples, 557–558 McNemar’s Test for, 636, 637 of difference between two population standard deviations, 590, 591, 592–593 goodness-of-fit, 610–611, 612, 614 in least-squares regression model, 707–709 of matched-pairs data, 565–566, 568 in multiple linear regression model, 730 in 
multiple regression model, 729–730 to one-sample sign test, 783 in one-way ANOVA, 654 in polynomic regression, 748 about population mean, 522–523, 524–525, 526, 527 about population standard deviation, 534, 535 in quadratic regression equation, 748 two-tailed, 517n to Wilcoxon matched-pairs signed-ranks test, 790, 791–792, 796 Q Quadratic regression model, 745–748 Qualitative data, 36, 91–103 See also Categorical data bar graphs of, 92–96, 97–98 frequency distribution of, 91–92 relative, 92 mode of, 153 pie charts of, 96–98 tables of, 91–92 Qualitative variable, 34–35 nominal or ordinal, 37 Quantitative data, 36, 109–121 cumulative frequency and relative frequency tables of, 123–124 dot plots of, 113, 114–115 frequency and relative frequency ogives of, 124 technology to draw, 126 frequency polygons of, 222–123 technology to draw, 126 histograms of, 105–106, 108–110, 114–115 mode of, 152–153 shape of distribution of, 113–114 stem-and-leaf plots of, 110–112, 114–115 split stems, 112–113 tables of, 104–108 time-series graphs of, 125 technology to draw, 126 Quantitative variable, 34–35 interval or ratio, 37 Quartiles, 184–187 checking for outliers using, 188 using technology, 185, 186, 188–189 Questionnaires, 85 ordering of questions or words in, 66 Questions ordering of, 66 type of, 67 wording of, 66, 87 Questions and Answers in Attitude Surveys (Schuman and Presser), 66 Queuing theory, 537 R Random digit dialing (RDD) telephone surveys, 65 Randomization, 72 Randomized block design, 75–77 Randomized complete block design, 671–680 analysis of variance on, 673–676, 680 analyzing, 674–676 normality requirement in, 675–676 null hypothesis in, 675 Tukey’s test for, 676, 680 using technology, 677 Randomness, runs test for, 773–780 critical values for, 774, A24 definition, 774 large-sample case, 775, 776, 777–778 notation used in, 774 performing, 773–778 small-sample case, 775, 776–777 steps in, 776 technology approach to, 776, 777, 778 test statistic for, 775 Random number generator, 52 Random numbers, table of, 51–52, A1 Random sampling, 49–56, 58, 59, 60, 62 combinations of samples, 324 definition of, 49 illustrating, 50 obtaining sample, 51–54 Random variable(s), 343, 422 binomial, 356 normal approximation to, 412–413 continuous, 344–345 probability density functions to find probabilities for, 384–385 definition of, 344 discrete, 344–355 continuous random variables distinguished from, 344–345 definition of, 344 mean of, 347–350, 351 variance and standard deviation of, 350–351 normal, 388–390 probability of, 396–397 standardizing, 394 value of, 398–401 Poisson, 371–374 probability models for See Discrete probability distributions statistics as, 422 Random Walk Down Wall Street, A (Malkiel), 777 Range, 160, 169 computing, 160 definition of, 160 interquartile (IQR), 186–187, 193, 194 technology to determine, 169 Ratio level of measurement, 37 Raw data, 91 continuous, 108 mean of variable from, 146–148 median of a variable from, 148–149 range of variable from, 160 standard deviation of variable from, 161–165 variance of variable from, 166 Récherches sur la probabilité des jugements (Poisson), 372 Regression See Least-squares regression model; Multiple regression; Polynomial regression Regression analysis, 230 Regression coefficients, 726–727 testing for significance, 730 Relation between two variables, 208–272 contingency tables and association, 253–264 conditional distribution to identify association among categorical data, 255–259 marginal distribution of a variable, 253–255 Simpson’s Paradox, 
259–260 correlation versus causation, 216–217 least-squares regression line, 225–252 coefficient of determination, 239–242, 247–248 definition of, 227 diagnostics on, 239–252 equation of, 228 finding, 225–230, 239 influential observations, 246–247 interpreting the slope and y-intercept of, 230–231 I-9 residual analysis on a regression model, 242–245 sum of squared residuals, 231–232, 247–248 linear, determining, 215–216 linear correlation coefficient, 211–216, 711 computing and interpreting, 213–215 definition of, 211 properties of, 211–213 using technology, 217–218 scatter diagrams, 209–210 definition of, 209 drawing, 209–210, 214–215, 217–218 testing for linear relation, 707–709 Relative frequency(ies) association between two categorical variables and, 255–256 probability using, 278 of qualitative data, 92 Relative frequency bar graph, 93–94 side-by-side, 94–95, 96 using horizontal bars, 95–96 Relative frequency distribution, 92 from continuous data, 106–108 cumulative, 123–124 of discrete data, 104–105 histogram of, 389 Relative frequency marginal distributions, 254–255 Relative frequency ogives, 124 Replication, 72 Research objective, identification of, 34 Residual(s), 226–227, 242 normally distributed, 706 sum of squared, 231–232 using technology, 247–248 variance of, 243–244 Residual analysis on regression model, 242–245 appropriateness of linear model, 242–243 constant error variance, 244 graphical, 245 outliers, 244–245 Residual plot, 242–243 Resistant statistic, 150–152 Response bias, 65–67 Response (dependent) variable, 42, 71, 72, 209, 256 Retrospective studies, 45 Rewards, nonresponse and, 65 Right-tailed hypothesis testing See One-tailed test(s) Risk, 172 Robustness, 468, 566, 587 of least-squares regression model, 708 of one-way ANOVA, 648 of testing difference between two population standard deviations, 587, 590 Roman letters, use of, 146 Roosevelt, Franklin D., 65 Roti Roti, Joseph L., 42 Rounding, 36, 108 Round-off error, 166 Row variable, 253, 294 Run, defined, 774 Runs test for randomness, 773–780 critical values for, 774, A24 definition, 774 large-sample case, 775, 776, 777–778 notation used in, 774 performing, 773–778 small-sample case, 775, 776–777 steps in, 776 technology approach to, 776, 777, 778 test statistic for, 775 S St Andrew (ship), 382 Sample(s) convenience, 33 correlation coefficient, 211 defined, 33 I-10 Index Sample(s) (Continued) matched-pairs (dependent), 553–554 confidence intervals for, 569–570 hypotheses testing for a population mean regarding, 565–568 McNemar’s Test to compare two proportions from, 635–638 mean (x-bar), 146–147 self-selected (voluntary response), 60 Sample mean, 146–147, 176 sample size and, 425–426 sampling distribution of, 423–436, 448 definition of, 423 describing, 426–431 mean and standard deviation of, 426 from nonnormal populations, 427–431 from normal populations, 423–427 shape of, 426 standard deviation of, 448 Sample proportion computing, 437 definition of, 436 sampling distribution of, 436–443 describing, 436–439 probabilities of, 439–440 simulation to describe distribution of, 437–439 Sample size, 61 for difference of two population proportions, 560–561 distribution shape and, 428–430 hypothesis testing and, 516–517, 523–526, 527 margin of error and, 457–458 for population mean within given margin of error, 469–470 for population proportion estimation within specified margin of error, 457–458 population size and, 438 sampling variability and, 425–426 shape of the t-distribution and, 466 t-test and, 788 Sample space, 276 
Sample standard deviation, 163–164, 178, 704–705 Sample variance, 166 Sample z-score, 183 Sampling, 49–70 acceptance, 311–312 bias in, 64–68 frame and, 64 misrepresented answers, 66 nonresponse bias, 65, 67 ordering of questions or words, 66 response bias, 65–67 sampling bias, 64–65 wording of questions, 66 cluster, 59–60, 62 convenience, 60–61 dependent, 553–554 errors in interviewer error, 65–66 goal of, 56, 64 independent, 553–554 multistage, 61 with replacement, 51 without replacement, 51 sample size considerations, 61 simple random, 49–56, 58, 59, 62 combinations of, 324 definition of, 51 illustrating, 50 obtaining sample, 51–54 stratified, 56–58, 60, 62 systematic, 58–59, 62 Sampling distribution(s), 422–447 of difference between two proportions (independent sample), 81 of difference of two means, 575–576 in least-squares regression model, 703 of median, 446–447 of population proportion, 451 of sample mean, 423–436, 448 definition of, 423 describing, 426–431 mean and standard deviation of, 426 from nonnormal populations, 427–431 from normal populations, 423–427 shape of, 426 of sample proportion, 436–443 describing, 436–439 probabilities of, 439–440 Sampling error, 67 Scatter diagrams, 209–210 definition of, 209 drawing, 209–210, 214–215, 217–218 Second-order model, complete, 740 Seed, 52 Self-selected samples, 60 Shape See Normal probability distribution Side-by-side bar graph, 94–95 using horizontal bars, 95–96 Sider, Christopher, 207 Sidereal year, 251, 716 Sigma (Σ), 147 Signed-ranks test See Wilcoxon matched-pairs signed-ranks test Significance of least-squares regression model, 702–711 level of, 505 practical definition of, 527 statistical vs., 526–527 of predictor variables, 730 of regression coefficients, 730 statistical, 509 definition of, 509 practical vs., 526–527 Type I error and, 505 Sign test, one-sample, 773, 781–785 critical values for, 782 definition, 781 large-sample case, 782, 783, 785 small-sample case, 781, 782, 783–785 steps in, 782–783 technology approach to, 783, 784, 785 test statistic for, 781–782 Sign test, one-sample See One-sample sign test Simple events, 276 Simple linear regression, 701 Simple random sample, 49–56, 58, 59, 62 combinations of, 324 definition of, 50 designed experiment and, 76–77 illustrating, 50 obtaining, 51–54 Simpson’s Paradox, 259–260 Simulation, 283–284 confidence intervals using, 490 standard normal distribution compared to t-distribution using, 465–466 using technology, 285 Single-blind experiment, 71 Skewed distributions, 113–114, 150–151 mean or median versus skewness, 150–151 quartile for, 195 Skewness, 195 coefficient of, 174 Slope, 227 definition of, B1 of least-squares regression model, 230–231 confidence interval about, 710–711, 711–712 hypothesis test, 712 of least-squares regression model, inference on, 706–709 of linear equations of line, B1–B6 calculating and interpreting, B1–B2 equation of horizontal line, B4 graphing, B2–B3 point-slope form, B3 slope and intercept form, B4–B5 Spearman, Charles, 807 Spearman rank-correlation coefficient, 807 Spearman rank-correlation test, 773, 805–811 critical values for, 807, 809, A29–A30 definition, 806 large-sample case, 809 steps for, 807 technology for, 809 test statistic for, 806 Split stems, 112–113 Sports, statistics in, 30 Spread See Standard deviation; Variance Spreadsheets See Statistical spreadsheets Standard deviation, 161–165, 169, 214 of binomial random variable, 363–364 confidence interval for, 479–481 of discrete random variables, 350–351 technology to find, 351 from 
grouped data, 178–180, 180 interpretations of, 165 outlier distortion of, 187 of Poisson random variable, 373–374 population, 162–163, 178 confidence intervals about, 479–481 hypothesis testing about, 533–537 inference about two, 586–595 least-squares regression model and, 704 sample, 163–164, 178, 704–705 of sample mean, 448 of sampling distribution of sample mean, 426 of two data sets, 165 unusual results in binomial experiment and, 366 using technology, 164–165, 179–180 Standard error, 704–706 computing, 704–706 definition of, 705 of the mean, 426 of sample mean difference, 664 Standard normal probability distribution, 394 table, A11–A13 StatCrunch, 54 area under the normal curve using, 402 normal values corresponding to, 402 backward elimination using, 761 bar graph using, 97–98 binomial probabilities using, 362, 367 bootstrapping for confidence intervals using, 490–491 boxplots using, 197 chi-square tests using, 631 coefficient of determination using, 248 combinations using, 327 confidence intervals using, 720 for population mean, 470 for population proportion, 459 for population standard deviation, 480–481, 481 for population variance, 480–481 for the slope of the true regression line using, 710–711 correlation coefficient using, 218 correlation matrix using, 733 factorials using, 327 forward selection using, 761 goodness-of-fit test, 615 in hypothesis testing about population mean, 528 about population proportion, 517–518 about population standard deviation, 536 inference of two population proportions using, 561 Kruskal–Wallis test using, 815 least-squares regression model using, 229–230, 233, 712 Mann–Whitney test using, 803 McNemar’s Test, 638 mean and median using, 149 Index multiple regression line and residual plots using, 733 normal probability plot using, 408 one-sample sign test using, 785 one-way ANOVA using, 654, 657 permutations using, 327 pie charts using, 97–98 Poisson probabilities using, 374 in polynomial regression, 748 prediction intervals using, 720 quartiles using, 189 randomized complete block design using, 680 Resample command, 489–490, 491 residual plots using, 248 scatter diagrams using, 218 simulation using, 285 Spearman rank correlation test on, 809 stepwise regression using, 761 testing claims about matched-pairs data using, 568 Tukey’s test using, 668 two-sample t-tests using dependent sampling, 570 independent sampling, 582 two-way ANOVA in, 690 Wilcoxon signed-ranks test of matched-pairs data on, 794 Statistic, 33 biased, 166 defined, 33 parameter versus, 33 as random variable, 422 resistant, 150–152 sample mean as, 146 Statistical Abstract of the United States, 283 Statistically significant, defined, 509 Statistical significance, 509 practical significance vs., 526–527 Statistical spreadsheets See also Excel frequency polygons on, 123 ogives on, 124 pie charts on, 96 Tally command, 92 time-series on, 125 Statistical thinking, 31–32 Statistics, 31–41 definition of, 31–32 descriptive, 33, 34 inferential, 33, 34, 72 mathematics vs., 32 process of, 32–34 roles in everyday life of, 30 variables in, 34–39 data vs., 36–37 discrete vs continuous, 35–36 qualitative (categorical) vs quantitative, 34–35 Status quo statement, 502, 503 Stem-and-leaf plots, 110–112 constructing, 110–112 using technology, 111–112 split stems, 112–113 using technology, 114–115 Stepwise regression, 754 building regression model using, 759–761 using technology, 761 Stonehenge, age of, 550–551 Strata, 57 Stratified sampling, 56–58, 62 Studentized range distribution, 664–665 Student’s t 
See t-distribution Sturdy, Tom, 207 Subject (experimental unit), 71, 72 in matched-pairs design, 74 Subjective probability, 284 Summers, Lawrence, 173 Sum of squared residuals, 231–232 using technology, 247–248 Sum of squares, 651, 653 due to error, 651 total (SS (Total)), 651 due to treatment (SST), 651 Survey data, probability model from, 279 Surveys American Time Use Survey, 422, 446 classroom, 67–68 Current Population Survey, 423 General Social Survey (GSS), 46 Internet, 60 random digit dialing (RDD) telephone, 65 random sample, 49–56 Suzuki, Ichiro, 353 Symmetric distributions, 113–114, 150, 165 Systematic sampling, 59–60, 62 T Tables, 143, 145 binomial, 361–362 continuous data in, 106–108 cumulative frequency and relative frequency, 123–124 discrete data in, 104–105 open-ended, 106 qualitative data in, 91–92 t-distribution, 463–467, 522 finding values, 466–467 hypothesis testing and, 522 properties of, 463–466 sample size and, 466 standard normal distribution compared to, 465–466 statement of, 464 table, A14 Technology See also Excel; MINITAB; StatCrunch; Statistical spreadsheets; TI-83/84 Plus graphing calculator ANOVA using one-way, 654–655, 656–657, 680 two-way, 686–687, 689, 690 backward elimination using, 761 binomial probabilities using, 362–363, 366–367 bootstrapping for confidence intervals using, 490–491 boxplots using, 197 chi-square tests using, 624, 625–626, 630–631 coefficient of determination using, 242, 247–248 combinations using, 323, 327 confidence intervals using, 719–720 for matched-pairs data, 569–570 for population mean, 468–469, 470 for population proportion, 456–457, 458–459 for population standard deviation, 480–481, 481 for population variance, 480–481 for slope of the true regression line, 710–711 correlation matrix using, 733 difference between two means using, 579, 581–582 difference between two population proportions using, 561 difference between two population standard deviations using, 591, 592–593, 593 exact P-values using, 612–613 factorials using, 327 five-number summary using, 193 forward selection using, 761 frequency polygons using, 123 goodness-of-fit test using, 612–613, 614–615 histogram for continuous data using, 108–110 in hypothesis testing about population mean, 528 about population proportion, 517–518 about population standard deviation, 535–536 interaction plots using, 686–687, 690 least-squares regression line using, 229–230 least-squares regression model using, 709, 711–712 linear correlation coefficient using, 214–215, 217–218 Mann–Whitney test using, 799, 801, 803 McNemar’s Test, 638 I-11 mean and standard deviation using, 179–180, 351 multiple regression equation using, 725–726 normal probability distribution using, 401–402 normal probability plot using, 408–409 normal random variable using, 398–399 one-sample sign test using, 783, 784, 785 permutations using, 321, 327 Poisson probability distribution using, 374 in polynomial regression, 748 prediction intervals using, 719–720 probabilities of a sample proportion using, 439, 440 randomized complete block design using, 680 runs test for randomness using, 776, 777, 778 scatter diagram using, 214–215, 217–218 simple random sample using, 52–54 standard error using, 706 stem-and-leaf plot using, 111–112 stepwise regression using, 761 sum of squared residuals using, 247–248 testing claims about matched-pairs data using, 568, 570 Tukey’s test using, 667, 668, 680 two-sample t-tests, independent sampling, 581–582 Wilcoxon signed-ranks test of matched-pairs data using, 790, 791–792, 794 Test, 
power of, 543 TI-83/84 Plus graphing calculator, 52 area under normal curve using, 401 binomcdf command, 413 binomial probabilities using, 366 boxplots using, 197 chi-square tests using, 630–631 coefficient of determination using, 247 combinations using, 323, 327 comparing two population standard deviations using, 593 confidence intervals using, 720 for population mean, 470 for population proportion, 458 for population standard deviation, 481 correlation coefficient using, 217 factorials using, 327 goodness-of-fit test, 614 histograms using, 114–115 in hypothesis testing about population mean, 528 about population proportion, 517 inference between two population proportions using, 561 invT feature, 467 least-squares regression line using, 229–230, 233 least-squares regression model using, 711 McNemar’s Test, 638 mean and median on, 154 mean and standard deviation using approximation, 179–180 from grouped data, 180 normal probability plot using, 408 normal random variable using, 398–399 one-way ANOVA using, 654–655, 656 permutations using, 321, 327 Poisson probabilities using, 374 prediction intervals using, 720 quartiles using, 186, 188 residual plots using, 247 role of level of confidence on margin of error using, 456–457 runs test for randomness using, 778 scatter diagrams using, 217 simulation using, 285 Spearman rank correlation test using, 809 standard deviation using, 164–165, 351 two-sample t-tests using dependent sampling, 570 independent sampling, 581 z-value for area under the standard normal curve using, 401 I-12 Index Time-series graphs, 125 technology to draw, 126 t-curves, A33–A35 t-interval, 468 Tornadoes, 143–144 Total deviation, 239–240 Total sum of squares (SS (Total)), 651 Treatment, 70 Tree diagram, 282 Trials, 356, 411 Trimmed mean, 158 t-statistic, 465, 731 marginality of, 731 pooled, 581 two-sample, 578–580, 582 Welch’s approximate, 575 t-test, 773 sample size and outliers affecting, 788 Tudor, Abby, 419 Tufte, Edward, 134 Tukey, John, 192 Tukey’s test, 663–668 cautions regarding, 667–668 confidence intervals for, 667 critical value for, 664 table of, A20–A23 goal of, 663 by hand, 665–666 multiple comparisons using, 676 for one-way ANOVA, 663–668 for randomized complete block design, 676, 677 steps in performing, 665–667 test statistic for, 664 for two-way ANOVA, 689, 690 using technology, 667, 668, 677 Twain, Mark, 238 * factorial design, 681, 687–688 * factorial design, 682 Two-factor theory of intelligence, 807 Two-tailed tests, 502, 542 of difference between two means, 565 of difference between two population proportions, 636 of difference between two population standard deviations, 590 of difference of two means: independent samples, 576–577 in least-squares regression model, 707–708 Two-way ANOVA, 680–694 crossed factors in, 682 decision rule for, 684–685 designs of, analyzing, 681–686 hypothesis testing using, 683–686 interaction effect in, 682–683 interaction plots in, 686–688, 690 main effects in, 682–683, 690 normality requirement in, 685–686 requirements for, 683 Tukey’s test for, 689, 690 using technology, 686–687, 689, 690 Two-way table See Contingency (two-way) table(s) Type I error, 504–505, 510, 580 in ANOVA, 646–647 in nonparametric statistical procedures, 773 probability of, 505, 510 Type II error, 504–505, 540–545 probability of, 505, 540–545 computing, 541–543 in Tukey’s test, 667 U Undercoverage, 64 Unexplained deviation, 240 Uniform density function, 386 Uniform probability distribution, 113–114, 383, 384–386 definition of, 384 Unimodal 
instruction, 584 United States Census, 46 Univariate data, 208 Unusual events, 277 Upper class limit, 106 “Use of Ranks in One-Criterion Variance Analysis” (Kruskal and Wallis), 812 V Value, expected, 349–350 Variability, 174 between-sample, 650 within-sample, 650 Variable(s), 34–38 See also Random variable(s) associated, 210, 214 column, 253, 294 data vs., 36–37 defined, 34 dependent (response), 209 discrete vs continuous, 35–36 dummy (indicator), 742–743 explanatory, 42, 70, 256 independent (explanatory or predictor), 209, 730 interaction between, 740 level of measurement of, 37–38 linear relation between two, determining, 215–216 lurking, 32, 44, 216, 259, 260 marginal distribution of, 253–255 modal class of, 182 multicollinearity among, 723–724, 730–732 relation between two See Relation between two variables response, 42, 71, 72 row, 253, 294 Variance, 166, 169 constant error, 244 of discrete random variables, 350–351 population, 166 of residuals, 243–244 sample, 166 technology to determine, 169 Variation, coefficient of, 174 Venn diagram, 293, 296 Venn diagrams, 290 VINDEX essay, 207 Visual Display of Quantitative Information, The (Tufte), 134 Voluntary response samples, 60 Vos Savant, Marilyn, 289, 317 W Wallis, W A., 812 Weighted mean, 177–178 Welch, Bernard Lewis, 575 Welch’s approximate t, 575 Whiskers, 194 Whitney, D Ransom, 797 Wholly Significant Difference Test See Tukey’s test Wilcoxon, Frank, 788 Wilcoxon matched-pairs signed-ranks test, 773, 788–794 critical values for, 789, A26–A28 large-sample case, 789, 790 on a single sample, 792–794 small-sample case, 789, 790, 791–792 steps for, 790 test statistic for, 789 Wilcoxon one-sample ranked-sums test, 793–794 Wiles, Andrew, 282 Within-sample variability, 650 Wording of questions, 66 Y Year, sidereal, 251, 716 y-intercept, 230–231, B4–B5 Z z-score, 183–184, 464–465, 466 comparing, 183–184 expected, 405–408 population, 183 sample, 183 for specified area to the right, 401

[Table VII, t-Distribution: critical values of t having a given area in the right tail (0.25 down to 0.0005), by degrees of freedom, with a final z row; the tabulated values are not reproduced in this extract.]

[Table V, Standard Normal Distribution: cumulative area under the standard normal curve to the left of z, for z from -3.4 to 3.4 in steps of 0.01; the tabulated values are not reproduced in this extract.]
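For readers working without the printed tables, the same entries can be computed with software. The sketch below is not part of the text; it assumes Python with the scipy library (an assumption, since the book itself uses technologies such as StatCrunch, Excel, Minitab, and the TI-83/84) and reproduces a few representative values from Table V and Table VII.

```python
# Minimal sketch (assumes Python with scipy installed) for reproducing values
# of the kind tabulated in Table V (standard normal) and Table VII (t-distribution).
from scipy import stats

# Table V: cumulative area to the left of a given z-score.
z = 1.96
area_left = stats.norm.cdf(z)           # about 0.9750

# Table VII: critical t-value with a given area in the right tail,
# e.g. area 0.025 in the right tail with 15 degrees of freedom.
t_crit = stats.t.ppf(1 - 0.025, df=15)  # about 2.131

# The z row of Table VII: as df grows, t critical values approach z critical values.
z_crit = stats.norm.ppf(1 - 0.025)      # about 1.960

print(area_left, t_crit, z_crit)
```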
0.9963 0.9973 0.9980 0.9986 0.9952 0.9964 0.9974 0.9981 0.9986 0.9987 0.9990 0.9993 0.9995 0.9997 0.9987 0.9991 0.9993 0.9995 0.9997 0.9987 0.9991 0.9994 0.9995 0.9997 0.9988 0.9991 0.9994 0.9996 0.9997 0.9988 0.9992 0.9994 0.9996 0.9997 0.9989 0.9992 0.9994 0.9996 0.9997 0.9989 0.9992 0.9994 0.9996 0.9997 0.9989 0.9992 0.9995 0.9996 0.9997 0.9990 0.9993 0.9995 0.9996 0.9997 0.9990 0.9993 0.9995 0.9997 0.9998 Tables and Formulas for Sullivan, Statistics: Informed Decisions Using Data ©2017 Pearson Education, Inc Chapter Organizing and Summarizing Data • Relative frequency = Chapter • Class midpoint: The sum of consecutive lower class limits divided by frequency sum of all frequencies Numerically Summarizing Data • Population Mean: m = gxi n • Sample Mean: x = gxi N • Population Mean from Grouped Data: m = • Population Standard Deviation: s = C g 1xi - m2 gx 2i - N = • Sample Standard Deviation S g 1xi - x = C n-1 S s = gxi fi gfi • Sample Mean from Grouped Data: x = • Range = Largest Data Value - Smallest Data Value gxi 2 • Population Standard Deviation from Grouped Data: N g 1xi - m2 fi = gfi R N gx 2i - gwi xi gwi • Weighted Mean: xw = gxi fi gfi s = gxi 2 B gxi fi 2 gfi gfi gx 2i fi - • Sample Standard Deviation from Grouped Data: n g 1xi - m2 fi = B gfi - R n-1 s = • Population Variance: s2 • Sample Variance: s2 • Population z-score: z = • Empirical Rule: If the shape of the distribution is bellshaped, then • Approximately 68% of the data lie within standard deviation of the mean • Approximately 95% of the data lie within standard deviations of the mean • Approximately 99.7% of the data lie within standard deviations of the mean • Sample z-score: z = gxi fi 2 gfi gfi - gx 2i fi - x- m s x- x s • Interquartile Range: IQR = Q3 - Q1 • Lower and Upper Fences: Lower fence = Q1 - 1.51IQR2 Upper fence = Q3 + 1.51IQR2 • Five-Number Summary Minimum, Q1, M, Q3, Maximum Chapter Describing the Relation between Two Variables • Correlation Coefficient: r = aa xi - x yi - y ba b sx sy • Residual = observed y - predicted y = y - yn • R2 = r for the least-squares regression model yn = b1x + b0 n-1 • The equation of the least-squares regression line is yn = b1x + b0, where yn is the predicted value, b1 = r # is the slope, and b0 = y - b1x is the intercept Chapter sx Probability • Addition Rule for Disjoint Events • Empirical Probability P1E2 ≈ frequency of E P1E or F = P1E2 + P1F number of trials of experiment • Addition Rule for n Disjoint Events • Classical Probability P1E2 = sy • The coefficient of determination, R2, measures the proportion of total variation in the response variable that is explained by the least-squares regression line number of ways that E can occur number of possible outcomes = N1E2 N1S2 P1E or F or G or g = P1E2 + P1F + P1G2 + g • General Addition Rule P1E or F = P1E2 + P1F - P1E and F Tables and Formulas for Sullivan, Statistics: Informed Decisions Using Data ©2017 Pearson Education, Inc • Factorial • Complement Rule P1E c = - P1E2 • Multiplication Rule for Independent Events P1E and F = P1E2 # P1F n! = n # 1n - 12 # 1n - 22 # g # # # • Permutation of n objects taken r at a time: n Pr • Multiplication Rule for n Independent Events P1E and F and G g = P1E2 # P1F2 # P1G2 # g P1E and F P1E2 N1E and F = N1E2 nC r n! g # nk! 
Chapter 6  Discrete Probability Distributions
• Mean (expected value) of a discrete random variable: \mu_X = \sum [x \cdot P(x)]
• Standard deviation of a discrete random variable: \sigma_X = \sqrt{\sum (x - \mu_X)^2 P(x)} = \sqrt{\sum [x^2 P(x)] - \mu_X^2}
• Binomial probability distribution function: P(x) = {}_nC_x \, p^x (1 - p)^{n - x}
• Mean and standard deviation of a binomial random variable: \mu_X = np and \sigma_X = \sqrt{np(1 - p)}
• Poisson probability distribution function: P(x) = \frac{(\lambda t)^x}{x!} e^{-\lambda t}, x = 0, 1, 2, ...
• Mean and standard deviation of a Poisson random variable: \mu_X = \lambda t and \sigma_X = \sqrt{\lambda t}

Chapter 7  The Normal Distribution
• Standardizing a normal random variable: z = \frac{x - \mu}{\sigma}
• Finding the score: x = \mu + z\sigma

Chapter 8  Sampling Distributions
• Mean and standard deviation of the sampling distribution of \bar{x}: \mu_{\bar{x}} = \mu and \sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}
• Sample proportion: \hat{p} = \frac{x}{n}
• Mean and standard deviation of the sampling distribution of \hat{p}: \mu_{\hat{p}} = p and \sigma_{\hat{p}} = \sqrt{\frac{p(1 - p)}{n}}

Chapter 9  Estimating the Value of a Parameter
Confidence Intervals
• A (1 - \alpha) \cdot 100% confidence interval about p: \hat{p} \pm z_{\alpha/2} \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}}
• A (1 - \alpha) \cdot 100% confidence interval about \mu: \bar{x} \pm t_{\alpha/2} \cdot \frac{s}{\sqrt{n}}  (t_{\alpha/2} is computed using n - 1 degrees of freedom)
• A (1 - \alpha) \cdot 100% confidence interval about \sigma: \sqrt{\frac{(n-1)s^2}{\chi^2_{\alpha/2}}} < \sigma < \sqrt{\frac{(n-1)s^2}{\chi^2_{1-\alpha/2}}}
Sample Size
• To estimate the population proportion with a margin of error E at a (1 - \alpha) \cdot 100% level of confidence: n = \hat{p}(1 - \hat{p}) \left(\frac{z_{\alpha/2}}{E}\right)^2 rounded up to the next integer, where \hat{p} is a prior estimate of the population proportion, or n = 0.25 \left(\frac{z_{\alpha/2}}{E}\right)^2 rounded up to the next integer when no prior estimate of p is available
• To estimate the population mean with a margin of error E at a (1 - \alpha) \cdot 100% level of confidence: n = \left(\frac{z_{\alpha/2} \cdot \sigma}{E}\right)^2 rounded up to the next integer

Chapter 10  Hypothesis Tests Regarding a Parameter
Test Statistics
• z_0 = \frac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}}    • t_0 = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}    • \chi^2_0 = \frac{(n-1)s^2}{\sigma_0^2}

Chapter 11  Inferences on Two Samples
• Test statistic comparing two population proportions (independent samples): z_0 = \frac{\hat{p}_1 - \hat{p}_2 - (p_1 - p_2)}{\sqrt{\hat{p}(1 - \hat{p})}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}, where \hat{p} = \frac{x_1 + x_2}{n_1 + n_2}
• Confidence interval for the difference of two proportions (independent samples): (\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2} \sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}
• Test statistic for comparing two proportions (dependent samples): z_0 = \frac{|f_{12} - f_{21}| - 1}{\sqrt{f_{12} + f_{21}}}
• Test statistic for matched-pairs data: t_0 = \frac{\bar{d} - \mu_d}{s_d / \sqrt{n}}, where \bar{d} is the mean and s_d is the standard deviation of the differenced data (n - 1 degrees of freedom)
• Confidence interval for matched-pairs data: \bar{d} \pm t_{\alpha/2} \cdot \frac{s_d}{\sqrt{n}}  (t_{\alpha/2} is found using n - 1 degrees of freedom)
• Test statistic comparing two means (independent sampling): t_0 = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}
• Confidence interval for the difference of two means (independent samples): (\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}  (t_{\alpha/2} is found using the smaller of n_1 - 1 or n_2 - 1 degrees of freedom)
• Test statistic for comparing two population standard deviations: F_0 = \frac{s_1^2}{s_2^2}
• Finding a critical F for the left tail: F_{1-\alpha, n_1-1, n_2-1} = \frac{1}{F_{\alpha, n_2-1, n_1-1}}

Chapter 12  Inference on Categorical Data
• Expected counts (when testing for goodness of fit): E_i = \mu_i = n p_i for i = 1, 2, ..., k
• Expected frequencies (when testing for independence or homogeneity of proportions): expected frequency = \frac{(\text{row total})(\text{column total})}{\text{table total}}
• Chi-square test statistic: \chi^2_0 = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}} = \sum \frac{(O_i - E_i)^2}{E_i}, provided all E_i \ge 1 and no more than 20% of the E_i are less than 5
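To make the goodness-of-fit recipe concrete, the sketch below is a hypothetical illustration (the counts and proportions are made up, not taken from the text); it computes the expected counts E_i = n p_i, checks the stated conditions, and evaluates the chi-square statistic from the Chapter 12 formula.

```python
# Minimal sketch (not from the text): chi-square goodness-of-fit statistic (Chapter 12).
observed = [120, 95, 85, 100]          # hypothetical observed counts for four categories
p = [0.30, 0.25, 0.20, 0.25]           # hypothesized category proportions (sum to 1)

n = sum(observed)
expected = [n * pi for pi in p]        # E_i = n * p_i

# Model requirement: all E_i >= 1 and no more than 20% of the E_i below 5
assert all(e >= 1 for e in expected)
assert sum(e < 5 for e in expected) / len(expected) <= 0.20

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1
print(f"chi-square = {chi_sq:.3f} with {df} degrees of freedom")
```

The resulting statistic would then be compared with a chi-square critical value (or converted to a P-value) with k - 1 degrees of freedom.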
Chapter 13  Comparing Three or More Means
• Test statistic for one-way ANOVA: F = \frac{MST}{MSE}, where
  Mean square due to treatment: MST = \frac{n_1(\bar{x}_1 - \bar{x})^2 + n_2(\bar{x}_2 - \bar{x})^2 + \dots + n_k(\bar{x}_k - \bar{x})^2}{k - 1}
  Mean square due to error: MSE = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2 + \dots + (n_k - 1)s_k^2}{n - k}
• Test statistic for Tukey's test after one-way ANOVA: q = \frac{(\bar{x}_2 - \bar{x}_1) - (\mu_2 - \mu_1)}{\sqrt{\frac{s^2}{2}\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}} = \frac{\bar{x}_2 - \bar{x}_1}{\sqrt{\frac{s^2}{2}\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}

Chapter 14  Inference on the Least-Squares Regression Model and Multiple Regression
• Standard error of the estimate: s_e = \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{n - 2}} = \sqrt{\frac{\sum \text{residuals}^2}{n - 2}}
• Standard error of b_1: s_{b_1} = \frac{s_e}{\sqrt{\sum (x_i - \bar{x})^2}}
• Test statistic for the slope of the least-squares regression line: t_0 = \frac{b_1 - \beta_1}{s_{b_1}}
• Confidence interval for the slope of the regression line: b_1 \pm t_{\alpha/2} \cdot \frac{s_e}{\sqrt{\sum (x_i - \bar{x})^2}}, where t_{\alpha/2} is computed with n - 2 degrees of freedom
• Confidence interval about the mean response of y, \hat{y}: \hat{y} \pm t_{\alpha/2} \cdot s_e \sqrt{\frac{1}{n} + \frac{(x^* - \bar{x})^2}{\sum (x_i - \bar{x})^2}}
• Prediction interval about an individual response, \hat{y}: \hat{y} \pm t_{\alpha/2} \cdot s_e \sqrt{1 + \frac{1}{n} + \frac{(x^* - \bar{x})^2}{\sum (x_i - \bar{x})^2}}
  where x^* is the given value of the explanatory variable and t_{\alpha/2} is the critical value with n - 2 degrees of freedom

Chapter 15  Nonparametric Statistics
• Test statistic for a runs test for randomness. Small-sample case (n_1 \le 20 and n_2 \le 20): the test statistic is r, the number of runs. Large-sample case (n_1 > 20 or n_2 > 20): z_0 = \frac{r - \mu_r}{\sigma_r}, where \mu_r = \frac{2 n_1 n_2}{n} + 1 and \sigma_r = \sqrt{\frac{2 n_1 n_2 (2 n_1 n_2 - n)}{n^2 (n - 1)}}
• Test statistic for a one-sample sign test. Small-sample case (n \le 25): for a two-tailed test (H_1: M \ne M_0), k is the smaller of the number of minus signs or plus signs; for a left-tailed test (H_1: M < M_0), k is the number of plus signs; for a right-tailed test (H_1: M > M_0), k is the number of minus signs. Large-sample case (n > 25): z_0 = \frac{(k + 0.5) - \frac{n}{2}}{\frac{\sqrt{n}}{2}}, where n is the number of minus and plus signs and k is obtained as described in the small-sample case
• Test statistic for the Wilcoxon matched-pairs signed-ranks test. Small-sample case (n \le 30): for a two-tailed test (H_1: M_D \ne 0), T is the smaller of T_+ or |T_-|; for a left-tailed test (H_1: M_D < 0), T = T_+; for a right-tailed test (H_1: M_D > 0), T = |T_-|. Large-sample case (n > 30): z_0 = \frac{T - \frac{n(n+1)}{4}}{\sqrt{\frac{n(n+1)(2n+1)}{24}}}, where T is the test statistic from the small-sample case
• Test statistic for the Mann–Whitney test. Small-sample case (n_1 \le 20 and n_2 \le 20): if S is the sum of the ranks corresponding to the sample from population X, then T = S - \frac{n_1(n_1 + 1)}{2}. Note: the value of S is always obtained by summing the ranks of the sample data that correspond to M_X in the hypothesis. Large-sample case (n_1 > 20 or n_2 > 20): z_0 = \frac{T - \frac{n_1 n_2}{2}}{\sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}}
• Test statistic for Spearman's rank correlation test: r_s = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}, where d_i is the difference in the ranks of the two observations in the ith ordered pair
• Test statistic for the Kruskal–Wallis test: H = \frac{12}{N(N+1)} \sum n_i \left(\bar{R}_i - \frac{N+1}{2}\right)^2 = \frac{12}{N(N+1)} \left[\frac{R_1^2}{n_1} + \frac{R_2^2}{n_2} + \dots + \frac{R_k^2}{n_k}\right] - 3(N + 1), where R_i is the sum of the ranks in the ith sample
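As one worked illustration of the nonparametric formulas above, the sketch below computes Spearman's rank correlation from two small lists; the data are hypothetical and the simple ranking helper is my own, not from the text.

```python
# Minimal sketch (not from the text): Spearman's rank correlation from the Chapter 15 formula.
def ranks(values):
    """Rank the data from smallest (rank 1) to largest; ties share the average of their positions."""
    ordered = sorted(values)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == x) / ordered.count(x) for x in values]

x = [86, 97, 99, 100, 101, 103, 106, 110, 112, 113]   # hypothetical observations
y = [2, 20, 28, 27, 50, 29, 7, 17, 6, 12]

rx, ry = ranks(x), ranks(y)
n = len(x)
d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))       # sum of squared rank differences
r_s = 1 - 6 * d_sq / (n * (n * n - 1))                 # Spearman's rank correlation
print(f"r_s = {r_s:.3f}")
```

The computed r_s would then be compared with the appropriate critical value for Spearman's test to decide whether an association exists between the ranked variables.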
Table I  Random Numbers
A table of five-digit random numbers arranged in 20 rows (01–20) and ten columns (column numbers 01–05 through 46–50). The row-and-column layout did not survive extraction, so the individual entries are not reproduced here; the first row begins 89392 23212 74483 36590 25956 ...

Table II  Critical Values for the Correlation Coefficient
n    CV       n    CV       n    CV       n    CV
3    0.997    10   0.632    17   0.482    24   0.404
4    0.950    11   0.602    18   0.468    25   0.396
5    0.878    12   0.576    19   0.456    26   0.388
6    0.811    13   0.553    20   0.444    27   0.381
7    0.754    14   0.532    21   0.433    28   0.374
8    0.707    15   0.514    22   0.423    29   0.367
9    0.666    16   0.497    23   0.413    30   0.361

Table VI  Critical Values for Normal Probability Plots
Sample Size, n   Critical Value    Sample Size, n   Critical Value    Sample Size, n   Critical Value
 5               0.880             13               0.932             21               0.952
 6               0.888             14               0.935             22               0.954
 7               0.898             15               0.939             23               0.956
 8               0.906             16               0.941             24               0.957
 9               0.912             17               0.944             25               0.959
10               0.918             18               0.946             30               0.960
11               0.923             19               0.949
12               0.928             20               0.951
Source: S. W. Looney and T. R. Gulledge, Jr., "Use of the Correlation Coefficient with Normal Probability Plots," American Statistician 39 (Feb. 1985): 75–79.

Table V  Standard Normal Distribution
Cumulative area to the left of z for z = −3.49 to 3.49. Rows give z to one decimal place and columns .00–.09 give the second decimal place; in the extraction the minus signs of the negative row labels were corrupted (for example, "23.4" should read −3.4). The individual table values are not reproduced here; representative entries: P(Z ≤ −3.49) = 0.0002, P(Z ≤ −1.96) = 0.0250, P(Z ≤ 0.00) = 0.5000.
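Table II is used by comparing |r| with the critical value for the sample size n. The sketch below is a hypothetical example (the data are made up, not from the text); it computes r with the Chapter 4 formula and checks it against the tabled value for n = 5, which is 0.878.

```python
# Minimal sketch (not from the text): compute r and compare with Table II's critical value for n = 5.
from statistics import mean, stdev

x = [1, 2, 3, 4, 5]            # hypothetical explanatory values
y = [2.1, 3.9, 6.2, 8.0, 9.8]  # hypothetical response values

n = len(x)
xbar, ybar, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
r = sum(((xi - xbar) / sx) * ((yi - ybar) / sy) for xi, yi in zip(x, y)) / (n - 1)

critical_value = 0.878         # Table II entry for n = 5
if abs(r) > critical_value:
    print(f"r = {r:.3f}: a linear relation exists between x and y")
else:
    print(f"r = {r:.3f}: no linear relation is evident")
```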
Confidence Interval Critical Values, z_{\alpha/2}
Level of Confidence    Critical Value, z_{\alpha/2}
0.90 or 90%            1.645
0.95 or 95%            1.96
0.98 or 98%            2.33
0.99 or 99%            2.575

Hypothesis Testing Critical Values
Level of Significance, \alpha    Left-Tailed    Right-Tailed    Two-Tailed
0.10                             -1.28          1.28            ±1.645
0.05                             -1.645         1.645           ±1.96
0.01                             -2.33          2.33            ±2.575

Table VII  t-Distribution
Critical t-values by area in the right tail (columns 0.25, 0.20, 0.15, 0.10, 0.05, 0.025, 0.02, 0.01, 0.005, 0.0025, 0.001, 0.0005) and degrees of freedom (rows df = 1–40, 50, 60, 70, 80, 90, 100, 1000, and z). The row-and-column layout did not survive extraction, so the individual entries are not reproduced here; representative entries: t_{0.05, 10} = 1.812, t_{0.025, 20} = 2.086, and the z row gives 1.645 (area 0.05) and 1.960 (area 0.025).
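The z and t critical values tabulated above can also be reproduced in software. The sketch below is an illustration only, and it assumes SciPy is installed; it uses inverse CDFs to recover (up to rounding) the tabled entries 1.645, 1.96, and 2.575, and the Table VII value t_{0.025, 20} = 2.086.

```python
# Minimal sketch (not from the text): reproduce tabled critical values with SciPy's inverse CDFs.
from scipy.stats import norm, t

# z critical values for confidence intervals / two-tailed tests: z_(alpha/2) = norm.ppf(1 - alpha/2)
for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    print(f"{conf:.0%} confidence: z_(alpha/2) = {norm.ppf(1 - alpha / 2):.3f}")

# t critical value with 20 degrees of freedom and area 0.025 in the right tail (Table VII gives 2.086)
print(f"t_(0.025, 20) = {t.ppf(1 - 0.025, 20):.3f}")
```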
... Correlation Randomization Test: Brain Size; 14.1 Using a Randomization Test for Correlation
STATISTICS: INFORMED DECISIONS USING DATA, Fifth Edition, Global Edition, by Michael Sullivan, III, Joliet Junior ... adaptation from the United States edition, entitled Statistics: Informed Decisions Using Data, 5th Edition, ISBN 978-0-13-413353-9, by Michael Sullivan, III, published by Pearson Education © 2017. All ... complete statistics processes.
• Any problem that has 12 or more observations in the data set has an icon indicating that the data set is included on the companion website (www.pearsonglobaleditions.com/



Table of Contents

  • Cover

  • Title Page

  • Copyright Page

  • Contents

  • Preface to the Instructor

  • Resources for Success

  • Applications Index

  • Part 1: Getting the Information You Need

    • Chapter 1: Data Collection

      • 1.1 Introduction to the Practice of Statistics

      • 1.2 Observational Studies versus Designed Experiments

      • 1.3 Simple Random Sampling

      • 1.4 Other Effective Sampling Methods

      • 1.5 Bias in Sampling

      • 1.6 The Design of Experiments

      • Chapter 1 Review

      • Chapter Test

      • Making an Informed Decision: What College Should I Attend?

      • Case Study: Chrysalises for Cash

      • Part 2: Descriptive Statistics

        • Chapter 2: Organizing and Summarizing Data

          • 2.1 Organizing Qualitative Data

          • 2.2 Organizing Quantitative Data: The Popular Displays

          • 2.3 Additional Displays of Quantitative Data

