Experimental error in physics

Experimental Error in Physics: A Few Brief Remarks… [What Every Physicist SHOULD Know]
L. Pinsky, July 2004 (© 2004 L. Pinsky)

Outline of This Talk…
• Overview
• Systematic and Statistical Errors
• Kinds of Statistics
• The Interval Distribution
• Drawing Conclusions

What you should take away
• In Science, it is NOT the value you measure, BUT how well you know that value that really counts…
• Appreciation of the accuracy of the information is what distinguishes REAL Science from the rest of human speculation about nature…
• …And, remember, NOT all measurements are statistical, BUT all observations have some sort of associated confidence level…

Almighty Chance…
• Repeatability is the cornerstone of Science! …BUT no observation or measurement is truly repeatable!
• The challenge is to understand the differences between successive measurements…
• Some observations differ because they are genuinely unique (e.g. supernovae, individual human behavior, etc.).
• Some are different because of RANDOM CHANCE.
• Most real measurements are a combination of BOTH… (Even the most careful preparation cannot guarantee identical initial conditions…)

The Experimentalist's Goal
• The experimental scientist seeks to observe nature and to deduce from those observations generalizations about the Universe.
• The generalizations are typically compared with representations of nature (theoretical models) to gain insight into how well those representations mimic nature's behavior…

Tools of the Trade
• The techniques associated with STATISTICS are employed to focus the analysis in cases where RANDOM CHANCE is present in the measurement (e.g. measuring the individual energy levels in an atom).
• Statistical analysis is generally combined with a more global attempt to place the significance of the observation within the broader context of similar or related phenomena (e.g. fitting the measured energy levels into a quantum mechanical theory of atomic structure…).
• What we typically want to know is whether, and to what extent, the measurements support or contradict the theory…

Blunders
• These can be either Explicit or Implicit:
  - Explicit: making an overt mistake, i.e. intending to do the right thing, but accidentally doing something else and not realizing it (e.g. using a mislabeled reagent bottle…).
  - Implicit: thinking some principle is true which is not, and proceeding on that assumption (e.g. believing that no pathogens can survive 100 °C).
• Blunders can only be guarded against by vigilance, and are NOT reflected in error bars when the data are presented…
• Confidence against explicit blunders can be enhanced by independent repetition.
• Protection against implicit blunders can be enhanced by carefully considering (and disclosing) the details regarding ALL procedures and assumptions…

Systematic Error
• Generally, this includes all of the KNOWN uncertainties that are related to the nature of the observations being made:
  - Instrumental limitations (e.g. resolution or calibration)
  - Human limitations (e.g. gauge-reading ability)
  - Knowledge limitations (e.g. the accuracy with which needed fundamental constants are known)
• Usually, systematic error is quoted independently from statistical error. However, like all combinations of errors, effects that are independent of one another can be added in "quadrature": E_total = (E1² + E2²)^(1/2).
• Increased statistics can NEVER reduce systematic error!
• Even non-statistical measurements are subject to blunders and systematic error…
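A minimal numerical sketch of the quadrature rule above (my own illustration in Python; the error values are invented for the example, not taken from the talk):

```python
import math

def add_in_quadrature(*errors):
    """Combine independent, uncorrelated error contributions."""
    return math.sqrt(sum(e ** 2 for e in errors))

# Hypothetical contributions (in %): calibration, resolution,
# and the uncertainty of a needed fundamental constant.
systematic = add_in_quadrature(0.5, 0.3, 0.2)
statistical = 0.4
total = add_in_quadrature(systematic, statistical)
print(f"systematic = {systematic:.2f}%, total = {total:.2f}%")
```

Note that only the statistical term shrinks with more data; the systematic term does not.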
Quantitative v. Categorical Statistics
• Quantitative: the measured variable takes NUMERICAL values, so that differences and averages between the values make sense…
  - Continuous: the variable is a continuous real number (e.g. kinematic elastic scattering angles).
  - Discrete: the variable can take on only discrete "counting" values (e.g. demand as a function of price in Economics).
• Categorical: the variable can only have an exclusive value (e.g. your country of residence), and arithmetic operations have no meaning with respect to the categories…

Getting the Right Parent Distribution
• Generally, the issue is to find the proper PARENT DISTRIBUTION, i.e. the probability distribution that is actually responsible for the data…
• In most cases the PARENT DISTRIBUTION is complex and unknown…
• …BUT, in most cases it may be reasonably approximated by one of the well-known distribution functions…

The Binomial Distribution
• When the experimental variance exceeds the variance expected from the parent distribution, it is called "overdispersion" and is usually due to differences in the conditions from one measurement to the next…
• The distribution of counts within an INDIVIDUAL category over multiple experiments is Poisson!
• When N is large and π is small (such that µ = Nπ stays moderate), the Binomial Distribution goes over into the Poisson Distribution.

The Central Limit Theorem
• Any distribution that is the sum of many SMALL effects, each due to some RANDOM DISTRIBUTION, will tend towards a Normal Distribution in the limit of large statistics, REGARDLESS of the nature of the individual random distributions!

Other Distributions to Know
• Lorentzian (Cauchy) Distribution: used to describe resonant behavior:
  P(y) = (Γ/2) / {π [(y − µ)² + (Γ/2)²]},  Γ = FWHM.
  Here π means 3.14159… and σ has no meaning! Instead, the FWHM is the relevant parameter!
• Landau Distribution: in particle physics…
• Boltzmann Distribution: in thermodynamics…
• Bose-Einstein Distribution: in QM…
• Fermi-Dirac Distribution: in QM…
• …and others…

Maximum Likelihood
• The "likelihood" is simply the product of the probabilities for each individual outcome in a measurement, i.e. an estimate of the total probability of the observed measurement being made.
• If one has a candidate distribution that is a function of some parameter, then the value of that parameter that maximizes the likelihood of the observation is the best estimate of that parameter's value.
• The catch is that one has to know the correct candidate distribution for this to have any meaning…
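A minimal maximum-likelihood sketch (my own illustration, assuming NumPy is available; the data are simulated, and the exponential form is just a convenient candidate distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.5, size=1000)   # simulated data, "true" tau = 2.5

# Candidate parent distribution: P(t) = (1/tau) exp(-t/tau).
# Log-likelihood of the whole data set as a function of the parameter tau.
def log_likelihood(tau, t):
    return np.sum(-np.log(tau) - t / tau)

taus = np.linspace(0.5, 5.0, 1000)
ll = np.array([log_likelihood(tau, data) for tau in taus])
tau_hat = taus[np.argmax(ll)]

# For the exponential the maximum-likelihood estimate is just the sample mean,
# which provides a cross-check on the grid search.
print(f"grid-search MLE: {tau_hat:.3f}, analytic MLE (mean): {data.mean():.3f}")
```

The estimate is only as good as the candidate distribution: feeding the same data a wrong functional form still yields a "best" parameter, just a meaningless one.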
Drawing Conclusions
• Rejecting hypotheses:
  - Relatively easy if the form of the PARENT distribution is known: just show a low probability of fit.
  - The χ² technique is perhaps the best-known method.
  - A more general technique is the F-test, which allows one to separate the deviation of the data from the estimated distribution AND the discrepancy between the estimated distribution and the PARENT DISTRIBUTION.

Comparing Alternatives
• This is much tougher… Where χ² tests favor one hypothesis over another, but not decisively, one must take great care: it is very easy to be fooled into rejecting the correct alternative…
• Generally, a test is based on some statistic (e.g. χ²) that estimates some parameter in a hypothesis. Values of the estimate far from that specified by the hypothesis give evidence against it…
• One can ask, assuming the hypothesis is correct, for the probability of getting a set of measurements at least as far from its prediction as the one actually obtained. The lower that probability, the less the confidence in the hypothesis being correct…

Fitting Data
• Fitting to WHAT???
  - Phenomenological (generic): linear, log-linear, polynomial…
  - Hypothesis-driven: the functional form comes from the hypothesis.
• Least-squares paradigm: minimizing the mean square error gives the best estimate of the fit…

Errors in Comparing Hypotheses: Choice of Tests
• Type I error: rejecting a TRUE hypothesis.
  - The significance level of any fixed-level confidence test is the probability of a Type I error. This is the more serious error, so choose a strict test.
• Type II error: accepting a FALSE hypothesis.
  - The power of a fixed-level test against a particular alternative is 1 minus the probability of a Type II error. Choose a test that makes the probability of a Type II error as small as possible.

The INTERVAL DISTRIBUTION
• This is just an aside that needs mentioning: for RANDOMLY OCCURRING EVENTS, the distribution of TIME INTERVALS between successive events is given by I(t) = (1/τ) e^(−t/τ).
• The mean interval is τ, but I(t) is largest at t = 0; in words, the most likely interval is zero. Thus, there are far more short intervals than long ones!
• BEWARE: as a result, truly RANDOM EVENTS APPEAR, TO THE NAÏVE EYE, TO "CLUSTER"!!!
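A minimal simulation sketch of the clustering point above (my own illustration, assuming NumPy; τ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 1.0                                    # mean interval between random events
intervals = rng.exponential(scale=tau, size=10_000)

# The mean interval is tau, yet most intervals are much shorter than tau:
frac_short = np.mean(intervals < tau)
print(f"mean interval = {intervals.mean():.2f}")
print(f"fraction of intervals shorter than tau = {frac_short:.2f}")   # ~0.63

# The event times themselves: occasional gaps of several tau separate
# runs of closely spaced events, even though nothing non-random is going on.
times = np.cumsum(intervals[:20])
print(np.round(times, 2))
```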
Time Series Analysis
• Plotting data taken at fixed time intervals is called a time series (e.g. the closing Dow Jones Average each day).
• If nothing changes in the underlying PARENT DISTRIBUTION, then Poisson statistics apply…
• BUT, in the real world one normally sees changes from period to period. Without specific hints as to causes, one can look for TRENDS and CYCLES or "SEASONS."
• Usually, the problem is filtering these out from large background fluctuations…

Bayesian Statistics
• A field of statistics that takes into account the degree of "belief" in a hypothesis:
  P(H|d) = P(d|H) P(H) / P(d), with P(d) = Σi P(d|Hi) P(Hi) for multiple hypotheses.
• Can be useful for non-repeatable events.
• Can be applied to multiple sets of prior knowledge taken under differing conditions.
• Bayes' Theorem: P(B|A) P(A) = P(A|B) P(B), where P(A) and P(B) are unconditional, or a priori, probabilities…

Propagation of Error
• Where x = f(u, v), from the first term of the Taylor series expansion:
  Δf(u, v) ≈ (∂f/∂u) Δu + (∂f/∂v) Δv.
• More generally:
  σ_x² = σ_u² (∂x/∂u)² + σ_v² (∂x/∂v)² + … + 2 σ_uv (∂x/∂u)(∂x/∂v),
  where σ_uv is the covariance…
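A minimal sketch of first-order error propagation for the hypothetical case x = u·v, with invented numbers and an assumed zero covariance between u and v (my own illustration, assuming NumPy):

```python
import numpy as np

# Hypothetical measurement: x = u * v with independent errors on u and v.
u, sigma_u = 10.0, 0.2
v, sigma_v = 3.0, 0.1

# Partial derivatives of x = u*v.
dxdu, dxdv = v, u

# First-order propagation, covariance term set to zero.
sigma_x = np.sqrt(sigma_u**2 * dxdu**2 + sigma_v**2 * dxdv**2)
print(f"x = {u * v:.1f} +/- {sigma_x:.2f}")

# Cross-check with a simple Monte Carlo draw from the assumed Gaussian errors.
rng = np.random.default_rng(2)
samples = rng.normal(u, sigma_u, 100_000) * rng.normal(v, sigma_v, 100_000)
print(f"Monte Carlo spread: {samples.std():.2f}")
```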
Binning Effects
• One usually "BINS" data in intervals in the dependent variable. The choice of both BIN WIDTH and BIN OFFSET may have serious effects on the analysis (a short sketch after the final slide below illustrates the bin-width effect)…
• Bin width effects may include:
  - A large variation in the PARENT DISTRIBUTION over the bin width…
  - Bins with small statistics…
  - Artifacts due to discrete structure in the measured values…
• Bin offset effects may include:
  - Mean-value or fit slewing…
  - Artifacts due to discrete structure in the measured values…

Falsifiability
• To be a valid scientific hypothesis, it MUST be FALSIFIABLE.
  - Astrology is a good example of a theory that is not falsifiable, because its proponents only look at confirming observations.
  - Likewise, the "Marxist Theory of History" is not falsifiable for a similar reason: proponents tend to subsume ALL results within the theory.
• That is, it must make clear, testable predictions that, if shown not to occur, cause REJECTION of the hypothesis.
• Good scientific theories generally prohibit things!

Occam's Razor
• This often-misunderstood philosophical principle is critical to scientific reasoning!
• Originally stated as "…Assumptions introduced to explain a thing must not be multiplied beyond necessity…"
• The implication is that if two theories are INDISTINGUISHABLE in EFFECT, then there is NO distinction, and one can proceed to assume the simpler is true!

After Karl Popper…
• There are no "Laws" in science, only falsifiable CONJECTURES.
• Science is empirical, which means that an existing law (conjecture) can be falsified without rejecting any or all prior results.
• There is no absolute "demarcation" in the life of a hypothesis that elevates it to the exalted status of a LAW… That tends to happen when it is the only hypothesis left standing at a particular time…
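As noted under Binning Effects above, here is a minimal sketch of a bin-width artifact (my own illustration, assuming NumPy; the quantization step and the bin widths are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical measured values with discrete structure: readings quantized
# in steps of 0.25 (e.g. a digital gauge), plus Gaussian smearing.
data = np.round(rng.normal(5.0, 1.0, 5000) * 4) / 4

for width in (0.1, 0.25, 1.0):
    edges = np.arange(0.0, 10.0 + width, width)
    counts, _ = np.histogram(data, bins=edges)
    # Many empty bins and wildly uneven neighbours signal a binning artifact,
    # not structure in the parent distribution.
    print(f"bin width {width:4.2f}: "
          f"{np.sum(counts == 0)} empty bins, max count {counts.max()}")
```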


Contents

  • Experimental Error in Physics, A Few Brief Remarks…

  • Outline of This Talk…

  • What you should take away

  • Almighty Chance…

  • The Experimentalist’s Goal

  • Tools of the Trade

  • Blunders

  • Systematic Error

  • Quantitative v. Categorical Statistics

  • Getting the Right Parent Distribution

  • Deviation, Variance and Standard Deviation

  • Categorical Distributions

  • The Binomial Distribution

  • The Poisson Distribution

  • The Pervasive Gaussian: The NORMAL Distribution

  • The Central Limit Theorem

  • Other Distributions to Know

  • Maximum Likelihood

  • Drawing Conclusions

  • Comparing Alternatives
