Linear discrete inverse problems (parameter estimation)

Least squares and all that

Least squares problems

Least squares is the basis of many parameter estimation and data fitting procedures. A concise tutorial can be found in Chapter 15 of Numerical Recipes (Press et al., 1992, Cambridge Univ. Press), available for free online at http://www.nrbook.com. A good explanation of the essentials is given in Aster et al. (2005).

Linear discrete inverse problems

Given a linear relation d = Gm between data d and model parameters m, can parameters such as a gradient a and an intercept b be resolved? That depends on the balance between data and unknowns:

    Under-determined: fewer independent data than model parameters
    Even-determined: exactly as many data as model parameters
    Over-determined: more independent data than model parameters

Over-determined: Linear discrete inverse problem

To find the best fit model we can minimize the prediction error of the solution. But the data contain errors. Let us assume these are independent and normally distributed; we then weight each residual inversely by the standard deviation of the corresponding (known) error distribution, and obtain a least squares solution by minimizing the weighted prediction error of the solution. That is, we seek the model vector m which minimizes

    E(m) = (d - Gm)^T C_D^{-1} (d - Gm) = \sum_i [ (d_i - (Gm)_i) / \sigma_i ]^2

with C_D = diag(\sigma_1^2, ..., \sigma_N^2) the data covariance matrix. (Compare with maximum likelihood: for Gaussian errors, minimizing E is equivalent to maximizing the likelihood of the data.) Note that E is a quadratic function of the model vector.

Solution: differentiate with respect to m and solve for the model vector which gives a zero gradient in E. This gives the normal equations

    (G^T C_D^{-1} G) m = G^T C_D^{-1} d

and hence the least squares solution

    m_LS = (G^T C_D^{-1} G)^{-1} G^T C_D^{-1} d .

How does the least squares solution compare to the standard equations of linear regression? Given N data y_i with independent normally distributed errors of standard deviations \sigma_i, what are the expressions for the model parameters m = [a, b]^T ?

Linear discrete inverse problem: Least squares

What happens in the under- and even-determined cases? For the regression problem:

    Under-determined, N = 1: the matrix G^T G has a zero determinant (a zero eigenvalue), so an infinite number of solutions exist. What is m?
    Even-determined, N = 2: what is the prediction error? Here r = d - Gm = 0, so the prediction error is zero!

Example: Over-determined, Linear discrete inverse problem

The ballistics example: given the data and the noise, calculate G and solve for the model. Is the data fit good enough? And how do errors in the data propagate into the solution?

The two questions in parameter estimation

We have our fitted model parameters, but we are far from finished! We need to:

    Assess the quality of the data fit. Goodness of fit: does the model fit the data to within the statistical uncertainty of the noise?
    Estimate how errors in the data propagate into the model. What are the errors on the model parameters?

Goodness of fit

Once we have our least squares solution m_LS, how do we know whether the fit is good enough, given the errors in the data? Use the prediction error at the least squares solution! If the data errors are Gaussian, this is a chi-square statistic

    \chi^2 = \sum_i [ (d_i - (G m_LS)_i) / \sigma_i ]^2

with N - M degrees of freedom.

Solution error: Model Covariance

The model covariance for a least squares problem depends on the data errors, not on the data itself:

    C_M = (G^T C_D^{-1} G)^{-1} .

G is controlled by the design of the experiment, and m_LS is the least squares solution. The data error distribution therefore maps into a model error distribution.
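
To make this concrete, here is a minimal MATLAB sketch of a weighted least squares solve for a synthetic over-determined problem. The quadratic ballistics-style parameterization y(t) = m_1 + m_2 t - (1/2) m_3 t^2 and all numerical values are illustrative assumptions, not taken from these notes:

    % Synthetic over-determined problem (assumed ballistics-style design)
    t     = (1:10)';                           % observation times
    mtrue = [10; 100; 9.8];                    % illustrative true model
    sigma = 8*ones(size(t));                   % known data standard deviations
    G     = [ones(size(t)), t, -0.5*t.^2];     % design matrix
    d     = G*mtrue + sigma.*randn(size(t));   % data with Gaussian noise

    % Weight each row by 1/sigma_i and solve the normal equations
    W   = diag(1./sigma);
    Gw  = W*G;
    dw  = W*d;
    mLS = (Gw'*Gw) \ (Gw'*dw);                 % (G'C_D^-1 G)^-1 G'C_D^-1 d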
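
Continuing the same sketch, the chi-square statistic and the model covariance follow directly; chi2cdf belongs to the Statistics Toolbox, so the p-value line is left as a comment in case that toolbox is unavailable:

    % Goodness of fit: weighted residuals give a chi-square statistic
    [N, M] = size(G);
    r      = (d - G*mLS)./sigma;
    chi2   = sum(r.^2);                        % compare with chi^2, nu = N - M
    % p = 1 - chi2cdf(chi2, N - M);            % p-value (Statistics Toolbox)

    % Model covariance and 1-D standard errors
    CM      = inv(Gw'*Gw);                     % C_M = (G' C_D^-1 G)^-1
    sigma_m = sqrt(diag(CM));                  % standard error of each parameter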

Solution error: Model Covariance

For the special case of independent data errors, C_D is diagonal (e.g. C_D = \sigma^2 I). Note that independent data errors still produce correlated model errors in general: for the linear regression problem the off-diagonal elements of C_M are non-zero.

Confidence intervals by projection (1-D)

The model covariance is a symmetric M x M matrix. In the multi-dimensional model space the value of

    \Delta^2 = (m - m_LS)^T C_M^{-1} (m - m_LS)

follows a \chi^2 distribution with M degrees of freedom. Projecting onto the m_i axis, the 1-D confidence interval becomes

    m_i = m_LS,i +/- \sqrt{\Delta^2 C_M,ii}

where \Delta^2 now follows a \chi^2_1 distribution. For example, for a 90% interval on m_1, the 90% point of \chi^2_1 is 2.71, giving m_1 +/- 1.64 \sqrt{C_M,11}. Note this is the 90% confidence interval on m_1 alone; the joint (m_1, m_2) 90% confidence ellipse is wider than this.

Example: Model Covariance and confidence intervals

For the ballistics problem, the 95% confidence interval for parameter i is

    m_i +/- 1.96 \sqrt{C_M,ii} .

Confidence intervals by projection

The M-dimensional confidence ellipsoid can be projected onto any subset (or combination) of the parameters to obtain the corresponding confidence ellipsoid:

    Full M-dimensional ellipsoid: \Delta^2 = (m - m_LS)^T C_M^{-1} (m - m_LS)
    Projected \nu-dimensional ellipsoid: \Delta^2 = (m' - m'_LS)^T C'^{-1} (m' - m'_LS)

where m' = P m is the projected model vector, C' = P C_M P^T is the projected covariance matrix, and \Delta^2 is set to the chosen percentage point of the \chi^2_\nu distribution. To find the 90% confidence ellipse for (x, y) from a 3-D (x, y, z) ellipsoid, project out z and use the 90% point of \chi^2_2. Can you see that this procedure gives the same formula for the 1-D case obtained previously?

Recap: Goodness of fit and model covariance

Once a best fit solution has been obtained, we test goodness of fit with a chi-square test (assuming Gaussian statistics). If the model passes the goodness of fit test we may proceed to evaluating model covariance (if not, then your data errors are probably too small). Evaluate the model covariance matrix, plot the confidence ellipsoid or its projections onto chosen subsets of parameters, and calculate confidence intervals using the projected equation, where \Delta^2 follows a \chi^2_1 distribution.

Robust data fitting with the L1 norm

Least squares solutions are not robust to outliers. Instead, minimize the 1-norm misfit

    \sum_i | d_i - (Gm)_i | / \sigma_i .

We can calculate an L1 solution with the IRLS (iteratively reweighted least squares) algorithm, repeatedly solving

    (G^T R G) m = G^T R d

where R is a diagonal weighting matrix that depends on the model, with elements R_ii = 1 / |d_i - (Gm)_i|. See section 2.4 of Aster et al. (2005). A code sketch follows the next section.

Monte Carlo error propagation

It is possible to define an approximate goodness-of-fit statistic for the L1 solution and hence test the fit. However, there is no analytical expression for the error propagation, but Monte Carlo error propagation can be used:

    Calculate the data prediction from the solution.
    Add a random realization of the noise to the predicted data and repeat the IRLS algorithm.
    Repeat Q times and generate difference vectors between the perturbed solutions and the original one.

For the ballistics problem, the resulting error estimates can be compared to those of the LS solution without the outlier.
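
A minimal IRLS sketch in MATLAB, following the reweighting described above and saved as its own file irls.m; the damping floor on small residuals and the stopping rule are implementation assumptions (common safeguards) rather than part of the algorithm as stated:

    function m = irls(G, d)
    % IRLS: 1-norm solution by iteratively reweighted least squares
    m = (G'*G) \ (G'*d);                   % start from the L2 solution
    for k = 1:50
        r = d - G*m;                       % current residuals
        R = diag(1./max(abs(r), 1e-8));    % weights 1/|r_i|, damped near zero
        mnew = (G'*R*G) \ (G'*R*d);
        if norm(mnew - m) < 1e-6*(1 + norm(m))
            m = mnew;
            return
        end
        m = mnew;
    end
    end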
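
And a sketch of the Monte Carlo recipe, reusing G, d and sigma from the earlier least squares sketch together with the irls helper above; Q and the Gaussian noise model are assumptions for illustration:

    mL1  = irls(G, d);                     % robust solution
    dpre = G*mL1;                          % data predicted by the solution
    Q    = 1000;                           % number of noise realizations
    dm   = zeros(numel(mL1), Q);
    for q = 1:Q
        dq       = dpre + sigma.*randn(size(dpre));  % noise realization
        dm(:, q) = irls(G, dq) - mL1;      % difference vector
    end
    CM_L1 = (dm*dm')/Q;                    % empirical model covariance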

What if we do not know the errors on the data?

Both chi-square goodness of fit tests and model covariance calculations require knowledge of the variance of the data. What can we do if we do not know \sigma? Consider the case of C_D = \sigma^2 I (independent data errors), with \sigma calculated from the least squares solution itself:

    s^2 = \frac{1}{N - M} \sum_i ( d_i - (G m_LS)_i )^2 .

So we can still estimate model errors using the calculated data errors, but we can no longer claim anything about goodness of fit.

Example: Over-determined, Linear discrete inverse problem (MATLAB exercise)

Generate data with Gaussian noise for a linear regression problem and invert for the best fitting gradient and intercept:

    Generate the x_i points randomly between 0 and 10.
    Calculate the data y_i from a chosen gradient and intercept.
    Add N[0, \sigma] noise to the data y_i.
    Calculate the G matrix.
    Use MATLAB matrix routines to solve the normal equations.
    Plot the data, the data errors, and the least squares solution.

Model Resolution matrix

If we obtain a solution to an inverse problem we can ask what its relationship is to the true solution:

    m_est = G^{-g} d .

But we know d = G m_true, and hence

    m_est = G^{-g} G m_true = R m_true .

The matrix R measures how `good an inverse' G^{-g} is: it shows how the elements of m_est are built from linear combinations of the true model m_true, and hence measures the amount of blurring produced by the inverse operator. For the least squares solution we have

    G^{-g} = (G^T C_D^{-1} G)^{-1} G^T C_D^{-1}   =>   R = I .

Example: Model resolution in a tomographic experiment

With m_est = R m_true, suppose the calculated model resolution matrix looked like this:

    R = [  0.75  -0.25   0.25   0.25
          -0.25   0.75   0.25   0.25
           0.25   0.25   0.75  -0.25
           0.25   0.25  -0.25   0.75 ]

A spike test (a true model with a single non-zero element) then shows how the recovered model is smeared by R. What units do the elements of R have?

Data Resolution matrix

If we obtain a solution to an inverse problem we can ask how it compares to the data:

    d_pre = G m_est .

But we know m_est = G^{-g} d_obs, and hence

    d_pre = G G^{-g} d_obs = D d_obs .

The matrix D is analogous to the model resolution matrix R, but measures how well the model produced by G^{-g} can independently reproduce the data. If D = I then the data are fit exactly and the prediction error d - Gm is zero. (A small numerical sketch of R and D closes these notes.)

Recap: Linear discrete inverse problems

    The least squares solution minimizes the prediction error.
    Goodness of fit criteria tell us whether the least squares model adequately fits the data, given the level of noise: chi-square with N - M degrees of freedom.
    The covariance matrix describes how noise propagates from the data to the estimated model: chi-square with M degrees of freedom gives confidence intervals.
    The resolution matrix describes how the estimated model relates to the true model.

Background: Why do we always assume errors are Gaussian?

The Central Limit Theorem: the sum of many independent random deviates tends towards a Gaussian, whatever the distribution of the individual deviates. [Figure: probability density functions of the random variables X_1 through X_5, illustrating the Central Limit Theorem.]

Mathematical Background: Probability distributions

Percentage points of the \chi^2_\nu distribution:

    \nu    \chi^2(5%)   \chi^2(50%)   \chi^2(95%)
      5        1.15          4.35         11.07
     10        3.94          9.34         18.31
     20       10.85         19.34         31.41
     50       34.76         49.33         67.50
    100       77.93         99.33        124.34

The chi-square test provides a means of testing the assumptions that went into producing the best fit model. Exercise: If I fit ... data points with a straight line and get ... what would you ...
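
These percentage points can be reproduced with chi2inv, assuming the Statistics Toolbox is available:

    % Tabulate chi-square percentage points (Statistics Toolbox required)
    for nu = [5 10 20 50 100]
        fprintf('%5d %9.2f %9.2f %9.2f\n', nu, chi2inv([0.05 0.5 0.95], nu));
    end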
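
Finally, as promised in the data resolution section, a sketch computing R and D for the least squares inverse, reusing G and sigma from the earlier sketches; for a full-rank over-determined G it confirms R = I while D differs from the identity:

    % Model and data resolution matrices for the least squares inverse
    CDinv = diag(1./sigma.^2);             % C_D^{-1} for independent errors
    Ginv  = (G'*CDinv*G) \ (G'*CDinv);     % generalized inverse G^{-g}
    R     = Ginv*G;                        % model resolution matrix (= I here)
    D     = G*Ginv;                        % data resolution ("hat") matrix
    disp(norm(R - eye(size(R,1))))         % ~ 0 up to round-off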

Contents

    Linear discrete inverse problems (parameter estimation)

    Linear discrete inverse problems

    Over-determined: Linear discrete inverse problem

    Linear discrete inverse problem: Least squares

    Example: Over-determined, Linear discrete inverse problem

    The two questions in parameter estimation

    Why do we always assume errors are Gaussian?

    Mathematical Background: Probability distributions

    Example: Goodness of fit

    Solution error: Model Covariance
