Modeling Phosphorus in the Environment, Chapter 6

6 Uncertainty Estimation in Phosphorus Models

Keith Beven, Lancaster University, Lancaster, United Kingdom
Trevor Page, Lancaster University, Lancaster, United Kingdom
Malcolm McGechan, Scottish Agricultural College, Bush Estate, Penicuik, United Kingdom

CONTENTS

6.1 Sources of Uncertainty in Modeling P Transport to Stream Channels 132
6.2 Sources of Uncertainty 133
6.3 Uncertainty Is Not Only Statistics 134
6.4 Uncertainty Estimation: Formal Bayes Methods 135
6.5 Uncertainty Estimation Based on the Equifinality Concept and Formal Rejectionist Methods 137
6.6 Uncertainty as Part of a Learning Process 140
6.7 An Example Application 142
  6.7.1 The MACRO Model 142
  6.7.2 Study Site and Data 143
    6.7.2.1 Drainage Discharge and Phosphorus Concentrations 143
    6.7.2.2 Slurry Applications 144
  6.7.3 MACRO Implementation within a Model Rejection Framework 144
  6.7.4 Results and Discussion 146
    6.7.4.1 Using Initial Rejection Criteria 146
    6.7.4.2 Using Relaxed Rejection Criteria 148
    6.7.4.3 Simulations for the Period from 1994 to 1995 148
    6.7.4.4 Simulations and Parameterizations for the Period 1995 to 1996 150
6.8 Learning from Rejection: What If All the Models Tried Are Nonbehavioral? 153
6.9 What Are the Implications for P Models? 155
Acknowledgments 157
References 157

© 2007 by Taylor & Francis Group, LLC

6.1 SOURCES OF UNCERTAINTY IN MODELING P TRANSPORT TO STREAM CHANNELS

The starting point for this contribution is the extensive review of Beck (1987). Summarizing his arguments at the end of the review, he posed the following questions:

- Are the basic problems of model identification ones primarily of inadequate method or of inadequate forms of data?
- What opportunities are there for the development of improved, novel methods of model structure identification, particularly regarding exposing the failure of inadequate, constituent model hypotheses?
- How can an archive of prior hypotheses be appropriately engaged in inferring the form of an improved model structure from diagnosis of the failure of an inadequate model structure? Moreover, in what form should the knowledge of the archive be most usefully represented?
- What does a lack of identifiability imply for the distortion of a model structure, and what are the consequences of a distorted model structure in terms of generating predictions?
- Given uncertainty, how can one speculate about the prediction of a "radically different" future?
- What, in the end, does all this mean for decision making under uncertainty?

These questions have been reinforced by the more recent analyses of environmental modeling by Beven (2002a, 2002b, 2004a, 2005, 2006a) and demand an answer to why, nearly two decades later, there are still many model structures and applications that do not consider model identification problems and uncertainties explicitly. The uncertainties exist. They are often ignored.

It seems as if the saving grace of the environmental modeler has been model calibration. If a model has at least approximately the right sort of functionality, then there are generally sufficient degrees of freedom to adjust effective values of the parameters, obtain an acceptable fit to the data, and declare some sort of success in reporting results in scientific articles and reports to decision makers. This obviously does not mean that what is being reported is good science if the calibration allows compensation for errors in model structure as a representation of the processes actually controlling water quality variables, including phosphorus (P) concentrations in different forms. Perhaps we are now reaching a stage where it might be possible to take account of some of the sources of uncertainty in predicting water quality more explicitly, P being a particularly interesting and practically relevant example.
It is important to recognize from the outset, however, that this will be difficult: (1) to evaluate model structures as working hypotheses about the functioning of catchment systems independently of errors in the input data used to drive the model and the calibration of effective parameter values; and (2) to estimate effective values of physical and geochemical parameters a priori by measurement. The struggle to improve water-quality modeling remains as much a struggle against the limitations of current measurement techniques as against the limitations of current model structures.

6.2 SOURCES OF UNCERTAINTY

The sources of uncertainty in the modeling process are manifold, and, generally speaking, good methodologies have not been developed for assessing the nature and magnitude of uncertainties from different sources. They are thus frequently neglected. For example, some uncertainty exists in the input and boundary condition data used to drive a model. Such uncertainties include measurement errors in assessing the inputs at the measurement scale, together with interpolation errors in space and time to provide the values required at the lumped or distributed element scale of the model. The interpolation error may be made worse by a lack of resolution in the measurements in space and time and by nonstationarity in the processes controlling the inputs.

Rainfall is a good example. There are issues with all the measurement techniques available to estimate rainfall, whether at a point using gauges or over an area using radar or microwave techniques. Point ground-level measurements may be sparse in space, whereas the spatial and temporal variability of rainfall intensities may vary markedly between events. Rainfall may show fractal characteristics in space and time, but analyses suggest that there may be nonstationarity in the fractal scaling between events.
Thus, interpolation of the measurements to provide the inputs — and an estimate of their uncertainty — at the space and time scales of the model may be difficult. What is clear is that a point measurement of rainfall is, under many circumstances, not a good estimate of the rainfall inputs required by the model. The two variables may, because of time and space variability, actually be related but different variables — they are incommensurate. Yet rainfall data are essential to drive models that will predict the fluxes in hydrological pathways that will control the transport of P. However, the number of nonhypothetical hydrological modeling studies that have attempted to include a treatment of rainfall estimation error is very small indeed.

The problem is compounded by other uncertainties. Most particularly for event-based simulations, errors in the estimation of model initial conditions may be important. Errors may be associated with the model structures used, due to the incorrect representation of some processes or the neglect of processes (e.g., preferential flow pathways) that are important in the real system. There may be errors in estimating or calibrating the effective values of parameters in the model that may control the predictions of P mobilization and transport in different pathways. Finally, there may be errors in the observations used to evaluate the model predictions or to calibrate the model parameters.

Unfortunately, the possibility of assessing all these different sources of error is limited. In general, only the total model error can be assessed, by comparing an observation, which is not error-free, with a model-predicted variable, which is subject to structural and input errors. Unless some very strong — and usually difficult to justify — assumptions are made about the nature of the sources of error, disaggregating the total model error into its component parts will be impossible. It is an ill-posed problem.
The result will be an inevitable ambiguity in model calibrations and error assessment — an ambiguity that also brings with it difficulty in transferring information gained in one application to applications at other sites or under different hydrological conditions.

6.3 UNCERTAINTY IS NOT ONLY STATISTICS

The aim of science, however, is a single true description of reality. The ambiguity arising from uncertainties from these different sources means that this aim is difficult to achieve in applications to places that are all unique in their characteristics and uncertainties. It follows that many descriptions may be compatible with current understanding and available observations; this is the equifinality thesis (Beven 1993, 2006a). One way of viewing these multiple descriptions is as different working hypotheses of how a system functions. The concept of the single description may remain a philosophical axiom or theoretical aim but will generally be impossible to achieve in practice in applications to real systems (Beven 2002a, 2002b).

This view is fundamentally different from a statistical approach to model identification. In both frequentist and Bayesian approaches to statistics, the uncertainty associated with a model prediction is often assumed to be adequately treated as a single lumped additive variable in the form

O(X, t) = M(Θ, ε_Θ, I, ε_I, X, t) + ε(X, t)   (6.1)

where O(X, t) is a measured output variable, such as discharge, at point X and time t; M(Θ, ε_Θ, I, ε_I, X, t) is the prediction of that variable from the model with parameter set Θ with errors ε_Θ, driven by the input vector I with errors ε_I; and ε(X, t) is the model error at that point in space and time. Transformations of the variables of Equation 6.1 can also be used where appropriate to constrain the modeling problem to this form.
A logarithmic transformation, for example, can be used for an error that is multiplicative — that is, increasing with the magnitude of the model prediction — as a simple way of allowing for heteroscedasticity (nonconstant variance) in the errors. Other transformations can also be used to try to stabilize the statistical characteristics of the error series (Box and Cox 1964).

Normal statistical inference then aims to identify the parameter set Θ that will be in some sense optimal, normally by minimizing the residual variance of a model of the model error, which might include its own parameters for bias and autocorrelation terms, with the aim of making the residual error independent and identically distributed — even though there may be good physical reasons why errors with constant statistical characteristics should not be expected in hydrological and water quality modeling (see, e.g., Freer et al. 1996).

The additive form of Equation 6.1 allows the full range of statistical estimation techniques, including Bayesian updating, to be used in model calibration. The approach has been widely used in hydrological and water resources applications, including flood forecasting involving data assimilation (e.g., Krzysztofowicz 2002; Young 2001, 2002 and references therein), groundwater modeling, including Bayesian averaging of model structures (e.g., Ye et al. 2004), and rainfall-runoff modeling (e.g., Kavetski et al. 2002; Vrugt et al. 2002, 2003).

In principle, the additive error assumption that underlies this form of uncertainty is particularly valuable for two reasons: (1) it allows checking of whether the actual errors conform to the assumptions made about the structural model of the errors; and (2) if this is so, then a true probability of predicting an observation, conditional on the model, can be predicted as the likelihood L(O(X, t) | M(θ, I, X, t)).
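As a sketch of the variance-stabilizing idea just described (the series and numbers here are hypothetical, not data from the study), a log transform turns a multiplicative error into an approximately additive one before the residuals are analyzed:

```python
import numpy as np

def log_residuals(observed, simulated, eps=1e-9):
    # Residuals of the log-transformed series: if errors grow with the
    # magnitude of the prediction (multiplicative errors), the transformed
    # residuals have roughly constant variance.
    return np.log(observed + eps) - np.log(simulated + eps)

# hypothetical discharge series (mm per day)
obs = np.array([1.2, 3.4, 10.0, 25.0])
sim = np.array([1.0, 3.0, 12.0, 20.0])
res = log_residuals(obs, sim)  # log-ratio of observed to simulated
```

The Box-Cox family (Box and Cox 1964) generalizes this, with the log transform as its λ = 0 limit.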
These advantages, however, may be difficult to justify in many real applications where poorly known input errors are processed through a nonlinear model subject to structural error and equifinality (see Hall 2003; Klir 1994 for reviews of more generalized mathematizations of uncertainty, including discussion of fuzzy set methods and the Dempster-Shafer theory of evidence). One implication of the limitations of the additive error model is that it may actually be quite difficult to estimate the true probability of predicting an observation, given one or more models, except in ideal cases, because the model structural error has a complex and nonlinear effect, structured in both time and space, on the total model error ε(X, t).

This implies that a philosophically different approach to the statistical one might be worth investigating. In the statistical approach, the error model is generally evaluated as conditioned on finding the best maximum likelihood model. In evaluating models as multiple working hypotheses, it is often more interesting to estimate the likelihood of a model conditioned on some vector of observations, such as L(M(θ, ε_θ, I, ε_I, X, t) | O(X, t)), and, in particular, to reject as unacceptable hypotheses those models that should have a zero likelihood. This is the basis for the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, first proposed by Beven and Binley (1992).

It can be argued that the formal statistical approaches are a special case of the GLUE methodology, within which the formal assumptions of a defined error model can be accepted such that the formal likelihood function can be used to weight model predictions. It can also be argued that the GLUE methodology is a special case of formal statistical inference, in which informal likelihood measures replace a formal likelihood function with its rigorous assumptions about the nature of the error model.
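A minimal sketch of the GLUE sampling loop may help fix ideas. The toy one-parameter model, the uniform sampling range, the Nash-Sutcliffe efficiency used as the informal likelihood measure, and the 0.5 behavioral threshold are all illustrative assumptions, not prescriptions from the methodology itself:

```python
import numpy as np

rng = np.random.default_rng(42)

def model(k, inputs):
    # toy linear model standing in for a full P-transport model
    return inputs * k

def nse(obs, sim):
    # Nash-Sutcliffe efficiency, a common informal likelihood in GLUE
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

inputs = rng.uniform(0.5, 2.0, size=50)
obs = model(0.7, inputs) + rng.normal(0, 0.05, size=50)  # synthetic "observations"

# Monte Carlo sampling of the parameter from a uniform prior range
ks = rng.uniform(0.0, 2.0, size=2000)
likelihoods = np.array([max(nse(obs, model(k, inputs)), 0.0) for k in ks])

behavioural = likelihoods > 0.5          # rejection threshold: zero likelihood below
weights = likelihoods[behavioural]
weights /= weights.sum()                 # rescale weights to sum to one

# likelihood-weighted 5th-95th percentile prediction bounds for one new input
preds = model(ks[behavioural], 1.0)
order = np.argsort(preds)
cdf = np.cumsum(weights[order])
lower = preds[order][np.searchsorted(cdf, 0.05)]
upper = preds[order][np.searchsorted(cdf, 0.95)]
```

The prediction bounds are conditional on the sampled models and the chosen likelihood measure, exactly as the text emphasizes.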
GLUE can indeed make use of formal likelihood measures if the associated assumptions can be justified. It is perhaps better, however, to consider the two approaches as based on different philosophical frameworks for the uncertainty estimation problem.

6.4 UNCERTAINTY ESTIMATION: FORMAL BAYES METHODS

The traditional approach to model calibration in hydrological modeling has been to simplify Equation 6.1 to the form

O(X, t) = M(θ, I, X, t) + ε(X, t)   (6.2)

with the aim of minimizing the total error in some way. This assumes that the effect of all sources of error can be subsumed into the total error series, as if the model were correct and the input and boundary condition data and observations were known precisely.

Furthermore, if the total error ε(X, t) can be assumed to have a relatively simple form — or can be suitably transformed to a simple form — then a formal statistical likelihood function can be defined, dependent on the assumed error structure. Thus, for an evaluation made for observations at a single site, for total model errors that can be assumed to have zero mean, constant variance, independence in time, and a Gaussian distribution, the likelihood function takes the form

L(ε | M(θ, I, X, t)) = (2πσ²)^(−T/2) exp[ −(1/(2σ²)) Σ_{t=1..T} ε_t² ]   (6.3)

where ε_t = O(X, t) − M(θ, I, X, t) at time t, T is the total number of time steps, and σ² is the residual error variance. For total model errors that can be assumed to have a constant bias, constant variance, autocorrelation in time, and a Gaussian distribution, the likelihood function takes the form

L(ε | M(θ, I, X, t)) = (2πσ²)^(−T/2) (1 − α²)^(1/2) exp[ −(1/(2σ²)) { (1 − α²)(ε_1 − µ)² + Σ_{t=2..T} (ε_t − µ − α(ε_{t−1} − µ))² } ]   (6.4)

where µ is the mean residual error (bias) and α is the lag-1 correlation coefficient of the total model residuals in time. More complex error structure assumptions will lead to more complex likelihood functions, with more parameters to be estimated.
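In code, the log forms of Equations 6.3 and 6.4 are straightforward to evaluate (written as log-likelihoods for numerical stability; the residual series below is hypothetical):

```python
import numpy as np

def gaussian_loglik(errors):
    """Log of Equation 6.3: zero-mean, constant-variance, independent Gaussian
    residuals, with sigma^2 taken as the sample mean squared error."""
    T = len(errors)
    sigma2 = np.mean(errors ** 2)
    return -0.5 * T * np.log(2 * np.pi * sigma2) - np.sum(errors ** 2) / (2 * sigma2)

def ar1_loglik(errors, mu, alpha, sigma2):
    """Log of Equation 6.4: constant bias mu, lag-1 autocorrelation alpha."""
    e = errors - mu
    T = len(errors)
    ss = (1 - alpha ** 2) * e[0] ** 2 + np.sum((e[1:] - alpha * e[:-1]) ** 2)
    return (-0.5 * T * np.log(2 * np.pi * sigma2)
            + 0.5 * np.log(1 - alpha ** 2)
            - ss / (2 * sigma2))

res = np.array([0.12, -0.05, 0.08, -0.20, 0.02])  # hypothetical residual series
ll_iid = gaussian_loglik(res)
ll_ar1 = ar1_loglik(res, mu=res.mean(), alpha=0.3, sigma2=res.var())
```

With µ = 0 and α = 0, Equation 6.4 collapses to Equation 6.3, which is a useful sanity check on an implementation.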
A significant advantage of this formal statistical approach is that when the assumptions are satisfied, the theory allows the estimation of the probability with which an observation will be predicted, conditional on the model and parameter values, and of the probability density functions of the parameter estimates, which under these assumptions will be multivariate normal. As more data are made available, the use of these likelihood functions will also lead to reduced uncertainty in the estimated parameter values, even if the total error variance is not reduced. O'Hagan (2004) suggested that this is the only satisfactory way of addressing the issue of model uncertainty; without proper probability statements about modeling, uncertainty will have no meaning.

There is an issue, however, about when probability estimates based on additive, or transformed, error structures are meaningful. From a purely empirical point of view, a test of the actual model residuals ε(X, t) for validity relative to the assumptions made in formulating the likelihood function might be considered sufficient to justify probability statements of uncertainty. From a theoretical point of view, however, there has to be some concern about treating the full sources of error in Equation 6.2 in this type of aggregated form. Model structural errors will, in the general case, be nonlinear, nonstationary, and nonadditive. Input and boundary condition errors, as well as any parameter errors, will also be processed through the model structure in nonlinear, nonstationary, and nonadditive ways.

Kennedy and O'Hagan (2001) attempted to address this problem by showing that all sources of error might be represented within a hierarchical Bayesian framework.
In particular, where any model structural error is simple in form, it might be possible to estimate this as what they called a "model inadequacy function" or, more recently, a "model discrepancy function" (O'Hagan 2004). In principle, this could take any nonlinear form, although the most complex in the cases they considered was a constant bias, which can, in any case, be included as a parameter in Equation 6.4. The aim is to extract as much structural information from the total error series as possible, ideally leaving a Gaussian, independent and identically distributed residual error term. The model discrepancy function can then also be used in prediction, under the assumption that the nature of the structural errors in calibration will be similar in prediction.

It should be noted, however, that the model discrepancy function is not a direct representation of model structural error. It is a compensatory term for all the unknown sources of error in Equation 6.1, conditional on any particular realization of the model, including specified parameter values and input data. These sources of error could, in principle, be considered explicitly in the Bayesian hierarchy if good information were available as to their nature. This will rarely be the case in hydrological modeling applications, where, for example, rainfall inputs to the system may be poorly known for all events in some catchments and where even the most fundamental equation — the water balance — cannot be closed by measurement (Beven 2001, 2002b).
Thus, disaggregation of the different error components will be necessarily poorly posed, and ignoring potential sources of error, including model structural error, may result in an overestimation of the information content of additional data and may lead to an unjustified overconfidence in estimated parameter values (see discussion in Beven and Young 2003). In representing the modeling process by the simplified form of Equation 6.2, the error model is required to compensate for all sources of deficiency.

6.5 UNCERTAINTY ESTIMATION BASED ON THE EQUIFINALITY CONCEPT AND FORMAL REJECTIONIST METHODS

The equifinality thesis is the central concept of the GLUE methodology (Beven and Binley 1992; Beven and Freer 2001). The GLUE methodology does not purport to estimate the probability of predicting an observation given the model but rather attempts to evaluate the predicted distribution of a variable that is always conditional on the model or models considered, the ranges of parameter values considered, the evaluation measures used, and the input and output data available to the application for model evaluation. The prediction distributions do not consider the residual error associated with a particular model run explicitly. There is instead an assumption that the error series associated with a model run in calibration will have similar characteristics in prediction — note the similar assumption about model structural error in the formal likelihood approach just described. Thus, in weighting the predictions of multiple models to form the predictive distribution for a variable, there is an implicit weighting of the error series associated with those models, without the need to consider different sources of error explicitly; explicit error models can be handled in this framework by treating them as additional model components (see, e.g., Romanowicz et al. 1998).
© 2007 by Taylor & Francis Group, LLC 138 Modeling Phosphorus in the Environment One of the most interesting features of the GLUE methodology is the comple- mentarity of model equifinality and model rejection. Equifinality accepts that mul- tiple models may be useful in prediction and that any attempt to identify an optimal model might be illusory. But if multiple models are to be considered acceptable or behavioral, it is evident that models can also be rejected (given a likelihood of zero) where they can be shown to be nonbehavioral (given unacceptable simulations of the available observables). Thus, there is always a possibility that all the models tried will be rejected — unlike the statistical approach where it is possible to compensate for model deficiencies by some error structure. However, at this point the limitations of implicit handling of error series in the GLUE methodology become apparent since it is possible that some hypothetical perfect model could be rejected if driven by poor input and boundary condition data or if compared with poor observation data. Thus, there is a need for a more explicit consideration of sources of error in this framework while retaining the possibility of model rejection. A potential methodology has been proposed by Beven (2005, 2006a). Equation 6.1 can be rewritten to reflect more sources of error as O(X, t) + ε O (X, t) + ε C (∆x, ∆t, X, t) = M( θ , ε θ , I, ε I , X, t) + ε M ( θ , ε θ , I, ε I , X, t) + ε r (6.5) The error terms on the left-hand side of Equation 6.5 represent the measurement error, ε O (X, t), and the commensurability error between observed and predicted vari- ables, ε C (∆x, ∆t, X, t). The model term, M( θ , ε θ , I, ε I , X, t), will reflect error in input and boundary conditions, model parameters, and model structure. 
The error term ε_M(θ, ε_θ, I, ε_I, X, t) can now be interpreted as a compensatory error term for model deficiencies, analogous to the discrepancy function in the Bayesian statistical approach of O'Hagan (2004), but one that must also reflect error in input and boundary conditions, model parameters, and model structure. Finally, there may be a random error term, ε_r. Equation 6.5 has been written in this form both to highlight the importance of observation measurement errors and the commensurability error issue and to reflect the real difficulty of separating input and boundary condition errors, parameter errors, and model structural error in nonlinear cases. There is no general theory available for doing this in nonlinear dynamic cases.

One simplification can be made in Equation 6.5: if applied on a model-by-model basis, model parameter error has no real meaning. It is the model structure and set of effective parameter values together that process the nonerror-free input data and determine total model error in space and time. Thus, Equation 6.5 could be rewritten, for any model structure, as

O(X, t) + ε_O(X, t) + ε_C(∆x, ∆t, X, t) = M(θ, I, ε_I, X, t) + ε_M(θ, I, ε_I, X, t) + ε_r   (6.6)

where ε_M(θ, I, ε_I, X, t) is a model-specific error term. The question that then arises within this framework is whether ε_M(θ, I, ε_I, X, t) is acceptable in relation to the terms ε_O(X, t) + ε_C(∆x, ∆t, X, t). This is equivalent to asking whether the following inequality holds:

O_min(X, t) < M(θ, I, ε_I, X, t) < O_max(X, t)  for all O(X, t)   (6.7)

where O_min(X, t) and O_max(X, t) are acceptable limits for the prediction of the output variables given ε_O(X, t) and ε_C(∆x, ∆t, X, t), which together might be termed an effective observation error.
The effective observation error takes account of both real measurement errors and commensurability errors between observed and predicted variables. When defined in this way, the effective observation error need have neither zero mean nor constant variance, nor need it be Gaussian or stationary in the form of its distribution in space or time, particularly where there may be physical constraints on the nature of that error. Note that the commensurability error might be expected to be model-implementation dependent, in that the difference between observed and predicted variables may depend on model time and space discretisations and measurement scales in relation to expected time and space heterogeneities of the observable quantities. However, it should really be possible to develop a methodology for making prior estimates of both measurement and commensurability errors, since they should be independent of individual model runs.

An objective evaluation of each model run using Equation 6.7 should then be possible. If a model does not provide predictions within the specified range, for any O(X, t), then it should be rejected as nonbehavioral. This rejectionist framework, based on the equifinality concept, is analogous to set-theoretic concepts previously used in environmental modeling (by, e.g., Klepper et al. 1991; Osidele et al. 2005; Rose et al. 1991; Spear et al. 1994; van Straten and Keesman 1991). It is also a generalization of the Hornberger-Spear-Young method of Generalized Sensitivity Analysis (Hornberger and Spear 1981; Young 1983), which was also based on a split of a series of Monte Carlo model runs into sets of behavioral and nonbehavioral models. It results in a set of provisionally behavioral models that satisfy all the evaluation criteria expressed in the form of Equation 6.7. The approach can also be relativist in taking account of the performance of different models within the set of behavioral models (Beven 2004b, 2005).
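Equation 6.7 reduces to a simple per-run check. In this sketch the ±20% limits are purely illustrative; in practice O_min and O_max would come from prior estimates of the measurement and commensurability errors:

```python
import numpy as np

def is_behavioural(sim, o_min, o_max):
    # Equation 6.7: retain a run only if every simulated value lies strictly
    # within the effective observation-error limits.
    return bool(np.all((sim > o_min) & (sim < o_max)))

obs = np.array([2.0, 5.0, 3.0])      # hypothetical observations
o_min, o_max = 0.8 * obs, 1.2 * obs  # illustrative acceptability limits

good = is_behavioural(obs * 1.1, o_min, o_max)                 # inside everywhere
bad = is_behavioural(np.array([2.1, 6.5, 3.0]), o_min, o_max)  # 6.5 exceeds 6.0
```

A single unacceptable time step is enough for rejection, which is what distinguishes this evaluation from aggregated goodness-of-fit measures.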
Within the behavioral range, for all O(X, t), a positive weight could be assigned to the model predictions, M(θ, I, ε_I, X, t), according to the level of past performance. The simplest possible weighting scheme that need not be symmetric around the observed value, given an observation O(X, t) and the acceptable range [O_min(X, t), O_max(X, t)], is the triangular relative weighting scheme, but other bounded weighting schemes could be used, including truncated Gaussian forms. A core range of observational ambiguity, or equal weighting, could be added if required (Beven 2006a).

This methodology gives rise to some interesting possibilities. Within this framework there is no possibility of a representation of model error being allowed to compensate for poor model performance, even for the "optimal" model, unless the acceptability limits are made artificially wide to avoid rejecting all of the models — but this might not generally be considered good practice. If no model proves to be behavioral, then this is an indication that there are conceptual, structural, or data errors, though it may still be difficult to decide which is the most important. There is perhaps then more possibility of learning from the modeling process on occasions when it proves necessary to reject all the models tried.

However, this type of evaluation requires that consideration also be given to input and boundary condition errors, since, as noted before, even the perfect model might not provide behavioral predictions if it is driven with poor input data. Thus, the combination of input and boundary data realization — within reasonable bounds — and model structure and parameter set in producing M(θ, I, ε_I, X, t) should be evaluated against the effective observation error. The result will hopefully still be a set of behavioral models, each associated with some likelihood weight.
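The triangular weighting just described can be written directly from the acceptability limits (a sketch; the specific values are illustrative):

```python
def triangular_weight(sim, obs, o_min, o_max):
    # Weight 1 at the observed value, declining linearly to 0 at the
    # acceptability limits; the two limbs need not be symmetric about obs.
    if not (o_min < sim < o_max):
        return 0.0  # nonbehavioral at this observation
    if sim <= obs:
        return (sim - o_min) / (obs - o_min)
    return (o_max - sim) / (o_max - obs)

# observation 5.0 with asymmetric limits [4.0, 7.0]
w_exact = triangular_weight(5.0, 5.0, 4.0, 7.0)  # at the observed value
w_high = triangular_weight(6.0, 5.0, 4.0, 7.0)   # halfway to the upper limit
```

Per-observation weights like these can then be combined across the evaluation period to give each behavioral model its overall likelihood weight.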
Any compensation effect between an input realization — and initial and boundary conditions — and model parameter set in achieving success in the calibration period will then be implicitly included in the set of behavioral models.

There is also the possibility that the behavioral models defined in this way do not provide predictions that span the complete range of the acceptable error around an observation. The behavioral models might, for example, provide simulations of an observed variable O(X, t) that all lie in the range O(X, t) to O_max(X, t), or even in just a small part of it. They are all still acceptable but are apparently biased. This provides real information about the performance of the model, or about other sources of error, that can be investigated and allowed for specifically at that site in prediction, rather than being lost in a statistical representation of model error.

6.6 UNCERTAINTY AS PART OF A LEARNING PROCESS

Both Bayesian and equifinality (rejectionist, set-theoretic) concepts allow the modeling process to be set up within a learning framework, using data assimilation to update the model each time new data become available. This can be for short-term forecasting, with the aim of minimizing forecast uncertainty as conditioned on the new data, or in a simulation context, with the aim of refining the model representation of the system of interest as new information is received to update the Bayes likelihood function or the weights associated with the set of behavioral models using the Bayes equation, originally proposed by Thomas Bayes and published posthumously in 1763 (see Bernardo and Smith 1994; Howson and Urbach 1993).

In formal Bayes theory, the posterior likelihood is intended to represent the probability of predicting an observation, given the true model, L(Y | θ), where Y is the observation vector and θ is the parameter vector:
L_p(O | θ) ∝ L_o(θ) L(θ | Y)   (6.8)

where L_p(O | θ) is the posterior probability of predicting observations O given a model with parameter set θ, L_o(θ) is the prior likelihood of parameter set θ, and L(θ | Y) is the likelihood given data Y. However, Bayes's equation was originally stated in the more general conditioning form for hypotheses, H, given evidence, E, as

L_p(H | E) ∝ L_o(H) L(E | H)   (6.9)

or, in the discrete form for k potential hypotheses proposed independently by Pierre-Simon Laplace in 1820, as

L_p(H_k | E) ∝ L_o(H_k) L(E | H_k) / Σ_k L(H_k | E)   (6.10)

[...] some other variable. This can lead to the identification of the dominant nonlinear modes of behavior of the system based directly on the observations rather than prior conceptual assumptions about the system response.

6.7 AN EXAMPLE APPLICATION

The following application of the methodology will illustrate some of the issues that arise in thinking about different potential sources of error in the modeling [...]
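The discrete conditioning of Equation 6.10 — updating prior weights on a set of competing model hypotheses as new evidence arrives — can be sketched as follows (the three hypotheses and their likelihood values are illustrative):

```python
import numpy as np

def bayes_update(prior, evidence_likelihood):
    # Discrete Bayes update: posterior proportional to prior times the
    # likelihood of the evidence under each hypothesis, renormalized so the
    # weights over the hypothesis set sum to one.
    posterior = prior * evidence_likelihood
    return posterior / posterior.sum()

# three competing model hypotheses, initially weighted equally
prior = np.array([1 / 3, 1 / 3, 1 / 3])
# likelihood of the new evidence under each hypothesis (illustrative)
like = np.array([0.9, 0.3, 0.1])
posterior = bayes_update(prior, like)
```

Repeating the update as each batch of evidence arrives, with the previous posterior as the new prior, is the learning loop described in the text; a hypothesis given zero likelihood at any step is permanently rejected.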
high-magnitude flux time steps This approach implicitly allowed the investigation of performance at individual time steps and the assessment of apparent errors in terms of their origins The relaxed criteria (Figure 6. 2) allowed the designation of behavioral parameterizations as given in Table 6. 2 Results of these most successful simulations are presented in the following in relation to the original... partitioned into micropore (soil matrix) and macropore domains The boundary between the two domains is described in the model by the air–entry soil–water tension in the Brooks-Corey equation The two domains function separately and are associated with their own degree of saturation, hydraulic conductivity, and flux Flow in the micropores is calculated using the Richards (1931) equation Flow in the macropores... in prediction However, again, examination of the reasons for model failure might be sufficient to justify such a relaxation This is a step requiring the modeler to act responsibly 6. 9 WHAT ARE THE IMPLICATIONS FOR P MODELS? In outlining the stages of the modeling process, Beven (2001) distinguished between the perceptual model of the processes governing a flux of interest and the conceptual model of those... on the importance of data Uncertainty estimation should not be the endpoint of a study It should be a starting point for the more important question of what data can be obtained that will allow that uncertainty to be further constrained or competing hypotheses to be distinguished as part of the learning process for the place and application of interest This is a question of science that goes beyond the. .. 
rather than relying on a statistical representation of model error to compensate for such problems In this study it does appear as if there are important errors in the inputs to the model, and in the observations with which it is being compared, as well as possible model structural errors There is, therefore, the possibility of unjustifiably rejecting a model because of errors in the data Thus, in the. .. rejecting a perfectly good model because of errors in the input data Differentiating between these sources of error may be very difficult, if only because the nature of both input errors and model structural errors may be nonstationary in time © 2007 by Taylor & Francis Group, LLC 142 Modeling Phosphorus in the Environment Thus, there is a question as to whether the use of data assimilation can, in a... 25 200 -1 MRP conc quantile deviation (mg l ) 0 (d) -2 00 5th th 25th 50th 75th 95 -4 00 -6 00 -8 00 -1 000 -1 200 -1 400 -1 60 0 0 5 10 15 20 25 Timestep FIGURE 6. 4 (a) Likelihood-weighed percentiles of simulated concentration compared to the stringent fuzzy rejection criteria for the study period from 1994 to 1995 (see legend) (b) Histograms of simulated concentration compared to the stringent (full line) and . Francis Group, LLC 132 Modeling Phosphorus in the Environment 6. 1 SOURCES OF UNCERTAINTY IN MODELING P TRANSPORT TO STREAM CHANNELS The starting point for this contribution is the extensive review. uncertainty exists in the input and boundary condition data used to drive a model. Such uncertainties include measurement errors in assess- ing the inputs at the measurement scale, together with interpolation. (1) 0. 36 (2) 0.075 ( 3-8 ) 1 × 10 − 26 (9) 6. 0(1) 1.92 (2) 0.4 ( 3-8 ) 1 × 10 − 26 (9) mm hr −1 THETAINI Initial soil moisture content 38.0 (1) 35 .6 ( 2-3 ) 29.0 (4) 32.2 (5) 32 .6 (6) 32.9 (7) 32.9 (8) 36. 3
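The discrete Bayes conditioning of Equations (6.9) and (6.10) can be sketched numerically for the learning process described in Section 6.6: prior weights on a set of competing model hypotheses are multiplied by the likelihood of each hypothesis predicting the new evidence and renormalized, and the same update is repeated each time new data arrive. The hypotheses, likelihood values, and evidence periods below are invented for illustration; they are not from the chapter's application.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Discrete Bayes update (Equation 6.10): the posterior weight of each
    hypothesis H_k is prior * L(E|H_k), renormalized over all hypotheses."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Three competing model hypotheses, initially judged equally plausible.
weights = np.array([1.0 / 3, 1.0 / 3, 1.0 / 3])

# Hypothetical likelihoods L(E|H_k) of each hypothesis predicting the
# evidence in two successive observation periods (data assimilation).
evidence_periods = [
    np.array([0.6, 0.3, 0.1]),
    np.array([0.7, 0.2, 0.1]),
]

for L in evidence_periods:
    weights = bayes_update(weights, L)
    # first period -> weights [0.6, 0.3, 0.1];
    # second period -> approximately [0.857, 0.122, 0.020]
    print(np.round(weights, 3))
```

Note that the update only redistributes weight among the hypotheses supplied; it cannot signal that all of them are poor, which is the motivation for the rejectionist framework discussed in Section 6.8.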
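The rejection criteria and likelihood-weighted percentiles referred to above can also be illustrated with a small Monte Carlo sketch. Everything here (the observed series, the ±20% limits of acceptability, the triangular fuzzy score, and the candidate runs) is an invented, minimal stand-in for the MACRO application, not the study itself: runs outside the limits at any time step are rejected as nonbehavioral, the rest receive fuzzy likelihood weights, and prediction percentiles are computed from the weighted behavioral ensemble.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed series with limits of acceptability (here +/- 20%).
obs = np.array([1.0, 2.0, 1.5, 3.0, 2.5])
o_min, o_max = 0.8 * obs, 1.2 * obs

# Monte Carlo runs standing in for simulations with different parameter sets.
n_runs = 5000
sims = (obs * rng.uniform(0.6, 1.4, size=(n_runs, 1))
        + rng.normal(0.0, 0.05, (n_runs, obs.size)))

# Behavioral = inside the limits at every time step; all others are rejected.
behavioral = np.all((sims >= o_min) & (sims <= o_max), axis=1)
kept = sims[behavioral]

# Triangular fuzzy score per time step (1 at the observation, 0 at the
# limits), combined multiplicatively and renormalized as likelihood weights.
half_width = 0.2 * obs
score = np.clip(1.0 - np.abs(kept - obs) / half_width, 0.0, 1.0)
weights = score.prod(axis=1)
weights /= weights.sum()

def weighted_percentile(values, weights, q):
    """Percentile of `values` under `weights` via the weighted empirical CDF."""
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return values[order][np.searchsorted(cdf, q / 100.0)]

# Likelihood-weighted 5th/50th/95th percentiles for the first time step.
for q in (5, 50, 95):
    print(q, weighted_percentile(kept[:, 0], weights, q))
```

Because percentiles are taken only over behavioral runs, they necessarily lie inside the limits of acceptability; as the chapter notes, they may still occupy only part of that range, which is itself diagnostic information about model bias.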
