Practical Guide to Chemometrics, Second Edition (Gemperline, ed.)

Document information

... statistical tools can be used to determine whether the stated hypothesis is found to be true. One can consider the application of statistical tests and chemometric tools to be somewhat akin to torture—if... The remaining chapters of the book introduce some of the advanced topics of chemometrics. The coverage is fairly comprehensive, ...

Posted: 07/07/2018, 09:27

Table of Contents

  • PRACTICAL GUIDE TO CHEMOMETRICS, SECOND EDITION

    • Preface

    • Editor

    • Contributors

    • Contents

  • Chapter 1: Introduction to Chemometrics

    • CONTENTS

    • 1.1 CHEMICAL MEASUREMENTS — A BASIS FOR DECISION MAKING

    • 1.2 CHEMICAL MEASUREMENTS — THE THREE-LEGGED PLATFORM

    • 1.3 CHEMOMETRICS

    • 1.4 HOW TO USE THIS BOOK

      • 1.4.1 SOFTWARE APPLICATIONS

    • 1.5 GENERAL READING ON CHEMOMETRICS

      • JOURNALS

      • BOOKS

    • REFERENCES

  • Chapter 2: Statistical Evaluation of Data

    • CONTENTS

    • INTRODUCTION

    • 2.1 SOURCES OF ERROR

      • 2.1.1 SOME COMMON TERMS

    • 2.2 PRECISION AND ACCURACY

    • 2.3 PROPERTIES OF THE NORMAL DISTRIBUTION

    • 2.4 SIGNIFICANCE TESTING

      • 2.4.1 THE F-TEST FOR COMPARISON OF VARIANCE (PRECISION)

      • 2.4.2 THE STUDENT T-TEST

      • 2.4.3 ONE-TAILED OR TWO-TAILED TESTS

      • 2.4.4 COMPARISON OF A SAMPLE MEAN WITH A CERTIFIED VALUE

      • 2.4.5 COMPARISON OF THE MEANS FROM TWO SAMPLES

      • 2.4.6 COMPARISON OF TWO METHODS WITH DIFFERENT TEST OBJECTS OR SPECIMENS

    • 2.5 ANALYSIS OF VARIANCE

      • 2.5.1 ANOVA TO TEST FOR DIFFERENCES BETWEEN MEANS

      • 2.5.2 THE WITHIN-SAMPLE VARIATION (WITHIN-TREATMENT VARIATION)

      • 2.5.3 BETWEEN-SAMPLE VARIATION (BETWEEN-TREATMENT VARIATION)

      • 2.5.4 ANALYSIS OF RESIDUALS

    • 2.6 OUTLIERS

    • 2.7 ROBUST ESTIMATES OF CENTRAL TENDENCY AND SPREAD

    • 2.8 SOFTWARE

      • 2.8.1 ANOVA USING EXCEL

    • RECOMMENDED READING

    • REFERENCES

  • Chapter 3: Sampling Theory, Distribution Functions, and the Multivariate Normal Distribution

    • CONTENTS

    • 3.1 SAMPLING AND SAMPLING DISTRIBUTIONS

      • 3.1.1 THE NORMAL DISTRIBUTION

      • 3.1.2 STANDARD NORMAL DISTRIBUTION

    • 3.2 CENTRAL LIMIT THEOREM

      • 3.2.1 IMPLICATIONS OF THE CENTRAL LIMIT THEOREM

    • 3.3 SMALL SAMPLE DISTRIBUTIONS

      • 3.3.1 THE T-DISTRIBUTION

      • 3.3.2 CHI-SQUARE DISTRIBUTION

    • 3.4 UNIVARIATE HYPOTHESIS TESTING

      • 3.4.1 INFERENCES ABOUT MEANS

      • 3.4.2 INFERENCES ABOUT VARIANCE AND THE F-DISTRIBUTION

    • 3.5 THE MULTIVARIATE NORMAL DISTRIBUTION

      • 3.5.1 GENERALIZED OR MAHALANOBIS DISTANCES

      • 3.5.2 THE VARIANCE–COVARIANCE MATRIX

      • 3.5.3 ESTIMATION OF POPULATION PARAMETERS FROM SMALL SAMPLES

      • 3.5.4 COMMENTS ON ASSUMPTIONS

      • 3.5.5 GENERALIZED SAMPLE VARIANCE

      • 3.5.6 GRAPHICAL ILLUSTRATION OF SELECTED BIVARIATE NORMAL DISTRIBUTIONS

      • 3.5.7 CHI-SQUARE DISTRIBUTION

    • 3.6 HYPOTHESIS TEST FOR COMPARISON OF MULTIVARIATE MEANS

    • 3.7 EXAMPLE: MULTIVARIATE DISTANCES

      • 3.7.1 STEP 1: GRAPHICAL REVIEW OF SMX.MAT DATA FILE

      • 3.7.2 STEP 2: SELECTION OF VARIABLES (WAVELENGTHS)

      • 3.7.3 STEP 3: VIEW HISTOGRAMS OF SELECTED VARIABLES

      • 3.7.4 STEP 4: COMPUTE THE TRAINING SET MEAN AND VARIANCE–COVARIANCE MATRIX

      • 3.7.5 STEP 5: CALCULATE MAHALANOBIS DISTANCES AND PROBABILITY DENSITIES

      • 3.7.6 STEP 6: FIND “ACCEPTABLE” AND “UNACCEPTABLE” OBJECTS

    • RECOMMENDED READING

    • REFERENCES

  • Chapter 4: Principal Component Analysis

    • CONTENTS

    • 4.1 INTRODUCTION

    • 4.2 SPECTROSCOPIC-CHROMATOGRAPHIC DATA

      • 4.2.1 BASIS VECTORS

    • 4.3 THE PRINCIPAL COMPONENT MODEL

      • 4.3.1 EIGENVECTORS AND EIGENVALUES

      • 4.3.2 THE SINGULAR-VALUE DECOMPOSITION

      • 4.3.3 ALTERNATIVE FORMULATIONS OF THE PRINCIPAL COMPONENT MODEL

    • 4.4 PREPROCESSING OPTIONS

      • 4.4.1 MEAN CENTERING

      • 4.4.2 VARIANCE SCALING

      • 4.4.3 BASELINE CORRECTION

      • 4.4.4 SMOOTHING AND FILTERING

      • 4.4.5 FIRST AND SECOND DERIVATIVES

      • 4.4.6 NORMALIZATION

      • 4.4.7 MULTIPLICATIVE SCATTER CORRECTION (MSC) AND STANDARD NORMAL VARIATE (SNV) TRANSFORMS

    • 4.5 PCA DATA EXPLORATION PROCEDURE

    • 4.6 INFLUENCING FACTORS

      • 4.6.1 VARIANCE AND RESIDUAL VARIANCE

      • 4.6.2 DISTRIBUTION OF ERROR IN EIGENVALUES

      • 4.6.3 F-TEST FOR DETERMINING THE NUMBER OF FACTORS

    • 4.7 BASIS VECTORS

      • 4.7.1 CLUSTERING AND CLASSIFICATION WITH PCA SCORE PLOTS

    • 4.8 RESIDUAL SPECTRA

      • 4.8.1 RESIDUAL VARIANCE ANALYSIS

    • 4.9 CONCLUSIONS

    • RECOMMENDED READING

    • REFERENCES

  • Chapter 5: Calibration

    • CONTENTS

    • 5.1 DATA SETS

      • 5.1.1 NEAR INFRARED SPECTROSCOPY

      • 5.1.2 FUNDAMENTAL MODES OF VIBRATION, OVERTONES, AND COMBINATIONS

      • 5.1.3 WATER–METHANOL MIXTURES

      • 5.1.4 SOLVENT INTERACTIONS

    • 5.2 INTRODUCTION TO CALIBRATION

      • 5.2.1 UNIVARIATE CALIBRATION

      • 5.2.2 NONZERO INTERCEPTS

      • 5.2.3 MULTIVARIATE CALIBRATION

      • 5.2.4 CURVILINEAR CALIBRATION

      • 5.2.5 SELECTION OF CALIBRATION AND VALIDATION SAMPLES

      • 5.2.6 MEASUREMENT ERROR AND MEASURES OF PREDICTION ERROR

    • 5.3 A PRACTICAL CALIBRATION EXAMPLE

      • 5.3.1 GRAPHICAL SURVEY OF NIR WATER–METHANOL DATA

      • 5.3.2 UNIVARIATE CALIBRATION

        • 5.3.2.1 Without an Intercept Term

        • 5.3.2.2 With an Intercept Term

      • 5.3.3 MULTIVARIATE CALIBRATION

    • 5.4 STATISTICAL EVALUATION OF CALIBRATION MODELS OBTAINED BY LEAST SQUARES

      • 5.4.1 HYPOTHESIS TESTING

      • 5.4.2 PARTITIONING OF VARIANCE IN LEAST-SQUARES SOLUTIONS

      • 5.4.3 INTERPRETING REGRESSION ANOVA TABLES

      • 5.4.4 CONFIDENCE INTERVAL AND HYPOTHESIS TESTS FOR REGRESSION COEFFICIENTS

      • 5.4.5 PREDICTION CONFIDENCE INTERVALS

      • 5.4.6 LEVERAGE AND INFLUENCE

      • 5.4.7 MODEL DEPARTURES AND OUTLIERS

      • 5.4.8 COEFFICIENT OF DETERMINATION AND MULTIPLE CORRELATION COEFFICIENT

      • 5.4.9 SENSITIVITY AND LIMIT OF DETECTION

        • 5.4.9.1 Sensitivity

        • 5.4.9.2 Limit of Detection

          • 5.4.9.2.1 Univariate Decision Limit

          • 5.4.9.2.2 Univariate Detection Limit

          • 5.4.9.2.3 Determination Limit

          • 5.4.9.2.4 Multivariate Detection Limit

      • 5.4.10 INTERFERENCE EFFECTS AND SELECTIVITY

    • 5.5 VARIABLE SELECTION

      • 5.5.1 FORWARD SELECTION

      • 5.5.2 EFROYMSON’S STEPWISE REGRESSION ALGORITHM

        • 5.5.2.1 Variable-Addition Step

        • 5.5.2.2 Variable-Deletion Step

        • 5.5.2.3 Convergence of Algorithm

      • 5.5.3 BACKWARD ELIMINATION

      • 5.5.4 SEQUENTIAL-REPLACEMENT ALGORITHMS

      • 5.5.5 ALL POSSIBLE SUBSETS

      • 5.5.6 SIMULATED ANNEALING AND GENETIC ALGORITHM

      • 5.5.7 RECOMMENDATIONS AND PRECAUTIONS

    • 5.6 BIASED METHODS OF CALIBRATION

      • 5.6.1 PRINCIPAL COMPONENT REGRESSION

        • 5.6.1.1 Basis Vectors

        • 5.6.1.2 Mathematical Procedures

          • 5.6.1.2.1 Calibration Steps

          • 5.6.1.2.2 Unknown Prediction Steps

        • 5.6.1.3 Number of Basis Vectors

        • 5.6.1.4 Example PCR Results

      • 5.6.2 PARTIAL LEAST SQUARES

        • 5.6.2.1 Mathematical Procedure

        • 5.6.2.2 Number of Basis Vectors Selection

        • 5.6.2.3 Comparison with PCR

      • 5.6.3 A FEW OTHER CALIBRATION METHODS

        • 5.6.3.1 Common Basis Vectors and a Generic Model

      • 5.6.4 REGULARIZATION

      • 5.6.5 EXAMPLE REGULARIZATION RESULTS

    • 5.7 STANDARD ADDITION METHOD

      • 5.7.1 UNIVARIATE STANDARD ADDITION METHOD

      • 5.7.2 MULTIVARIATE STANDARD ADDITION METHOD

    • 5.8 INTERNAL STANDARDS

    • 5.9 PREPROCESSING TECHNIQUES

    • 5.10 CALIBRATION STANDARDIZATION

      • 5.10.1 STANDARDIZATION OF PREDICTED VALUES

      • 5.10.2 STANDARDIZATION OF INSTRUMENT RESPONSE

      • 5.10.3 STANDARDIZATION WITH PREPROCESSING TECHNIQUES

    • 5.11 SOFTWARE

    • RECOMMENDED READING

    • REFERENCES

  • Chapter 6: Robust Calibration

    • CONTENTS

    • 6.1 INTRODUCTION

    • 6.2 LOCATION AND SCALE ESTIMATION

      • 6.2.1 THE MEAN AND THE STANDARD DEVIATION

      • 6.2.2 THE MEDIAN AND THE MEDIAN ABSOLUTE DEVIATION

      • 6.2.3 OTHER ROBUST ESTIMATORS OF LOCATION AND SCALE

    • 6.3 LOCATION AND COVARIANCE ESTIMATION IN LOW DIMENSIONS

      • 6.3.1 THE EMPIRICAL MEAN AND COVARIANCE MATRIX

      • 6.3.2 THE ROBUST MCD ESTIMATOR

      • 6.3.3 OTHER ROBUST ESTIMATORS OF LOCATION AND COVARIANCE

    • 6.4 LINEAR REGRESSION IN LOW DIMENSIONS

      • 6.4.1 LINEAR REGRESSION WITH ONE RESPONSE VARIABLE

        • 6.4.1.1 The Multiple Linear Regression Model

        • 6.4.1.2 The Classical Least-Squares Estimator

        • 6.4.1.3 The Robust LTS Estimator

        • 6.4.1.4 An Outlier Map

        • 6.4.1.5 Other Robust Regression Estimators

      • 6.4.2 LINEAR REGRESSION WITH SEVERAL RESPONSE VARIABLES

        • 6.4.2.1 The Multivariate Linear Regression Model

        • 6.4.2.2 The Robust MCD-Regression Estimator

        • 6.4.2.3 An Example

    • 6.5 PRINCIPAL COMPONENTS ANALYSIS

      • 6.5.1 CLASSICAL PCA

      • 6.5.2 ROBUST PCA BASED ON A ROBUST COVARIANCE ESTIMATOR

      • 6.5.3 ROBUST PCA BASED ON PROJECTION PURSUIT

      • 6.5.4 ROBUST PCA BASED ON PROJECTION PURSUIT AND THE MCD

      • 6.5.5 AN OUTLIER MAP

      • 6.5.6 SELECTING THE NUMBER OF PRINCIPAL COMPONENTS

      • 6.5.7 AN EXAMPLE

    • 6.6 PRINCIPAL COMPONENT REGRESSION

      • 6.6.1 CLASSICAL PCR

      • 6.6.2 ROBUST PCR

      • 6.6.3 MODEL CALIBRATION AND VALIDATION

      • 6.6.4 AN EXAMPLE

    • 6.7 PARTIAL LEAST-SQUARES REGRESSION

      • 6.7.1 CLASSICAL PLSR

      • 6.7.2 ROBUST PLSR

      • 6.7.3 AN EXAMPLE

    • 6.8 CLASSIFICATION

      • 6.8.1 CLASSIFICATION IN LOW DIMENSIONS

        • 6.8.1.1 Classical and Robust Discriminant Rules

        • 6.8.1.2 Evaluating the Discriminant Rules

        • 6.8.1.3 An Example

      • 6.8.2 CLASSIFICATION IN HIGH DIMENSIONS

    • 6.9 SOFTWARE AVAILABILITY

    • REFERENCES

  • Chapter 7: Kinetic Modeling of Multivariate Measurements with Nonlinear Regression

    • CONTENTS

    • 7.1 INTRODUCTION

    • 7.2 MULTIVARIATE DATA, BEER-LAMBERT’S LAW, MATRIX NOTATION

    • 7.3 CALCULATION OF THE CONCENTRATION PROFILES: CASE I, SIMPLE MECHANISMS

    • 7.4 MODEL-BASED NONLINEAR FITTING

      • 7.4.1 DIRECT METHODS, SIMPLEX

      • 7.4.2 NONLINEAR FITTING USING EXCEL’S SOLVER

      • 7.4.3 LINEAR AND NONLINEAR PARAMETERS

      • 7.4.4 NEWTON-GAUSS-LEVENBERG/MARQUARDT (NGL/M)

      • 7.4.5 NONWHITE NOISE

    • 7.5 CALCULATION OF THE CONCENTRATION PROFILES: CASE II, COMPLEX MECHANISMS

      • 7.5.1 FOURTH-ORDER RUNGE-KUTTA METHOD IN EXCEL

      • 7.5.2 INTERESTING KINETIC EXAMPLES

        • 7.5.2.1 Autocatalysis

        • 7.5.2.2 Zeroth-Order Reaction

        • 7.5.2.3 Lotka-Volterra (Sheep and Wolves)

        • 7.5.2.4 The Belousov-Zhabotinsky (BZ) Reaction

    • 7.6 CALCULATION OF THE CONCENTRATION PROFILES: CASE III, VERY COMPLEX MECHANISMS

    • 7.7 RELATED ISSUES

      • 7.7.1 MEASUREMENT TECHNIQUES

      • 7.7.2 MODEL PARSER

      • 7.7.3 FLOW REACTORS

      • 7.7.4 GLOBALIZATION OF THE ANALYSIS

      • 7.7.5 SOFT-MODELING METHODS

      • 7.7.6 OTHER METHODS

    • APPENDIX

    • REFERENCES

  • Chapter 8: Response-Surface Modeling and Experimental Design

    • CONTENTS

    • 8.1 INTRODUCTION

    • 8.2 RESPONSE-SURFACE MODELING

      • 8.2.1 THE GENERAL SCHEME OF RSM

      • 8.2.2 FACTOR SPACES

        • 8.2.2.1 Process Factor Spaces

        • 8.2.2.2 Mixture Factor Spaces

        • 8.2.2.3 Simplex-Lattice Designs

          • 8.2.2.3.1 Advantages of the Simplex-Lattice Designs

          • 8.2.2.3.2 Disadvantages of the Simplex-Lattice Designs

          • 8.2.2.3.3 Simplex-Lattice Designs, Example

        • 8.2.2.4 Simplex-Centroid Designs

          • 8.2.2.4.1 Advantages of Simplex-Centroid Designs

          • 8.2.2.4.2 Disadvantages of Simplex-Centroid Designs

          • 8.2.2.4.3 Simplex-Centroid Design, Example

        • 8.2.2.5 Constrained Mixture Spaces

        • 8.2.2.6 Mixture+Process Factor Spaces

      • 8.2.3 SOME REGRESSION-ANALYSIS-RELATED NOTATION

    • 8.3 ONE-VARIABLE-AT-A-TIME VS. OPTIMAL DESIGN

      • 8.3.1 BIVARIATE (MULTIVARIATE) EXAMPLE

      • 8.3.2 ADVANTAGES OF THE ONE-VARIABLE-AT-A-TIME APPROACH

      • 8.3.3 DISADVANTAGES

    • 8.4 SYMMETRIC OPTIMAL DESIGNS

      • 8.4.1 TWO-LEVEL FULL FACTORIAL DESIGNS

        • 8.4.1.1 Advantages of Factorial Designs

        • 8.4.1.2 Disadvantages of Factorial Designs

      • 8.4.2 THREE OR MORE LEVELS IN FULL FACTORIAL DESIGNS

      • 8.4.3 CENTRAL COMPOSITE DESIGNS

    • 8.5 THE TAGUCHI EXPERIMENTAL DESIGN APPROACH

    • 8.6 NONSYMMETRIC OPTIMAL DESIGNS

      • 8.6.1 OPTIMALITY CRITERIA

      • 8.6.2 OPTIMAL VS. EQUALLY DISTANCED DESIGNS

      • 8.6.3 DESIGN OPTIMALITY AND DESIGN EFFICIENCY CRITERIA

        • 8.6.3.1 Design Measures

        • 8.6.3.2 D-Optimality and D-Efficiency

        • 8.6.3.3 G-Optimality and G-Efficiency

        • 8.6.3.4 A-Optimality

        • 8.6.3.5 E-Optimality

    • 8.7 ALGORITHMS FOR THE SEARCH OF REALIZABLE OPTIMAL EXPERIMENTAL DESIGNS

      • 8.7.1 EXACT (OR N-POINT) D-OPTIMAL DESIGNS

        • 8.7.1.1 Fedorov’s Algorithm

        • 8.7.1.2 Wynn-Mitchell and van Schalkwyk Algorithms

        • 8.7.1.3 DETMAX Algorithm

        • 8.7.1.4 The MD Galil and Kiefer’s Algorithm

      • 8.7.2 SEQUENTIAL D-OPTIMAL DESIGNS

        • 8.7.2.1 Example

      • 8.7.3 SEQUENTIAL COMPOSITE D-OPTIMAL DESIGNS

    • 8.8 OFF-THE-SHELF SOFTWARE AND CATALOGS OF DESIGNS OF EXPERIMENTS

      • 8.8.1 OFF-THE-SHELF SOFTWARE PACKAGES

        • 8.8.1.1 MATLAB

        • 8.8.1.2 Design Expert

        • 8.8.1.3 Other Packages

      • 8.8.2 CATALOGS OF EXPERIMENTAL DESIGNS

    • 8.9 EXAMPLE: THE APPLICATION OF DOE IN MULTIVARIATE CALIBRATION

      • 8.9.1 CONSTRUCTION OF A CALIBRATION SAMPLE SET

        • 8.9.1.1 Identifying the Number of Significant Factors

        • 8.9.1.2 Identifying the Type of the Regression Model

        • 8.9.1.3 Defining the Bounds of the Factor Space

        • 8.9.1.4 Estimating Extinction Coefficients

      • 8.9.2 IMPROVING QUALITY FROM HISTORICAL DATA

        • 8.9.2.1 Improving the Numerical Stability of the Data Set

        • 8.9.2.2 Prediction Ability

    • 8.10 CONCLUSION

    • REFERENCES

  • Chapter 9: Classification and Pattern Recognition

    • CONTENTS

    • 9.1 INTRODUCTION

    • 9.2 DATA PREPROCESSING

    • 9.3 MAPPING AND DISPLAY

    • 9.4 CLUSTERING

    • 9.5 CLASSIFICATION

      • 9.5.1 K-NEAREST NEIGHBOR

      • 9.5.2 PARTIAL LEAST SQUARES

      • 9.5.3 SIMCA

    • 9.6 PRACTICAL CONSIDERATIONS

    • 9.7 APPLICATIONS OF PATTERN-RECOGNITION TECHNIQUES

      • 9.7.1 ARCHAEOLOGICAL ARTIFACTS

      • 9.7.2 FUEL SPILL IDENTIFICATION

      • 9.7.3 SORTING PLASTICS FOR RECYCLING

      • 9.7.4 TAXONOMY BASED ON CHEMICAL CONSTITUTION

    • REFERENCES

  • Chapter 10: Signal Processing and Digital Filtering

    • CONTENTS

    • 10.1 INTRODUCTION

    • 10.2 NOISE REMOVAL AND THE PROBLEM OF PRIOR INFORMATION

      • 10.2.1 SIGNAL ESTIMATION AND SIGNAL DETECTION

    • 10.3 REEXPRESSING DATA IN ALTERNATE BASES TO ANALYZE STRUCTURE

      • 10.3.1 PROJECTION-BASED SIGNAL ANALYSIS AS SIGNAL PROCESSING

    • 10.4 FREQUENCY-DOMAIN SIGNAL PROCESSING

      • 10.4.1 THE FOURIER TRANSFORM

      • 10.4.2 THE SAMPLING THEOREM AND ALIASING

      • 10.4.3 THE BANDWIDTH-LIMITED, DISCRETE FOURIER TRANSFORM

      • 10.4.4 PROPERTIES OF THE FOURIER TRANSFORM

    • 10.5 FREQUENCY DOMAIN SMOOTHING

      • 10.5.1 SMOOTHING

      • 10.5.2 SMOOTHING WITH DESIGNER TRANSFER FUNCTIONS

    • 10.6 TIME-DOMAIN FILTERING AND SMOOTHING

      • 10.6.1 SMOOTHING

      • 10.6.2 FILTERING

      • 10.6.3 POLYNOMIAL MOVING-AVERAGE (SAVITZKY-GOLAY) FILTERS

    • 10.7 WAVELET-BASED SIGNAL PROCESSING

      • 10.7.1 THE WAVELET FUNCTION

      • 10.7.2 TIME AND FREQUENCY LOCALIZATIONS OF WAVELET FUNCTIONS

      • 10.7.3 THE DISCRETE WAVELET TRANSFORM

      • 10.7.4 SMOOTHING AND DENOISING WITH WAVELETS

    • REFERENCES

    • FURTHER READING

  • Chapter 11: Multivariate Curve Resolution

    • CONTENTS

    • 11.1 INTRODUCTION: GENERAL CONCEPT, AMBIGUITIES, RESOLUTION THEOREMS

    • 11.2 HISTORICAL BACKGROUND

    • 11.3 LOCAL RANK AND RESOLUTION: EVOLVING FACTOR ANALYSIS AND RELATED TECHNIQUES

    • 11.4 NONITERATIVE RESOLUTION METHODS

      • 11.4.1 WINDOW FACTOR ANALYSIS (WFA)

      • 11.4.2 OTHER TECHNIQUES: SUBWINDOW FACTOR ANALYSIS (SFA) AND HEURISTIC EVOLVING LATENT PROJECTIONS (HELP)

    • 11.5 ITERATIVE METHODS

      • 11.5.1 GENERATION OF INITIAL ESTIMATES

      • 11.5.2 CONSTRAINTS, DEFINITION, CLASSIFICATION: EQUALITY AND INEQUALITY CONSTRAINTS BASED ON CHEMICAL OR MATHEMATICAL PROPERTIES

        • 11.5.2.1 Nonnegativity

        • 11.5.2.2 Unimodality

        • 11.5.2.3 Closure

        • 11.5.2.4 Known Profiles

        • 11.5.2.5 Hard-Modeling Constraints: Physicochemical Models

        • 11.5.2.6 Local-Rank Constraints, Selectivity, and Zero-Concentration Windows

      • 11.5.3 ITERATIVE TARGET TRANSFORMATION FACTOR ANALYSIS (ITTFA)

      • 11.5.4 MULTIVARIATE CURVE RESOLUTION-ALTERNATING LEAST SQUARES (MCR-ALS)

    • 11.6 EXTENSION OF SELF-MODELING CURVE RESOLUTION TO MULTIWAY DATA: MCR-ALS SIMULTANEOUS ANALYSIS OF MULTIPLE CORRELATED DATA MATRICES

    • 11.7 UNCERTAINTY IN RESOLUTION RESULTS, RANGE OF FEASIBLE SOLUTIONS, AND ERROR IN RESOLUTION

    • 11.8 APPLICATIONS

      • 11.8.1 BIOCHEMICAL PROCESSES

        • 11.8.1.1 Study of Changes in the Protein Secondary Structure

        • 11.8.1.2 Study of Changes in the Tertiary Structure

        • 11.8.1.3 Global Description of the Protein Folding Process

      • 11.8.2 ENVIRONMENTAL DATA

      • 11.8.3 SPECTROSCOPIC IMAGES

    • 11.9 SOFTWARE

    • REFERENCES

  • Chapter 12: Three-Way Calibration with Hyphenated Data

    • CONTENTS

    • 12.1 INTRODUCTION

    • 12.2 BACKGROUND

    • 12.3 NOMENCLATURE OF THREE-WAY DATA

    • 12.4 THREE-WAY MODELS

    • 12.5 EXAMPLES

    • 12.6 RANK ANNIHILATION METHODS

      • 12.6.1 RANK ANNIHILATION FACTOR ANALYSIS

        • 12.6.1.1 RAFA Application

      • 12.6.2 GENERALIZED RANK ANNIHILATION METHOD

        • 12.6.2.1 GRAM Application

      • 12.6.3 DIRECT TRILINEAR DECOMPOSITION

        • 12.6.3.1 DTLD Application

    • 12.7 ALTERNATING LEAST-SQUARES METHODS

      • 12.7.1 PARAFAC / CANDECOMP

        • 12.7.1.1 Tuckals

        • 12.7.1.2 Solution Constraints

        • 12.7.1.3 PARAFAC Application

    • 12.8 EXTENSIONS OF THREE-WAY METHODS

    • 12.9 FIGURES OF MERIT

    • 12.10 CAVEATS

    • REFERENCES

    • APPENDIX 12.1 GRAM ALGORITHM

    • APPENDIX 12.2 DTLD ALGORITHM

    • APPENDIX 12.3 PARAFAC ALGORITHM

  • Chapter 13: Future Trends in Chemometrics

    • CONTENTS

    • 13.1 HISTORICAL DEVELOPMENT OF CHEMOMETRICS

      • 13.1.1 CHEMOMETRICS — A MATURING DISCIPLINE

    • 13.2 REVIEWS OF CHEMOMETRICS AND FUTURE TRENDS

      • 13.2.1 PROCESS ANALYTICAL CHEMISTRY

      • 13.2.2 SPECTROSCOPY

      • 13.2.3 FOOD AND FEED CHEMISTRY

      • 13.2.4 OTHER INTERESTING APPLICATION AREAS

    • 13.3 DRIVERS OF GROWTH IN CHEMOMETRICS

      • 13.3.1 THE CHALLENGE OF LARGE DATA SETS

      • 13.3.2 CHEMOMETRICS AT THE INTERFACE OF CHEMICAL AND BIOLOGICAL SCIENCES

    • 13.4 CONCLUDING REMARKS

    • REFERENCES
