11 IMAGE RESTORATION MODELS

Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt. Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic).

Image restoration may be viewed as an estimation process in which operations are performed on an observed or measured image field to estimate the ideal image field that would be observed if no image degradation were present in an imaging system. Mathematical models are described in this chapter for image degradation in general classes of imaging systems. These models are then utilized in subsequent chapters as a basis for the development of image restoration techniques.

11.1. GENERAL IMAGE RESTORATION MODELS

In order to design a digital image restoration system effectively, it is necessary to characterize quantitatively the image degradation effects of the physical imaging system, the image digitizer, and the image display. Basically, the procedure is to model the image degradation effects and then perform operations to undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digitizer, and display to determine their response for an arbitrary image field. In some instances it will be possible to model the system response deterministically, while in other situations it will only be possible to determine the system response in a stochastic sense. The a posteriori modeling approach is to develop the model for the image degradations based on measurements of a particular image to be restored. Basically, these two approaches differ only in the manner in which information is gathered to describe the character of the image degradation.

Figure 11.1-1 shows a general model of a digital imaging system and restoration process. In the model, a continuous image light distribution $C(x, y, t, \lambda)$ dependent on spatial coordinates (x, y), time (t), and spectral wavelength ($\lambda$) is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Potential degradations include diffraction in the optical system, sensor nonlinearities, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur, and geometric distortion. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging system produces a set of output image fields $F_O^{(i)}(x, y, t_j)$ at time instant $t_j$ described by the general relation

$F_O^{(i)}(x, y, t_j) = O_P\{C(x, y, t, \lambda)\}$   (11.1-1)

where $O_P\{\cdot\}$ represents a general operator that is dependent on the space coordinates (x, y), the time history (t), the wavelength ($\lambda$), and the amplitude of the light distribution (C). For a monochrome imaging system, there will only be a single output field, while for a natural color imaging system, $F_O^{(i)}(x, y, t_j)$ may denote the red, green, and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery may also involve several output bands of data.

FIGURE 11.1-1. Digital image restoration model.

In the general model of Figure 11.1-1, each observed image field $F_O^{(i)}(x, y, t_j)$ is digitized, following the techniques outlined in Part 3, to produce an array of image samples $F_S^{(i)}(m_1, m_2, t_j)$ at each time instant $t_j$. The output samples of the digitizer are related to the input observed field by

$F_S^{(i)}(m_1, m_2, t_j) = O_G\{F_O^{(i)}(x, y, t_j)\}$   (11.1-2)

where $O_G\{\cdot\}$ is an operator modeling the image digitization process.
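The degradation and digitization steps of Eqs. 11.1-1 and 11.1-2 can be illustrated numerically. The Python sketch below is not from the text; it assumes, purely for illustration, that $O_P\{\cdot\}$ is a space-invariant Gaussian blur, that $O_G\{\cdot\}$ is ideal point sampling on a coarser lattice, and that the wavelength and time dependence is suppressed.

```python
import numpy as np

# Sketch of Eqs. 11.1-1 and 11.1-2: a fine-grid scene C stands in for the
# continuous light distribution, O_P is assumed to be a space-invariant
# Gaussian blur, and O_G is assumed to be ideal point sampling.

def gaussian_psf(size, sigma):
    """Normalized Gaussian point-spread function on a size x size grid."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def degrade(scene, psf):
    """Apply O_P as a circular convolution with the PSF via the FFT."""
    padded = np.zeros_like(scene)
    k = psf.shape[0]
    padded[:k, :k] = psf
    # Roll the PSF so its center sits at the array origin.
    padded = np.roll(padded, shift=(-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(padded)))

def digitize(field, step):
    """Apply O_G as ideal point sampling with spatial period `step`."""
    return field[::step, ::step]

# Synthetic scene: a bright square on a dark background.
C = np.zeros((256, 256))
C[96:160, 96:160] = 1.0

F_O = degrade(C, gaussian_psf(31, sigma=3.0))   # observed field, Eq. 11.1-1
F_S = digitize(F_O, step=4)                     # sampled observation, Eq. 11.1-2
print(F_S.shape)                                # (64, 64)
```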
A digital image restoration system that follows produces an output array $F_K^{(i)}(k_1, k_2, t_j)$ by the transformation

$F_K^{(i)}(k_1, k_2, t_j) = O_R\{F_S^{(i)}(m_1, m_2, t_j)\}$   (11.1-3)

where $O_R\{\cdot\}$ represents the designed restoration operator. Next, the output samples of the digital restoration system are interpolated by the image display system to produce a continuous image estimate $\hat{F}_I^{(i)}(x, y, t_j)$. This operation is governed by the relation

$\hat{F}_I^{(i)}(x, y, t_j) = O_D\{F_K^{(i)}(k_1, k_2, t_j)\}$   (11.1-4)

where $O_D\{\cdot\}$ models the display transformation.

The function of the digital image restoration system is to compensate for degradations of the physical imaging system, the digitizer, and the image display system to produce an estimate of a hypothetical ideal image field that would be displayed if all physical elements were perfect. The perfect imaging system would produce an ideal image field modeled by

$F_I^{(i)}(x, y, t_j) = O_I\left\{\int_0^{\infty}\int_{t_j - T}^{t_j} C(x, y, t, \lambda)\, U_i(t, \lambda)\, dt\, d\lambda\right\}$   (11.1-5)

where $U_i(t, \lambda)$ is a desired temporal and spectral response function, T is the observation period, and $O_I\{\cdot\}$ is a desired point and spatial response function. Usually, it will not be possible to restore the observed image perfectly such that the output image field is identical to the ideal image field. The design objective of the image restoration processor is to minimize some error measure between $F_I^{(i)}(x, y, t_j)$ and $\hat{F}_I^{(i)}(x, y, t_j)$. The discussion here is limited, for the most part, to a consideration of techniques that minimize the mean-square error between the ideal and estimated image fields as defined by

$E_i = E\left\{\left[F_I^{(i)}(x, y, t_j) - \hat{F}_I^{(i)}(x, y, t_j)\right]^2\right\}$   (11.1-6)

where $E\{\cdot\}$ denotes the expectation operator. Often, it will be desirable to place side constraints on the error minimization, for example, to require that the image estimate be strictly positive if it is to represent light intensities that are positive.

Because the restoration process is to be performed digitally, it is often more convenient to restrict the error measure to discrete points on the ideal and estimated image fields. These discrete arrays are obtained by mathematical models of perfect image digitizers that produce the arrays

$F_I^{(i)}(n_1, n_2, t_j) = F_I^{(i)}(x, y, t_j)\, \delta(x - n_1\Delta,\, y - n_2\Delta)$   (11.1-7a)

$\hat{F}_I^{(i)}(n_1, n_2, t_j) = \hat{F}_I^{(i)}(x, y, t_j)\, \delta(x - n_1\Delta,\, y - n_2\Delta)$   (11.1-7b)

It is assumed that continuous image fields are sampled at a spatial period $\Delta$ satisfying the Nyquist criterion. Also, quantization error is assumed negligible. It should be noted that the processes indicated by the blocks of Figure 11.1-1 above the dashed division line represent mathematical modeling and are not physical operations performed on physical image fields and arrays. With this discretization of the continuous ideal and estimated image fields, the corresponding mean-square restoration error becomes

$E_i = E\left\{\left[F_I^{(i)}(n_1, n_2, t_j) - \hat{F}_I^{(i)}(n_1, n_2, t_j)\right]^2\right\}$   (11.1-8)

With the relationships of Figure 11.1-1 quantitatively established, the restoration problem may be formulated as follows: Given the sampled observation $F_S^{(i)}(m_1, m_2, t_j)$ expressed in terms of the image light distribution $C(x, y, t, \lambda)$, determine the transfer function $O_K\{\cdot\}$ that minimizes the error measure between $F_I^{(i)}(x, y, t_j)$ and $\hat{F}_I^{(i)}(x, y, t_j)$ subject to desired constraints.
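A minimal numerical sketch of the discrete error measure of Eq. 11.1-8 follows. It is not part of the text: the ideal field and its estimate are arbitrary test arrays, and the expectation $E\{\cdot\}$ is approximated by a spatial sample mean.

```python
import numpy as np

# Sketch of the discrete mean-square restoration error of Eq. 11.1-8.  The
# ideal field F_I and its estimate are placeholder arrays; in practice they
# come from the model chain of Figure 11.1-1.

rng = np.random.default_rng(0)

F_I = rng.random((64, 64))                             # ideal discrete field
F_I_hat = F_I + 0.05 * rng.standard_normal((64, 64))   # hypothetical estimate

# The expectation E{.} is approximated here by the spatial sample mean.
mse = np.mean((F_I - F_I_hat) ** 2)

# A strict-positivity side constraint (Section 11.1) can be imposed by
# clipping the estimate before the error is evaluated.
mse_positive = np.mean((F_I - np.clip(F_I_hat, 0.0, None)) ** 2)

print(f"MSE = {mse:.6f}, MSE with positivity constraint = {mse_positive:.6f}")
```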
There are no general solutions for the restoration problem as formulated above because of the complexity of the physical imaging system. To proceed further, it is necessary to be more specific about the type of degradation and the method of restoration. The following sections describe models for the elements of the generalized imaging system of Figure 11.1-1.

11.2. OPTICAL SYSTEMS MODELS

One of the major advances in the field of optics during the past 40 years has been the application of system concepts to optical imaging. Imaging devices consisting of lenses, mirrors, prisms, and so on, can be considered to provide a deterministic transformation of an input spatial light distribution to some output spatial light distribution. Also, the system concept can be extended to encompass the spatial propagation of light through free space or some dielectric medium.

In the study of geometric optics, it is assumed that light rays always travel in a straight-line path in a homogeneous medium. By this assumption, a bundle of rays passing through a clear aperture onto a screen produces a geometric light projection of the aperture. However, if the light distribution at the region between the light and dark areas on the screen is examined in detail, it is found that the boundary is not sharp. This effect is more pronounced as the aperture size is decreased. For a pinhole aperture, the entire screen appears diffusely illuminated. From a simplistic viewpoint, the aperture causes a bending of rays called diffraction.

Diffraction of light can be quantitatively characterized by considering light as electromagnetic radiation that satisfies Maxwell's equations. The formulation of a complete theory of optical imaging from the basic electromagnetic principles of diffraction theory is a complex and lengthy task. In the following, only the key points of the formulation are presented; details may be found in References 1 to 3.

Figure 11.2-1 is a diagram of a generalized optical imaging system. A point in the object plane at coordinate $(x_o, y_o)$ of intensity $I_o(x_o, y_o)$ radiates energy toward an imaging system characterized by an entrance pupil, exit pupil, and intervening system transformation. Electromagnetic waves emanating from the optical system are focused to a point $(x_i, y_i)$ on the image plane producing an intensity $I_i(x_i, y_i)$. The imaging system is said to be diffraction limited if the light distribution at the image plane produced by a point-source object consists of a converging spherical wave whose extent is limited only by the exit pupil. If the wavefront of the electromagnetic radiation emanating from the exit pupil is not spherical, the optical system is said to possess aberrations.

In most optical image formation systems, the optical radiation emitted by an object arises from light transmitted or reflected from an incoherent light source. The image radiation can often be regarded as quasimonochromatic in the sense that the spectral bandwidth of the image radiation detected at the image plane is small with respect to the center wavelength of the radiation. Under these joint assumptions, the imaging system of Figure 11.2-1 will respond as a linear system in terms of the intensity of its input and output fields.
FIGURE 11.2-1. Generalized optical imaging system.

The relationship between the image intensity and object intensity for the optical system can then be represented by the superposition integral equation

$I_i(x_i, y_i) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x_i, y_i;\, x_o, y_o)\, I_o(x_o, y_o)\, dx_o\, dy_o$   (11.2-1)

where $H(x_i, y_i; x_o, y_o)$ represents the image intensity response to a point source of light. Often, the intensity impulse response is space invariant and the input–output relationship is given by the convolution equation

$I_i(x_i, y_i) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x_i - x_o,\, y_i - y_o)\, I_o(x_o, y_o)\, dx_o\, dy_o$   (11.2-2)

In this case, the normalized Fourier transforms

$\mathcal{I}_o(\omega_x, \omega_y) = \frac{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_o(x_o, y_o) \exp\{-i(\omega_x x_o + \omega_y y_o)\}\, dx_o\, dy_o}{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_o(x_o, y_o)\, dx_o\, dy_o}$   (11.2-3a)

$\mathcal{I}_i(\omega_x, \omega_y) = \frac{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_i(x_i, y_i) \exp\{-i(\omega_x x_i + \omega_y y_i)\}\, dx_i\, dy_i}{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_i(x_i, y_i)\, dx_i\, dy_i}$   (11.2-3b)

of the object and image intensity fields are related by

$\mathcal{I}_i(\omega_x, \omega_y) = \mathcal{H}(\omega_x, \omega_y)\, \mathcal{I}_o(\omega_x, \omega_y)$   (11.2-4)

where $\mathcal{H}(\omega_x, \omega_y)$, which is called the optical transfer function (OTF), is defined by

$\mathcal{H}(\omega_x, \omega_y) = \frac{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x, y) \exp\{-i(\omega_x x + \omega_y y)\}\, dx\, dy}{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x, y)\, dx\, dy}$   (11.2-5)

The absolute value of the OTF, $|\mathcal{H}(\omega_x, \omega_y)|$, is known as the modulation transfer function (MTF) of the optical system.

The most common optical image formation system is a circular thin lens. Figure 11.2-2 illustrates the OTF for such a lens as a function of its degree of misfocus (1, p. 486; 4). For extreme misfocus, the OTF will actually become negative at some spatial frequencies. In this state, the lens will cause a contrast reversal: dark objects will appear light, and vice versa.

FIGURE 11.2-2. Cross section of transfer function of a lens. Numbers indicate degree of misfocus.

Earth's atmosphere acts as an imaging system for optical radiation traversing a path through the atmosphere. Normally, the index of refraction of the atmosphere remains relatively constant over the optical extent of an object, but in some instances atmospheric turbulence can produce a spatially variable index of refraction that leads to an effective blurring of any imaged object. An equivalent impulse response

$H(x, y) = K_1 \exp\{-(K_2 x^2 + K_3 y^2)^{5/6}\}$   (11.2-6)

where the $K_n$ are constants, has been predicted mathematically and verified by experimentation (5) for long-exposure image formation. For convenience in analysis, the exponent 5/6 is often replaced by unity to obtain a Gaussian-shaped impulse response model of the form

$H(x, y) = K \exp\left\{-\left(\frac{x^2}{2 b_x^2} + \frac{y^2}{2 b_y^2}\right)\right\}$   (11.2-7)

where K is an amplitude scaling constant and $b_x$ and $b_y$ are blur-spread factors.

Under the assumption that the impulse response of a physical imaging system is independent of spectral wavelength and time, the observed image field can be modeled by the superposition integral equation

$F_O^{(i)}(x, y, t_j) = O_C\left\{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda)\, H(x, y;\, \alpha, \beta)\, d\alpha\, d\beta\right\}$   (11.2-8)

where $O_C\{\cdot\}$ is an operator that models the spectral and temporal characteristics of the physical imaging system. If the impulse response is spatially invariant, the model reduces to the convolution integral equation

$F_O^{(i)}(x, y, t_j) = O_C\left\{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda)\, H(x - \alpha,\, y - \beta)\, d\alpha\, d\beta\right\}$   (11.2-9)
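The Gaussian impulse response of Eq. 11.2-7 and the normalized OTF definition of Eq. 11.2-5 can be evaluated on a discrete grid. The sketch below is illustrative only; the grid size, the blur-spread values, and the substitution of the discrete Fourier transform for the continuous transform are assumptions.

```python
import numpy as np

# Discrete evaluation of the Gaussian impulse response of Eq. 11.2-7 and of
# the normalized OTF of Eq. 11.2-5; the MTF is the magnitude of the OTF.

N = 128                        # samples per axis (assumed grid size)
b_x, b_y = 2.0, 3.0            # blur-spread factors
K = 1.0                        # amplitude scaling constant

ax = np.arange(N) - N // 2
x, y = np.meshgrid(ax, ax)
H_xy = K * np.exp(-(x**2 / (2 * b_x**2) + y**2 / (2 * b_y**2)))   # Eq. 11.2-7

# Eq. 11.2-5: Fourier transform of H(x, y) divided by its integral, so the
# OTF equals unity at zero spatial frequency.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(H_xy))) / H_xy.sum()
mtf = np.abs(otf)

print(mtf[N // 2, N // 2])     # 1.0 at (omega_x, omega_y) = (0, 0)
```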
11.3. PHOTOGRAPHIC PROCESS MODELS

There are many different types of materials and chemical processes that have been utilized for photographic image recording. No attempt is made here either to survey the field of photography or to deeply investigate the physics of photography. References 6 to 8 contain such discussions. Rather, the attempt here is to develop mathematical models of the photographic process in order to characterize quantitatively the photographic components of an imaging system.

11.3.1. Monochromatic Photography

The most common material for photographic image recording is silver halide emulsion, depicted in Figure 11.3-1. In this material, silver halide grains are suspended in a transparent layer of gelatin that is deposited on a glass, acetate, or paper backing. If the backing is transparent, a transparency can be produced, and if the backing is a white paper, a reflection print can be obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of the grain is converted to metallic silver. A development center is then said to exist in the grain. In the development process, a chemical developing agent causes grains with partial silver content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing unexposed grains.

FIGURE 11.3-1. Cross section of silver halide emulsion.

The photographic process described above is called a nonreversal process. It produces a negative image in the sense that the silver density is inversely proportional to the exposing light. A positive reflection print of an image can be obtained in a two-stage process with nonreversal materials. First, a negative transparency is produced, and then the negative transparency is illuminated to expose negative reflection print paper. The resulting silver density on the developed paper is then proportional to the light intensity that exposed the negative transparency.

A positive transparency of an image can be obtained with a reversal type of film. This film is exposed and undergoes a first development similar to that of a nonreversal film. At this stage in the photographic process, all grains that have been exposed to light are converted completely to metallic silver. In the next step, the metallic silver grains are chemically removed. The film is then uniformly exposed to light, or alternatively, a chemical process is performed to expose the remaining silver halide grains. Then the exposed grains are developed and fixed to produce a positive transparency whose density is proportional to the original light exposure.

The relationships between the light intensity exposing a film and the density of silver grains in a transparency or print can be described quantitatively by sensitometric measurements. Through sensitometry, a model is sought that will predict the spectral light distribution passing through an illuminated transparency or reflected from a print as a function of the spectral light distribution of the exposing light and certain physical parameters of the photographic process.

The first stage of the photographic process, that of exposing the silver halide grains, can be modeled to a first-order approximation by the integral equation

$X(C) = k_x \int C(\lambda)\, L(\lambda)\, d\lambda$   (11.3-1)

where X(C) is the integrated exposure, $C(\lambda)$ represents the spectral energy distribution of the exposing light, $L(\lambda)$ denotes the spectral sensitivity of the film or paper plus any spectral losses resulting from filters or optical elements, and $k_x$ is an exposure constant that is controllable by an aperture or exposure time setting.
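Equation 11.3-1 reduces to a simple quadrature once the spectral curves are sampled. In the sketch below, the exposing-light distribution $C(\lambda)$ and the film sensitivity $L(\lambda)$ are arbitrary placeholder curves, not measured film data.

```python
import numpy as np

# Quadrature sketch of the exposure model of Eq. 11.3-1,
#     X(C) = k_x * integral of C(lambda) L(lambda) d lambda.
# The spectral curves below are arbitrary placeholders, not measured data.

wavelength = np.linspace(400e-9, 700e-9, 301)   # visible band, metres

# Hypothetical exposing-light spectral energy distribution C(lambda).
C_lam = np.ones_like(wavelength)

# Hypothetical film spectral sensitivity L(lambda), peaked near 550 nm.
L_lam = np.exp(-((wavelength - 550e-9) ** 2) / (2 * (60e-9) ** 2))

k_x = 1.0                                       # exposure constant
d_lam = wavelength[1] - wavelength[0]
X = k_x * np.sum(C_lam * L_lam) * d_lam         # Riemann-sum approximation

print(f"X(C) = {X:.3e}")
```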
Equation 11.3-1 assumes a fixed exposure time. Ideally, if the exposure time were to be increased by a certain factor, the exposure would be increased by the same factor. Unfortunately, this relationship does not hold exactly. The departure from linearity is called a reciprocity failure of the film. Another anomaly in exposure prediction is the intermittency effect, in which the exposures for a constant-intensity light and for an intermittently flashed light differ even though the incident energy is the same for both sources. Thus, if Eq. 11.3-1 is to be utilized as an exposure model, it is necessary to observe its limitations: the equation is strictly valid only for a fixed exposure time and constant-intensity illumination.

The transmittance $\tau(\lambda)$ of a developed reversal or nonreversal transparency as a function of wavelength can be ideally related to the density of silver grains by the exponential law of absorption as given by

$\tau(\lambda) = \exp\{-d_e D(\lambda)\}$   (11.3-2)

where $D(\lambda)$ represents the characteristic density as a function of wavelength for a reference exposure value, and $d_e$ is a variable proportional to the actual exposure. For monochrome transparencies, the characteristic density function $D(\lambda)$ is reasonably constant over the visible region. As Eq. 11.3-2 indicates, high silver densities result in low transmittances, and vice versa. It is common practice to change the proportionality constant of Eq. 11.3-2 so that measurements are made in exponent ten units. Thus, the transparency transmittance can be equivalently written as

$\tau(\lambda) = 10^{-d_x D(\lambda)}$   (11.3-3)

where $d_x$ is the density variable, inversely proportional to exposure, for exponent 10 units. From Eq. 11.3-3, it is seen that the photographic density is logarithmically related to the transmittance. Thus,

$d_x D(\lambda) = -\log_{10} \tau(\lambda)$   (11.3-4)

The reflectivity $r_o(\lambda)$ of a photographic print as a function of wavelength is also inversely proportional to its silver density, and follows the exponential law of absorption of Eq. 11.3-2. Thus, from Eqs. 11.3-3 and 11.3-4, one obtains directly

$r_o(\lambda) = 10^{-d_x D(\lambda)}$   (11.3-5)

$d_x D(\lambda) = -\log_{10} r_o(\lambda)$   (11.3-6)

where $d_x$ is an appropriately evaluated variable proportional to the exposure of the photographic paper.

The relational model between photographic density and transmittance or reflectivity is straightforward and reasonably accurate. The major problem is the next step of modeling the relationship between the exposure X(C) and the density variable $d_x$. Figure 11.3-2a shows a typical curve of the transmittance of a nonreversal transparency.

FIGURE 11.3-2. Relationships between transmittance, density, and exposure for a nonreversal film.
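As a small worked example of Eqs. 11.3-3 and 11.3-4 (and, equally, of Eqs. 11.3-5 and 11.3-6 for prints), the sketch below converts between transmittance and photographic density; the characteristic density value used is an arbitrary example, not film data.

```python
import numpy as np

# Worked example of Eqs. 11.3-3 and 11.3-4: transmittance from density and
# density recovered from transmittance.  D is the characteristic density
# (taken as 1.0 here purely as an example value).

def transmittance(d_x, D=1.0):
    """Eq. 11.3-3: tau(lambda) = 10**(-d_x * D(lambda))."""
    return 10.0 ** (-d_x * D)

def density(tau):
    """Eq. 11.3-4: d_x * D(lambda) = -log10(tau(lambda))."""
    return -np.log10(tau)

d_x = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
tau = transmittance(d_x)
print(tau)            # [1.0, 0.316..., 0.1, 0.01, 0.001]
print(density(tau))   # recovers d_x for D = 1
```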
