25 Signal Recovery from Partial Information


Podilchuk, C. "Signal Recovery from Partial Information." The Digital Signal Processing Handbook, Ed. Vijay K. Madisetti and Douglas B. Williams. Boca Raton: CRC Press LLC, 1999.

Christine Podilchuk, Bell Laboratories, Lucent Technologies

25.1 Introduction
25.2 Formulation of the Signal Recovery Problem
    Prolate Spheroidal Wavefunctions
25.3 Least Squares Solutions
    Wiener Filtering • The Pseudoinverse Solution • Regularization Techniques
25.4 Signal Recovery using Projection onto Convex Sets (POCS)
    The POCS Framework
25.5 Row-Based Methods
25.6 Block-Based Methods
25.7 Image Restoration Using POCS
References

25.1 Introduction

Signal recovery has been an active area of research for applications in many different scientific disciplines. A central reason for exploring the feasibility of signal recovery is the limitation imposed by a physical device on the amount of data one can record. For example, in diffraction-limited systems the finite aperture size of the lens constrains the amount of frequency information that can be captured; the image degradation is due to attenuation of high frequency components, resulting in a loss of detail and other high frequency information. In other words, the finite aperture of the lens acts like a lowpass filter on the input data. In some cases the quality of the recorded image data can be improved by building a more costly recording device, but often the conditions required for acceptable data quality are physically unrealizable or too costly. At other times, signal recovery may be necessary because a unique event has been recorded that cannot be reproduced under more ideal recording conditions.

Some of the earliest work on signal recovery includes the work by Sondhi [1] and Slepian [2] on recovering images from motion blur, and by Helstrom [3] on least squares restoration. A sampling of signal recovery algorithms applied to different types of problems can be found in [4]-[21]; further reading includes the other sections in this book, Chapter 53, and the extended lists of references provided by the authors.

The simple signal degradation model described in the next section turns out to be a useful representation for many different problems encountered in practice. Some examples that can be formulated using the general signal recovery paradigm include image restoration, image reconstruction, spectral estimation, and filter design. We distinguish between image restoration, which pertains to image recovery based on a measured, distorted version of the original image, and image reconstruction, which refers most commonly to medical imaging, where the image is reconstructed from a set of indirect measurements, usually projections. For many signal recovery applications it is desirable to extrapolate a signal outside of a known interval. Extrapolating a signal in the spatial or temporal domain can result in improved spectral resolution and applies to such problems as power spectrum estimation, radio astronomy, radar target detection, and geophysical exploration. The dual problem, extrapolating the signal in the frequency domain, also known as superresolution, results in improved spatial or temporal resolution and is desirable in many image restoration problems. As will be shown later, standard inverse filtering techniques are not able to resolve the signal estimate beyond the diffraction limit imposed by the physical measuring device.
The observed signal is degraded from the original signal by both the measuring device and external conditions. Besides the measured, distorted output signal, we may have additional information about the measuring system and the external conditions, such as noise, as well as some a priori knowledge about the desired signal to be restored or reconstructed. To produce a good estimate of the original signal, we should take advantage of all the available information. Although the data recovery algorithms described here apply in general to any data type, we derive most of the techniques for the two-dimensional input data of image processing applications; in most cases it is straightforward to adapt the algorithms to other data types. Examples of data recovery techniques for different inputs are illustrated in the other sections of this book, as well as in Chapter 53 for image restoration. The material in this section requires some basic knowledge of linear algebra, as found in [22].

Section 25.2 presents the signal degradation model and formulates the signal recovery problem. The early attempts at signal recovery based on inverse filtering are presented in Section 25.3. The concept of projection onto convex sets (POCS), described in Section 25.4, allows us to introduce a priori knowledge about the original signal, in the form of linear as well as nonlinear constraints, into the recovery algorithm; convex set theoretic formulations allow us to design recovery algorithms that are extremely flexible and powerful. Sections 25.5 and 25.6 present some basic POCS-based algorithms, and Section 25.7 presents a POCS-based algorithm for image restoration along with some results. The sample algorithms presented here are not meant to be exhaustive, and the reader is encouraged to consult the other sections in this chapter as well as the references for more details.

25.2 Formulation of the Signal Recovery Problem

Signal recovery can be viewed as an estimation process in which operations are performed on an observed signal in order to estimate the ideal signal that would be observed if no degradation were present. To design a signal recovery system effectively, it is necessary to characterize the degradation effects of the physical measuring system. The basic idea is to model the signal degradation as accurately as possible and then perform operations to undo the degradation and obtain a restored signal. When the degradation cannot be modeled sufficiently well, even the best recovery algorithms will not yield satisfactory results. For many applications, the degradation system is assumed to be linear and can be modeled as a Fredholm integral equation of the first kind,

    g(x) = \int_{-\infty}^{+\infty} h(x; a) f(a) \, da + n(x).        (25.1)

This is the general case for a one-dimensional signal, where f and g are the original and measured signals, respectively, n represents noise, and h(x; a) is the impulse response, i.e., the response of the measuring system to an impulse at coordinate a (the two-argument notation covers the case of a shift-varying impulse response). A block diagram of the general one-dimensional signal degradation system is shown in Fig. 25.1. For image processing applications, we extend this equation to the two-dimensional case,

    g(x, y) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} h(x, y; a, b) f(a, b) \, da \, db + n(x, y).        (25.2)

The degradation operator h is commonly referred to as a point spread function (PSF) in imaging applications because, in optics, h is the measured response of an imaging system to a point of light.

FIGURE 25.1: Block diagram of the signal recovery problem.
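As a concrete illustration of the degradation model, the following sketch simulates Eq. (25.2) for a shift-invariant blur. The Gaussian PSF and the noise level are illustrative assumptions, not choices prescribed by the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

f = rng.random((64, 64))       # stand-in for the original image f(a, b)
psf_sigma = 2.0                # width of an assumed Gaussian PSF
noise_sigma = 0.01             # assumed additive noise level

# g = h * f + n: blur with the PSF, then add noise, as in Eq. (25.2)
g = gaussian_filter(f, sigma=psf_sigma) + noise_sigma * rng.standard_normal(f.shape)
```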
The Fourier transform of the point spread function h(x, y), denoted H(w_x, w_y), is known as the optical transfer function (OTF) and can be expressed as

    H(w_x, w_y) = \frac{\int\!\int_{-\infty}^{\infty} h(x, y) \exp[-i(w_x x + w_y y)] \, dx \, dy}{\int\!\int_{-\infty}^{\infty} h(x, y) \, dx \, dy}.        (25.3)

The absolute value of the OTF is known as the modulation transfer function (MTF). A commonly used optical image formation system is a circular thin lens. The recovery problem is considered ill-posed when a small change in the observed image, g, results in a large change in the solution, f; most signal recovery problems in practice are ill-posed.

The continuous version of the degradation system for two-dimensional signals formulated in Eq. (25.2) can be expressed in discrete form by replacing the continuous arguments with arrays of samples in two dimensions,

    g(i, j) = \sum_m \sum_n h(i, j; m, n) f(m, n) + n(i, j).        (25.4)

It is convenient for image recovery purposes to represent the discrete formulation given in Eq. (25.4) as a system of linear equations,

    g = Hf + n,        (25.5)

where g, f, and n are the lexicographically row-stacked versions of the discretized g, f, and n in Eq. (25.4), and H is the degradation matrix composed of the PSF. This section presents an overview of some of the techniques proposed to estimate f when the recovery problem can be modeled by Eq. (25.5). If there is no external noise or measurement error and the set of equations is consistent, Eq. (25.5) reduces to

    g = Hf.        (25.6)

It is usually not the case that a practical system can be described by Eq. (25.6). In this section we focus on recovery algorithms where an estimate of the distortion operator, represented by the matrix H, is known; for recovery problems where both the desired signal f and the degradation operator H are unknown, refer to other articles in this book. For most systems, the degradation matrix H is highly structured and quite sparse. The additive noise term due to measurement errors and to external and internal noise sources is represented by the vector n.

At first glance, the solution to the signal recovery problem seems straightforward: find the inverse of the matrix H and solve for the unknown vector f. It turns out that the solution is not so simple, because in practice the degradation operator is usually ill-conditioned or rank-deficient, and the problem of inconsistencies or noise must be addressed. A further difficulty is the computational complexity caused by the extremely large problem dimensions, especially in image processing applications. The algorithms described here address these issues for the general signal recovery problem of Eq. (25.5).
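The ill-conditioning is easy to observe numerically. A minimal sketch, assuming a 1-D Gaussian blur with zero boundary conditions, builds the Toeplitz degradation matrix H of Eq. (25.5) and checks the ratio of its largest to smallest singular values:

```python
import numpy as np
from scipy.linalg import toeplitz

N = 128
taps = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)   # 13-tap Gaussian PSF
taps /= taps.sum()

col = np.zeros(N); col[:7] = taps[6:]   # center tap and causal half
row = np.zeros(N); row[:7] = taps[6:]   # symmetric PSF: same anticausal half
H = toeplitz(col, row)                  # H[i, j] = psf(i - j), zero boundary

s = np.linalg.svd(H, compute_uv=False)
print("condition number sigma_1 / sigma_N =", s[0] / s[-1])   # very large
```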
25.2.1 Prolate Spheroidal Wavefunctions

We introduce the problem of signal recovery by examining a one-dimensional, linear, time-invariant system that can be expressed as

    g(x) = \int_{-T}^{+T} f(\alpha) h(x - \alpha) \, d\alpha,        (25.7)

where g(x) is the observed signal, f(\alpha) is the desired signal of finite support on the interval (-T, +T), and h(x) denotes the degradation operator. Assuming that the degradation operator in this case is an ideal lowpass filter, h can be described mathematically as

    h(x) = \frac{\sin(x)}{x}.        (25.8)

For this particular case, it is possible to solve for the exact signal f(x) with prolate spheroidal wavefunctions [23]. The key to successfully solving for f lies in the fact that the prolate spheroidal wavefunctions are the eigenfunctions of the integral equation expressed by Eq. (25.7) with Eq. (25.8) as the degradation operator. This relationship is expressed as

    \int_{-T}^{+T} \psi_n(\alpha) \frac{\sin(x - \alpha)}{x - \alpha} \, d\alpha = \lambda_n \psi_n(x),   n = 0, 1, 2, \ldots,        (25.9)

where \psi_n(x) are the prolate spheroidal wavefunctions and \lambda_n are the corresponding eigenvalues. A critical feature of the prolate spheroidal wavefunctions is that they form complete orthogonal bases on the interval (-\infty, +\infty) as well as on the interval (-T, +T); that is,

    \int_{-\infty}^{+\infty} \psi_n(x) \psi_m(x) \, dx = \begin{cases} 1, & n = m, \\ 0, & n \neq m, \end{cases}        (25.10)

and

    \int_{-T}^{+T} \psi_n(x) \psi_m(x) \, dx = \begin{cases} \lambda_n, & n = m, \\ 0, & n \neq m. \end{cases}        (25.11)

This allows the functions g(x) and f(x) to be expressed as the series expansions

    g(x) = \sum_{n=0}^{\infty} c_n \psi_n(x),        (25.12)

    f(x) = \sum_{n=0}^{\infty} d_n \psi_{Ln}(x),        (25.13)

where \psi_{Ln}(x) are the prolate spheroidal functions truncated to the interval (-T, T). The coefficients c_n and d_n are given by

    c_n = \int_{-\infty}^{\infty} g(x) \psi_n(x) \, dx        (25.14)

and

    d_n = \frac{1}{\lambda_n} \int_{-T}^{T} f(x) \psi_n(x) \, dx.        (25.15)

If we substitute the series expansions given by Eqs. (25.12) and (25.13) into Eq. (25.7), we get

    g(x) = \sum_{n=0}^{\infty} c_n \psi_n(x) = \int_{-T}^{+T} \left[ \sum_{n=0}^{\infty} d_n \psi_{Ln}(\alpha) \right] h(x - \alpha) \, d\alpha        (25.16)

    = \sum_{n=0}^{\infty} d_n \int_{-T}^{+T} \psi_n(\alpha) h(x - \alpha) \, d\alpha.        (25.17)

Combining this result with Eq. (25.9),

    \sum_{n=0}^{\infty} c_n \psi_n(x) = \sum_{n=0}^{\infty} \lambda_n d_n \psi_n(x),        (25.18)

so that

    c_n = \lambda_n d_n        (25.19)

and

    d_n = \frac{c_n}{\lambda_n}.        (25.20)

We get an exact solution for the unknown signal f(x) by substituting Eq. (25.20) into Eq. (25.13); that is,

    f(x) = \sum_{n=0}^{\infty} \frac{c_n}{\lambda_n} \psi_{Ln}(x).        (25.21)

Therefore, in theory, it is possible to obtain the exact image f(x) from the diffraction-limited image g(x) using prolate spheroidal wavefunctions. The difficulties of signal recovery become more apparent when we examine this simple diffraction-limited case in relation to Eq. (25.21). The finite aperture size of a diffraction-limited system translates to eigenvalues \lambda_n that exhibit a unit-step behavior: the several largest eigenvalues are approximately one, followed by a succession of eigenvalues that rapidly fall off to zero. The solution given by Eq. (25.21) will therefore be extremely sensitive to noise for small eigenvalues \lambda_n. For the general problem represented in vector space by Eq. (25.5), the degradation operator H is likewise ill-conditioned or rank-deficient due to the small or zero-valued eigenvalues, and a simple inverse operation will not yield satisfactory results. Many algorithms have been proposed to find a compromise between exact deblurring and noise amplification; these techniques include Wiener filtering and pseudoinverse filtering.
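The step-like eigenvalue profile is easy to reproduce numerically. A small sketch, assuming a simple quadrature discretization of the kernel in Eq. (25.9), normalized by \pi so that the leading eigenvalues approach one:

```python
import numpy as np

T, N = 4.0, 400
a = np.linspace(-T, T, N)               # quadrature grid on (-T, +T)
da = a[1] - a[0]
diff = a[:, None] - a[None, :]

# np.sinc(t) = sin(pi t)/(pi t), so np.sinc(u/pi) = sin(u)/u
K = np.sinc(diff / np.pi) * da / np.pi  # discretized kernel of Eq. (25.9), scaled by 1/pi

lam = np.sort(np.linalg.eigvalsh(K))[::-1]
print(lam[:8])   # a few eigenvalues near one, then a very rapid fall-off toward zero
```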
We begin our overview of signal recovery techniques by examining some of the methods that fall under the category of optimization-based approaches.

25.3 Least Squares Solutions

The earliest attempts at signal recovery are based on the concept of inverting the degradation operator to restore the desired signal. Because in practical applications the system will often be ill-conditioned, several problems can arise. Specifically, high detail signal information may be masked by observation noise, or a small amount of observation noise may lead to an estimate that contains very large, false high frequency components. Another potential problem with such an approach is that for a rank-deficient degradation operator, the zero-valued eigenvalues cannot be inverted. Therefore, the general inverse filtering approach is not able to resolve the desired signal beyond the diffraction limit imposed by the measuring device: in the vector-space description, the data that has been nulled out by the zero-valued eigenvalues cannot be recovered.

25.3.1 Wiener Filtering

Wiener filtering combines inverse filtering with a priori statistical knowledge about the noise and the unknown signal [24] in order to deal with the problems associated with an ill-conditioned system. The impulse response of the restoration filter is chosen to minimize the mean square error

    E_f = E\{ \| f - \hat{f} \|^2 \},        (25.22)

where \hat{f} denotes the estimate of the ideal signal f and E\{\cdot\} denotes the expected value. The Wiener filter estimate is \hat{f} = H_W^{-1} g, with

    H_W^{-1} = R_{ff} H^T \left( H R_{ff} H^T + R_{nn} \right)^{-1},        (25.23)

where R_{ff} and R_{nn} are the covariance matrices of f and n, respectively, and f and n are assumed to be uncorrelated; that is,

    R_{ff} = E\{ f f^T \},        (25.24)

    R_{nn} = E\{ n n^T \},        (25.25)

and

    R_{fn} = 0.        (25.26)

The superscript T in the above equations denotes transpose. The Wiener filter can also be expressed in the Fourier domain as

    H_W^{-1} = \frac{H^* S_{ff}}{|H|^2 S_{ff} + S_{nn}},        (25.27)

where S denotes the power spectral density, the superscript * denotes the complex conjugate, and H denotes the Fourier transform of H. Note that when the noise power is zero, the Wiener filter reduces to the inverse filter,

    H_W^{-1} = H^{-1}.        (25.28)

The Wiener filter approach for signal recovery assumes that the power spectra of the input signal and the noise are known. It also assumes that finding a least squares solution that optimizes Eq. (25.22) is meaningful. For the case of image processing, it has been shown, specifically in the context of image compression, that the mean square error (mse) does not predict subjective image quality [25]. Many signal processing algorithms are based on the least squares paradigm because the solutions are tractable, and in practice such approaches have produced some useful results. However, to define a more meaningful optimization metric in the design of image processing algorithms, we need to incorporate a human visual model into the algorithm design; in image coding, several schemes based on perceptual criteria have been shown to produce improved results over schemes based on maximizing signal-to-noise ratio or minimizing mse [25]. Likewise, the Wiener filtering approach will not necessarily produce an estimate that maximizes perceived image or signal quality. Another limitation of the Wiener filter approach is that the solution will not necessarily be consistent with any a priori knowledge about the desired signal characteristics. In addition, the Wiener filter approach does not resolve the desired signal beyond the diffraction limit imposed by the measuring system. For more details on Wiener filtering and its various applications, see the other chapters in this book.
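For a shift-invariant blur, the frequency-domain form of Eq. (25.27) leads to a very compact implementation. A minimal sketch, assuming the power spectra are supplied by the caller (in practice they must be estimated):

```python
import numpy as np

def wiener_restore(g, h, S_ff, S_nn):
    """Wiener estimate of Eq. (25.27) for a shift-invariant 2-D blur.

    g: observed image; h: PSF zero-padded to g's shape (origin at [0, 0]);
    S_ff, S_nn: assumed signal and noise power spectra (arrays or scalars).
    """
    H = np.fft.fft2(h)
    W = np.conj(H) * S_ff / (np.abs(H) ** 2 * S_ff + S_nn)  # H* S_ff / (|H|^2 S_ff + S_nn)
    return np.real(np.fft.ifft2(W * np.fft.fft2(g)))
```

With S_nn = 0 the filter collapses to the inverse filter of Eq. (25.28), which is exactly the noise-amplifying behavior the Wiener taper is designed to avoid.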
25.3.2 The Pseudoinverse Solution

The Wiener filter attempts to limit the noise amplification of a direct inverse by providing a taper determined by the statistics of the signal and noise processes under consideration. In practice, the power spectra of the noise and the desired signal might not be known. Here we present what is commonly referred to as the generalized inverse solution, which will be the framework for some of the signal recovery algorithms described later. The pseudoinverse solution is an optimization approach that seeks to minimize the least squares error

    E_n = n^T n = (g - Hf)^T (g - Hf).        (25.29)

The least squares solution is not unique when the rank of the M \times N matrix H is r < N \leq M; in other words, many solutions satisfy Eq. (25.29). However, the Moore-Penrose generalized inverse, or pseudoinverse [26], does provide a unique least squares solution, namely the least squares solution with minimum norm. For a consistent set of equations as described in Eq. (25.6), a solution is sought that minimizes the least squares estimation error,

    E_f = (f - \hat{f})^T (f - \hat{f}) = \mathrm{tr}\{ (f - \hat{f})(f - \hat{f})^T \},        (25.30)

where f is the desired signal vector, \hat{f} is the estimate, and tr denotes the trace [22]. The generalized inverse provides an optimum solution that minimizes the estimation error for a consistent set of equations; thus, it provides an optimum solution for both the consistent and the inconsistent set of equations, as measured by the performance functions E_f and E_n, respectively. The generalized inverse solution satisfies the normal equations

    H^T g = H^T H f.        (25.31)

The generalized inverse solution, also known as the Moore-Penrose generalized inverse, pseudoinverse, or minimum norm least squares solution, is defined for rank r = N \leq M as

    f^{\dagger} = (H^T H)^{-1} H^T g = H^{\dagger} g,        (25.32)

where the dagger \dagger denotes the pseudoinverse. For the case of an inconsistent set of equations as described in Eq. (25.5), the pseudoinverse solution becomes

    f^{\dagger} = H^{\dagger} g = H^{\dagger} H f + H^{\dagger} n,        (25.33)

where f^{\dagger} is the minimum norm, least squares solution. If the set of equations is overdetermined with rank r = N < M, H^{\dagger} H becomes an identity matrix of size N, denoted I_N, and the pseudoinverse solution reduces to

    f^{\dagger} = f + H^{\dagger} n = f + \Delta f.        (25.34)

A straightforward result from linear algebra is the bound on the relative error,

    \frac{\| \Delta f \|}{\| f \|} \leq \| H^{\dagger} \| \, \| H \| \, \frac{\| n \|}{\| g \|},        (25.35)

where the product \| H^{\dagger} \| \| H \| is the condition number of H. This quantity bounds the relative error in the estimate in terms of the ratio of the vector norm of the noise to the vector norm of the observed image. The condition number of H is defined as

    C_H = \| H^{\dagger} \| \, \| H \| = \frac{\sigma_1}{\sigma_N},        (25.36)

where \sigma_1 and \sigma_N denote the largest and smallest singular values of the matrix H, respectively. The larger the condition number, the greater the sensitivity to noise perturbations; a matrix with a large condition number, typically greater than 100, results in an ill-conditioned system.

The pseudoinverse solution is best described by diagonalizing the degradation matrix H using the singular value decomposition (SVD) [22], which provides a way to diagonalize any arbitrary M \times N matrix:

    H = U \Sigma V^T,        (25.37)

where U is a unitary matrix composed of the orthonormal eigenvectors of H H^T, V is a unitary matrix composed of the orthonormal eigenvectors of H^T H, and \Sigma is a diagonal matrix composed of the singular values of H. The number of nonzero diagonal terms is the rank of H. The degradation matrix can be expressed in series form as

    H = \sum_{i=1}^{r} \sigma_i u_i v_i^T,        (25.38)

where u_i and v_i are the i-th columns of U and V, respectively, and r is the rank of H. From Eqs. (25.37) and (25.38), the pseudoinverse of H becomes

    H^{\dagger} = V \Sigma^{\dagger} U^T = \sum_{i=1}^{r} \sigma_i^{-1} v_i u_i^T.        (25.39)

Therefore, from Eq. (25.39), the pseudoinverse solution can be expressed as

    f^{\dagger} = H^{\dagger} g = V \Sigma^{\dagger} U^T g        (25.40)

or

    f^{\dagger} = \sum_{i=1}^{r} \sigma_i^{-1} v_i u_i^T g = \sum_{i=1}^{r} \sigma_i^{-1} \left( u_i^T g \right) v_i.        (25.41)

The series form of the pseudoinverse solution using the SVD allows us to compute it with a sequential restoration algorithm,

    f^{\dagger(k+1)} = f^{\dagger(k)} + \sigma_k^{-1} \left( u_k^T g \right) v_k.        (25.42)

The iterative approach for finding the pseudoinverse solution is advantageous when dealing with ill-conditioned systems and noise-corrupted data: the iteration can be terminated before the inversion of small singular values results in an unstable estimate. This technique becomes quite easy to implement for the case of a circulant degradation matrix H, where the unitary matrices in Eq. (25.37) reduce to the discrete Fourier transform (DFT).
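A minimal sketch of the truncated expansion, assuming the truncation index r_keep is chosen by the user (e.g., by inspecting the singular value spectrum):

```python
import numpy as np

def tsvd_solution(H, g, r_keep):
    """Partial sum of Eq. (25.41), built up sequentially as in Eq. (25.42).

    Stopping after r_keep terms avoids inverting the small singular values
    that would otherwise amplify the noise."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)  # s is sorted, largest first
    f = np.zeros(H.shape[1])
    for k in range(r_keep):
        f = f + (U[:, k] @ g) / s[k] * Vt[k, :]       # sigma_k^{-1} (u_k^T g) v_k
    return f
```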
25.3.3 Regularization Techniques

Smoothing and regularization techniques [27, 28, 29] have been proposed in an attempt to overcome the problems associated with inverting ill-conditioned degradation operators for signal recovery. These methods attempt to force smoothness on the solution of a least squares error problem, and the problem can be formulated in two different ways. One formulation is

    minimize:  \hat{f}^T S \hat{f}        (25.43)

    subject to:  (g - H\hat{f})^T W (g - H\hat{f}) = e,        (25.44)

where S represents a smoothing matrix, W is an error weighting matrix, and e is a residual scalar estimation error. The error weighting matrix can be chosen as W = R_{nn}^{-1}, and the smoothing matrix is typically composed of the first or second order difference. For this case, we seek the stationary point of the Lagrangian expression

    F(\hat{f}, \lambda) = \hat{f}^T S \hat{f} + \lambda \left[ (g - H\hat{f})^T W (g - H\hat{f}) - e \right].        (25.45)

The solution is found by taking derivatives with respect to f and \lambda and setting them equal to zero. For a nonsingular overdetermined set of equations, it becomes

    \hat{f} = \left( H^T W H + \frac{1}{\lambda} S \right)^{-1} H^T W g,        (25.46)

where \lambda is chosen to satisfy the compromise between residual error and smoothness in the estimate. Alternately, the problem can be formulated as

    minimize:  (g - H\hat{f})^T W (g - H\hat{f})        (25.47)

    subject to:  \hat{f}^T S \hat{f} = d,        (25.48)

where d represents a fixed degree of smoothness. The Lagrangian expression for this formulation becomes

    G(\hat{f}, \gamma) = (g - H\hat{f})^T W (g - H\hat{f}) + \gamma \left( \hat{f}^T S \hat{f} - d \right),        (25.49)

and the solution for a nonsingular overdetermined set of equations becomes

    \hat{f} = \left( H^T W H + \gamma S \right)^{-1} H^T W g.        (25.50)

Note that the results of the two formulations, Eq. (25.46) and Eq. (25.50), are identical if \gamma = 1/\lambda. The shortcomings of such a regularization technique are that the smoothing function S must be estimated and that either the degree of smoothness, d, or the degree of error, e, must be known in order to determine \gamma or \lambda.

Constrained restoration techniques have also been developed [30] to overcome the problem of an ill-conditioned system; linear equality and linear inequality constraints have been enforced to yield one-step solutions similar to those described in this section. All the techniques described thus far attempt to overcome the problems of noise-corrupted data and ill-conditioned systems by forcing some sort of taper on the inverse of the degradation operator. The algorithms discussed so far fall under the category of optimization techniques whose objective function is the least squares error; other recovery algorithms in the optimization-based category include maximum likelihood, maximum a posteriori, and maximum entropy methods [17].
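A minimal sketch of the regularized solution of Eq. (25.50), with illustrative defaults: W set to the identity (white noise) and S built from a second-order difference, as the text suggests:

```python
import numpy as np

def regularized_solution(H, g, gamma, S=None, W=None):
    """Smoothed least squares estimate of Eq. (25.50):
    f = (H^T W H + gamma S)^{-1} H^T W g."""
    M, N = H.shape
    if W is None:
        W = np.eye(M)                        # assumed white, unit-variance noise
    if S is None:
        D = np.diff(np.eye(N), n=2, axis=0)  # second-order difference operator
        S = D.T @ D                          # penalizes curvature of the estimate
    return np.linalg.solve(H.T @ W @ H + gamma * S, H.T @ W @ g)
```

Sweeping gamma trades residual error against smoothness, exactly the compromise the Lagrange multiplier negotiates in Eqs. (25.45) and (25.49).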
We now introduce the concept of projection onto convex sets (POCS), which will be the framework for a much broader and more powerful class of signal recovery algorithms.

25.4 Signal Recovery using Projection onto Convex Sets (POCS)

A broad set of recovery algorithms has been proposed to conform to the general framework introduced by the theory of projection onto convex sets (POCS) [31]. The POCS framework enables one to define an iterative recovery algorithm that can incorporate a number of linear as well as nonlinear constraints satisfying certain properties. The more a priori information about the desired signal one can incorporate into the algorithm, the more effective the algorithm becomes. In [21], POCS is presented as a particular example of a much broader class of algorithms described as set theoretic estimation. The author distinguishes between two basic approaches to a signal estimation or recovery problem: optimization-based approaches and set theoretic approaches. The effectiveness of optimization-based approaches is highly dependent on defining a valid optimization criterion which, in practice, is usually determined by computational tractability rather than by how well it models the problem. Optimization-based approaches seek a unique solution based on some predefined optimization criterion; they include the least squares techniques of the previous section as well as maximum likelihood (ML), maximum a posteriori (MAP), and maximum entropy techniques. Set theoretic estimation is based on the concept of finding a feasible solution, that is, a solution consistent with all the available a priori information. Unlike the optimization-based approaches, which seek one optimum solution, the set theoretic approaches usually determine one of many possible feasible solutions. Many problems in signal recovery can be approached using the set theoretic paradigm. POCS has been one of the most extensively studied set theoretic approaches in the literature, due to its convergence properties and its flexibility in handling a wide range of signal characteristics. We limit our discussion here to POCS-based algorithms; the more general case of signal estimation using nonconvex as well as convex sets is covered in [21]. The rest of this section defines the POCS framework and describes several useful algorithms that fall into this general category.

25.4.1 The POCS Framework

A projection operator onto a closed convex set is an example of a nonlinear mapping that is easily analyzed and has some very useful properties. Such projection operators minimize the error distance and are nonexpansive; these are two important properties of ordinary linear orthogonal projections onto closed linear manifolds (CLMs). The benefit of using POCS for signal restoration is that one can incorporate nonlinear constraints of a certain type into the POCS framework; linear image restoration algorithms cannot take advantage of a priori information based on nonlinear constraints. The method of POCS requires that each set of solutions satisfying an a priori characteristic of the desired signal lie in a well-defined closed convex set. For such properties, f is restricted to lie in the region defined by the intersection of all the convex sets, that is,

    f \in C_0 = \bigcap_{i=1}^{l} C_i.        (25.51)

Here C_i denotes the i-th closed convex set corresponding to the i-th property of f, with C_i \subset S and i \in I. The unknown signal f can be restored by using the corresponding projection operators P_i onto each convex set C_i. A property of closed convex sets is that the projection of a point onto the set is unique; this is known as the unique-nearest-neighbor property. The general form of the POCS-based recovery algorithm is

    f^{(k+1)} = P_{i_k} f^{(k)},        (25.52)

where k denotes the iteration and i_k denotes a sequence of indices in I. A common technique for iterating through the projections is referred to as cyclic control, where the projections are applied in a cyclic manner, that is, i_k = k \pmod{l} + 1.
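A minimal sketch of the cyclic iteration of Eq. (25.52), with two illustrative convex constraints (non-negativity and an energy bound, both standard examples; the particular projectors are assumptions, not the only possible choices):

```python
import numpy as np

def pocs(f0, projectors, sweeps=50):
    """Cyclic POCS: apply each projector P_i in turn, i_k = k (mod l) + 1."""
    f = np.array(f0, dtype=float)
    for _ in range(sweeps):
        for P in projectors:
            f = P(f)          # each P maps f to its nearest point in one convex set
    return f

# Projection onto the non-negative orthant (a closed convex set):
P_nonneg = lambda f: np.maximum(f, 0.0)

# Projection onto the energy ball {f : ||f||^2 <= E} (also closed and convex):
def P_energy(f, E=1.0):
    norm = np.linalg.norm(f)
    return f if norm ** 2 <= E else f * (np.sqrt(E) / norm)

f_est = pocs(np.random.randn(64), [P_nonneg, lambda f: P_energy(f, E=4.0)])
```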
A geometric interpretation of the POCS algorithm for the simple case of two convex sets is illustrated in Fig. 25.2.

FIGURE 25.2: Geometric interpretation of POCS.

The original POCS formulation is further generalized by introducing a relaxation parameter,

    f^{(k+1)} = f^{(k)} + \lambda_k \left( P_{i_k}(f^{(k)}) - f^{(k)} \right),   0 < \lambda_k < 2,        (25.53)

where \lambda_k denotes the relaxation parameter. If \lambda_k < 1 the algorithm is said to be underrelaxed, and if \lambda_k > 1 the algorithm is overrelaxed. Refer to [31] for further details on the convergence properties of POCS.

Common constraints that apply to many different signals in practice, and whose solution spaces obey the properties of convex sets, are described in [10]; examples include frequency limits, spatial/temporal bounds, nonnegativity, sparseness, intensity or energy bounds, and partial knowledge of the spectral or spatial/temporal components. Most of the commonly used constraints for different signal processing applications fall under the category of convex sets that provide weak convergence; however, in practice, most POCS algorithms exhibit strong convergence.

Many of the commonly used iterative signal restoration techniques are specific examples of the POCS algorithm. The Kaczmarz algorithm [32], Landweber's iteration [33], and the method of alternating projections [9] are all POCS-based algorithms. It is worth noting that the image restoration techniques developed independently by Gerchberg and Saxton [4] and by Papoulis [5] are also versions of POCS: the algorithm developed by Gerchberg addressed phase retrieval from two images, and Papoulis addressed superresolution by iterative methods. The Gerchberg-Papoulis (GP) algorithm is based on applying constraints on the estimate in the signal space and in the Fourier space in an iterative fashion until the estimate converges to a solution. For the image restoration problem, the high frequency components of the image are extrapolated by imposing the finite extent of the object in the spatial domain and by imposing the known low frequency components in the frequency domain. The dual problem involves spectral estimation, where the signal is extrapolated in the time or spatial domain; the algorithm consists of imposing the known part of the signal in the time domain and imposing a finite bandwidth constraint in the frequency domain. The GP algorithm assumes a space-invariant (or time-invariant) degradation operator.
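A minimal sketch of the GP loop for the spectral estimation (extrapolation) form just described: impose the known samples in the time domain and the bandwidth limit in the Fourier domain. The boolean masks are assumed inputs:

```python
import numpy as np

def gerchberg_papoulis(g_known, known_mask, band_mask, iters=200):
    """Alternate between the two constraint sets of the GP algorithm.

    g_known:    signal with valid values where known_mask is True
    known_mask: boolean mask of the observed (known) samples
    band_mask:  boolean mask of the Fourier bins allowed to be nonzero
    """
    f = np.where(known_mask, g_known, 0.0)
    for _ in range(iters):
        F = np.fft.fft(f)
        F[~band_mask] = 0.0                     # finite-bandwidth constraint
        f = np.real(np.fft.ifft(F))
        f[known_mask] = g_known[known_mask]     # reimpose the known segment
    return f
```

Both constraint sets (signals agreeing with the known samples, and bandlimited signals) are closed and convex, so this is a two-set instance of Eq. (25.52).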
We now present several signal recovery algorithms that conform to the POCS paradigm, broadly classified into two categories: row-based and block-based algorithms.

25.5 Row-Based Methods

As early as 1937, Kaczmarz [32] developed an iterative projection technique to solve the inverse problem for a linear set of equations as given by Eq. (25.5). The algorithm takes the following form:

    f^{(k+1)} = f^{(k)} + \lambda_k \frac{g_{i_k} - \langle h_{i_k}, f^{(k)} \rangle}{\| h_{i_k} \|^2} h_{i_k}.        (25.54)

The relaxation parameter \lambda_k is bounded by 0 \leq \lambda_k \leq 2, h represents a row of the matrix H, i_k denotes a sequence of indices corresponding to rows of H, g_i represents the i-th element of the vector g, \langle \cdot, \cdot \rangle is the standard inner product between two vectors, k denotes the iteration, and \| \cdot \| denotes the Euclidean or L2 norm of a vector, defined as

    \| g \| = \left( \sum_{i=1}^{N} g_i^2 \right)^{1/2}.        (25.55)

Kaczmarz proved that Eq. (25.54) converges to the unique solution when the relaxation parameter is unity and H is a square, nonsingular matrix (that is, H possesses an inverse); under certain conditions, the solution will converge to the minimum norm least squares, or pseudoinverse, solution. For further reading on the Kaczmarz algorithm and conditions for convergence, see [7, 8, 34, 35].

In general, the order in which the Kaczmarz algorithm visits the M existing equations can differ. Cyclic control, where the algorithm iterates through the equations in a periodic fashion, is described by i_k = k \pmod{M} + 1, where M is the number of rows in H. Almost cyclic control exists when every M sequential iterations of the Kaczmarz algorithm perform exactly one operation per equation, in any order. Remotest set control exists when the operations are performed on the most distant equation first, most distant in the sense that the projection onto the hyperplane represented by the equation is the furthest away, the measure of distance being determined by the norm. This type of control is seldom used, since it requires a measurement dependent on all the equations.

The method of Kaczmarz for \lambda = 1.0 can be expressed geometrically as follows: given f^{(k)} and the hyperplane H_{i_k} = \{ f \in R^n \mid \langle h_{i_k}, f \rangle = g_{i_k} \}, f^{(k+1)} is the orthogonal projection of f^{(k)} onto H_{i_k}. This is illustrated in Fig. 25.3. Note that by changing the relaxation parameter, the next iterate can be a point anywhere along the line segment connecting the previous iterate and its orthogonal reflection with respect to the hyperplane.

FIGURE 25.3: Geometric interpretation of the Kaczmarz algorithm.

The technique of Kaczmarz for solving a set of linear equations has been rediscovered over the years for many different applications whose general problem formulation can be expressed as Eq. (25.5). For this reason, the Kaczmarz algorithm appears as the algebraic reconstruction technique (ART) in the field of medical imaging for computerized tomography (CT) [7], as well as the Widrow-Hoff least mean squares (LMS) algorithm [36] for channel equalization, echo cancellation, system identification, and adaptive array processing.
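A minimal sketch of the row-action iteration of Eq. (25.54) under cyclic control:

```python
import numpy as np

def kaczmarz(H, g, sweeps=20, lam=1.0):
    """Each step projects the estimate onto the hyperplane
    {f : <h_i, f> = g_i} defined by one row of H (Eq. 25.54)."""
    M, N = H.shape
    f = np.zeros(N)
    for _ in range(sweeps):
        for i in range(M):                              # cyclic control
            h = H[i]
            f = f + lam * (g[i] - h @ f) / (h @ h) * h  # relaxed projection
    return f
```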
For the case of solving linear inequalities, where Eq. (25.5) is replaced with

    Hf \leq g,        (25.56)

a method very similar to Kaczmarz's algorithm was developed by Agmon [37] and by Motzkin and Schoenberg [38]:

    f^{(k+1)} = f^{(k)} + c^{(k)} h_{i_k},
    c^{(k)} = \begin{cases} \lambda_k \dfrac{g_{i_k} - \langle h_{i_k}, f^{(k)} \rangle}{\| h_{i_k} \|^2}, & \langle h_{i_k}, f^{(k)} \rangle > g_{i_k}, \\ 0, & \text{otherwise}. \end{cases}        (25.57)

Once again, the relaxation parameter is defined on the interval 0 \leq \lambda_k \leq 2. The method of solving linear inequalities by Agmon and by Motzkin and Schoenberg is mathematically identical to the perceptron convergence theorem from the theory of learning machines (see [39]).

25.6 Block-Based Methods

A generalization of the Kaczmarz algorithm introduced in the previous section has been suggested by Eggermont [35]; it can be described as a block iterative algorithm. Recall the set of linear equations given by Eq. (25.5), with the dimensions of the problem redefined so that H \in R^{LM \times N}, f \in R^N, and g \in R^{LM}. In order to describe the generalization of the Kaczmarz algorithm, the matrix H is partitioned into M blocks of L rows each,

    H = \begin{bmatrix} h_1^T \\ h_2^T \\ \vdots \\ h_{LM}^T \end{bmatrix} = \begin{bmatrix} H_1 \\ H_2 \\ \vdots \\ H_M \end{bmatrix},        (25.58)

and g is partitioned correspondingly as

    g = \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_{LM} \end{bmatrix} = \begin{bmatrix} G_1 \\ G_2 \\ \vdots \\ G_M \end{bmatrix},        (25.59)

where each G_i, i = 1, 2, \ldots, M, is a vector of length L and the subblocks H_i are of dimension L \times N. The generalized group-iterative variation of the Kaczmarz algorithm is expressed as

    f^{(k+1)} = f^{(k)} + H_{i_k}^T \Sigma_k \left( G_{i_k} - H_{i_k} f^{(k)} \right),        (25.60)

where f^{(0)} \in R^N and \Sigma_k is a relaxation matrix. Eggermont gives details of convergence as well as conditions for convergence to the pseudoinverse solution [35]. A further generalization of Kaczmarz's algorithm led Eggermont [35] to the following form of the general block Kaczmarz algorithm:

    f^{(k+1)} = f^{(k)} + H_{i_k}^{\dagger} \Lambda_k \left( G_{i_k} - H_{i_k} f^{(k)} \right),        (25.61)

where once again H_{i_k}^{\dagger} denotes the Moore-Penrose inverse of H_{i_k}, \Lambda_k is the L \times L relaxation matrix, and cyclic control is defined as i_k = k \pmod{M} + 1. When the block size in Eq. (25.60) is taken to be the full set of equations (a single block), the algorithm becomes identical to Landweber's iteration [33] for solving Fredholm equations of the first kind; that is,

    f^{(k+1)} = f^{(k)} + H^T \Sigma_k \left( g - H f^{(k)} \right).        (25.62)

With \Sigma_k = I, the resulting Landweber iteration becomes

    f^{(k+1)} = H^T g + \left( I - H^T H \right) f^{(k)}.        (25.63)

Another interesting approach that is similar to the generalized block Kaczmarz algorithm, again with the block size equal to the full set of equations, is the method of alternating orthogonal projections described by Youla [9], in which alternating orthogonal projections are made onto closed linear manifolds (CLMs).
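A minimal sketch of the Landweber recursion of Eqs. (25.62)-(25.63) with a scalar relaxation; the default step size is the standard convergence-safe choice (tau < 2/\sigma_1^2), an assumption rather than something fixed by the text:

```python
import numpy as np

def landweber(H, g, iters=500, tau=None):
    """f(k+1) = f(k) + tau * H^T (g - H f(k)); tau = 1/sigma_1^2 by default."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(H, 2) ** 2   # spectral norm gives sigma_1
    f = np.zeros(H.shape[1])
    for _ in range(iters):
        f = f + tau * (H.T @ (g - H @ f))
    return f
```

Stopping the iteration early acts as a form of regularization, much like truncating the SVD expansion of Eq. (25.42).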
  (25.65) 3=    1r     By defining P = V3VT , (25.66) the orthogonal complement to the operator P is given by the projection operator Q = I − P = V3C VT where      C =     (25.67)  1r+1          (25.68) The diagonal matrix 3C contains ones in the last N − r diagonal positions and zeroes elsewhere The superscript C denotes the complement Any arbitrary vector f can be decomposed as follows: f = Pf + Qf (25.69) where the projection operator P projects f onto the range space of the degradation matrix HT H and the orthogonal projection operator Q projects f onto the null space of the degradation matrix HT H The component Pf will be referred to as the “in-band” term and the component Qf will be referred to as the “out-of-band” term 1999 by CRC Press LLC c In general, the least squares family of solutions to the image restoration problem can be stated as f = fin−band + fout−of −band = f † + Kr+1 vr+1 + Kr+2 vr+2 + + KN vN , σ2 , , σ2 } {σr+1 N r+2 (25.70) HT H; The vectors vi correspond to the eigenvectors of for they are the eigenvectors associated with zero valued eigenvalues The out-of-band solution Kr+1 vr+1 + + KN vN must satisfy (25.71) Hfout−of −band = Adding the terms {Kr+1 vr+1 , Kr+2 vr+2 , , KN vN }, to the pseudoinverse solution f † does not change the L2 norm of the error since knk = = k g − Hf k   k g − H f † + Kr+1 vr+1 + + KN vN k (25.72) = k g − Hf † − HKr+1 vr+1 − − HKN vN k = k g − Hf † k which is the least squares error The terms HKr+1 vr+1 , , HKN vN are all equal to zero because the vectors vr+1 , , vN are in the null space of H Therefore, any linear combination of vi in the null space of H can be added to the pseudoinverse solution without affecting the least squares cost function The pseudoinverse solution, f † , provides the unique least squares estimate with minimum norm, (25.73) k fLS k=k f † k where fLS denotes the least squares solution In practice, it is unlikely that the desired solution is required to possess the minimum norm out of all feasible solutions so that f † is not necessarily the optimum solution The image restoration algorithm described here provides a framework that allows a priori information in the form of signal constraints to be incorporated into the algorithm in order to obtain a better estimate than the least squares minimum norm solution f † The constraint operator will be represented by C and can incorporate a variety of linear and nonlinear a priori signal characteristics as long as they obey the properties of convex set theory In the case of image restoration, the constraint operator C includes non-negativity which can be described by  fi fi ≥ (25.74) (C+ f )i = fi < Concatenating the vectors vi in Eq (25.70) yields f = f † + V3C K  where   K=  K1 K2 (25.75)      (25.76) KN  and V3C = v1 v2 vN                    1r+1 1N 1999 by CRC Press LLC c (25.77) We would like to find the solution to the unknown vector K in Eq (25.75) A reasonable approach is to start with the constrained pseudoinverse solution and solve for K in a least squares manner; that is, minimize: n o (25.78) k C+ f † − f † + V3C K k subject to: C+ f † = f † + V3C K (25.79) C+ f † − f † = V3C K   K = 3C VT C+ f † − f † (25.80) The least squares solution becomes Since 3C VT f † = 0, we get K = 3C VT C+ f † (25.81) Substituting Eq (25.81) into Eq (25.79) yields C+ f † = f † + QC+ f † + e (25.82) where e denotes a residual vector The process of enforcing the overall least squares solution and solving for the out-of-band 
The process of enforcing the overall least squares solution and solving for the out-of-band component to fit the constraints can be implemented in an iterative fashion. The resulting recursion is

    C_+ f^{(k)} = f^{\dagger} + Q C_+ f^{(k)} + e^{(k)}.        (25.83)

By defining

    f^{(k+1)} \equiv C_+ f^{(k)} - e^{(k)},        (25.84)

the final iterative algorithm becomes

    f^{(0)} = f^{\dagger},
    f^{(k+1)} = f^{\dagger} + Q C_+ f^{(k)},   k = 0, 1, 2, \ldots        (25.85)

Note that the recursion yields the least squares solution while enforcing the a priori constraints through the out-of-band signal component. It is apparent that such an approach will yield a better estimate for the unknown signal f than the minimum norm least squares solution f^{\dagger}. Note also that this algorithm can easily be generalized to other problems by replacing the non-negativity constraint C_+ with constraints appropriate to the signal. In the case when f^{\dagger} satisfies all the constraints exactly, the solution of the iterative algorithm reduces to the pseudoinverse solution. For more details on this algorithm, convergence issues, and stopping criteria, refer to [18, 20, 40]. Viewed from the set theoretic standpoint described in [21], the original set of solutions consists of all the solutions that satisfy the least squares error criterion; the addition of a priori signal constraints reduces the feasible set of solutions and provides a better estimate than the pseudoinverse solution.

Finally, we would like to show some image restoration results based on the method described in [19] and [20, Chap. 4]. The technique is a modification of the Kaczmarz method described here using the theory of POCS. The original image, the degraded image, and the images restored using the original Kaczmarz algorithm and using the modified algorithm based on the POCS framework are shown in Fig. 25.4. Similarly, we show the original, degraded, and restored images in the frequency domain in Fig. 25.5. The details of the algorithm are found in [19].

FIGURE 25.4: (a) Original image. (b) Degraded image at 25 dB SNR. (c) Restored image using Kaczmarz iterations. (d) Restored image using the modified Kaczmarz algorithm in a POCS framework. (Courtesy of IEEE: Kuo, S.S. and Mammone, R.J., Image restoration by convex projections using adaptive constraints and the l1 norm, IEEE Trans. Signal Process., 40, 159-168, 1992.)

FIGURE 25.5: Spatial frequency response of the (a) original image, (b) degraded image, and (c) restored image using the new algorithm. (Courtesy of IEEE: Kuo, S.S. and Mammone, R.J., Image restoration by convex projections using adaptive constraints and the l1 norm, IEEE Trans. Signal Process., 40, 159-168, 1992.)
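As a closing sketch, the recursion of Eq. (25.85) can be written directly in terms of the null-space projector Q built earlier and the non-negativity operator C_+ of Eq. (25.74). This is an illustrative implementation, not the exact algorithm of [19]:

```python
import numpy as np

def constrained_restoration(H, g, sweeps=100, tol=1e-10):
    """f(0) = f-dagger; f(k+1) = f-dagger + Q C+ f(k)  (Eq. 25.85).

    The in-band part stays pinned to the pseudoinverse solution, while the
    non-negativity constraint shapes the out-of-band (null-space) component."""
    f_dag = np.linalg.pinv(H) @ g           # minimum norm least squares solution
    _, Q = range_null_projectors(H, tol)    # projector sketch given earlier
    f = f_dag.copy()
    for _ in range(sweeps):
        f = f_dag + Q @ np.maximum(f, 0.0)  # C+ enforces Eq. (25.74)
    return f
```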
