OPTICAL IMAGING AND SPECTROSCOPY, Part 2

Figure 2.28 Base transmission pattern, tiled mask, and inversion deconvolution.
Figure 2.29 Base transmission pattern, tiled mask, and inversion deconvolution for p = 11.
Figure 2.30 Base transmission pattern, tiled mask, and inversion deconvolution for p = 59.

… implemented under cyclic boundary conditions rather than using zero padding. In contrast with a pinhole system, the number of pixels in the reconstructed coded aperture image is equal to the number of pixels in the base transmission pattern.

Figures 2.31-2.33 are simulations of coded aperture imaging with the 59 × 59-element MURA code. As illustrated in the figures, the measured 59 × 59-element data are strongly positive. For this image the maximum noise-free measurement value is 100 and the minimum value is 58, for a measurement dynamic range of less than 2. We will discuss noise and entropic measures of sensor system performance at various points in this text; in our first encounter with a multiplex measurement system we simply note that isomorphic measurement of the image would produce a much higher measurement dynamic range for this image. In practice, noise sensitivity is a primary concern in coded aperture and other multiplex sensor systems. For the MURA-based coded aperture system, Gottesman and Fenimore [102] argue that the pixel signal-to-noise ratio is

SNR_{ij} = \frac{N f_{ij}}{\sqrt{N f_{ij} + N \sum_{kl} f_{kl} + \sum_{kl} B_{kl}}}   (2.47)

where N is the number of holes in the coded aperture and B_{kl} is the noise in the (kl)th pixel. The form of the SNR in this case is determined by signal-dependent, or "shot," noise. We discuss the noise sources in electronic optical detectors, and derive the square-root characteristic of shot noise in particular, in a later chapter.

Figure 2.31 Coded aperture imaging simulation with no noise for the 59 × 59-element code of Fig. 2.30.
Figure 2.32 Coded aperture imaging simulation with shot noise for the 59 × 59-element code of Fig. 2.30.
Figure 2.33 Coded aperture imaging simulation with additive noise for the 59 × 59-element code of Fig. 2.30.

For the 59 × 59 MURA aperture, N = 1749. If we assume that the object consists of binary values 1 and 0, the maximum pixel SNR falls from 41 for a point object to only a few for an object with 200 points active. The smiley-face object of Fig. 2.31 consists of 155 points. Dependence of the SNR on object complexity is a unique feature of multiplex sensor systems. The equivalent of Eqn. (2.47) for a focal imaging system is

SNR_{ij} = \frac{N f_{ij}}{\sqrt{N f_{ij} + B_{ij}}}   (2.48)

This system produces an SNR of approximately \sqrt{N}, independent of the number of points in the object. As with the canonical wave and correlation field multiplex systems presented in Sections 10.2 and 6.4.2, coded aperture imaging provides a very large depth of field but also suffers from SNR deterioration in proportion to source complexity.
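As a hedged illustration of the contrast between Eqns. (2.47) and (2.48), the short Python sketch below evaluates the multiplex (coded aperture) pixel SNR against the isomorphic (focal) pixel SNR as the number of active object points grows. The value N = 1749 is taken from the text; the unit-valued object pixels and the zero background terms are simplifying assumptions made only for illustration.

```python
import numpy as np

# Shot-noise-limited pixel SNR models sketched from Eqs. (2.47) and (2.48).
# N: number of open holes in the coded aperture; f_ij = 1 for an active object pixel.

def snr_coded_aperture(N, n_active, B_total=0.0):
    """Multiplex SNR of an active pixel when n_active unit-valued points are present."""
    f_ij = 1.0
    return N * f_ij / np.sqrt(N * f_ij + N * n_active + B_total)

def snr_focal(N, B_ij=0.0):
    """Isomorphic (focal) SNR of an active pixel; independent of object complexity."""
    f_ij = 1.0
    return N * f_ij / np.sqrt(N * f_ij + B_ij)

N = 1749  # open holes of the 59 x 59 MURA aperture, from the text
for n_active in (1, 10, 155, 200):
    print(n_active, snr_coded_aperture(N, n_active), snr_focal(N))
```

Running the loop reproduces the qualitative behavior described above: the focal SNR stays near 41 while the coded aperture SNR collapses as the object becomes more complex.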
2.6 PROJECTION TOMOGRAPHY

To this point we have considered images as two-dimensional distributions, despite the fact that target objects and the space in which they are embedded are typically three-dimensional. Historically, images were two-dimensional because focal imaging is a plane-to-plane transformation and because photochemical and electronic detector arrays are typically 2D films or focal planes. Using computational image synthesis, however, it is now common to form 3D images from multiplex measurements. Of course, visualization and display of 3D images then present new and different challenges.

A variety of methods have been applied to 3D imaging, including techniques derived from analogy with biological stereo vision systems and actively illuminated acoustic and optical ranging systems. Each approach has advantages specific to targeted object classes and applications. Ranging and stereo vision are best adapted to opaque objects where the goal is to estimate a surface topology embedded in three dimensions.

The present section and the next briefly overview tomographic methods for multidimensional imaging. These sections rely on analytical techniques and concepts, such as linear transform theory, the Fourier transform, and vector spaces, which are not formally introduced until Chapter 3. The reader unfamiliar with these concepts may find it useful to read the first few sections of that chapter before proceeding. Our survey of computed tomography is necessarily brief; detailed surveys are presented by Kak and Slaney [131] and Buzug [37].

Tomography relies on a simple 3D extension of the density-based object model that we have applied in this chapter. The word tomography is derived from the Greek tomos, meaning slice or section, and graphia, meaning describing. The word predates computational methods and originally referred to an analog technique for imaging a cross section of a moving object. While tomography is sometimes used to refer to any method for measuring 3D distributions (e.g., optical coherence tomography; Section 6.5), computed tomography (CT) generally refers to the projection methods described in this section.

Despite our focus on 3D imaging, we begin by considering tomography of 2D objects using a one-dimensional detector array. 2D analysis is mathematically simpler and is relevant to common X-ray illumination and measurement hardware. 2D slice tomography systems are illustrated in Fig. 2.34. In parallel beam systems, a collimated beam of X rays illuminates the object. The object is rotated in front of the X-ray source, and a one-dimensional detector opposite the source measures the integrated absorption along a line through the object for each ray component. As always, the object is described by a density function f(x, y). Defining, as illustrated in Fig. 2.35, l to be the distance of a particular ray from the origin, θ to be the angle between a normal to the ray and the x axis, and α to be the distance along the ray, measurements collected by a parallel beam tomography system take the form

g(l, \theta) = \int f(l\cos\theta - \alpha\sin\theta,\ l\sin\theta + \alpha\cos\theta)\, d\alpha   (2.49)

where g(l, θ) is the Radon transform of f(x, y).

Figure 2.34 Tomographic sampling geometries.
Figure 2.35 Projection tomography geometry.

The Radon transform is defined for f ∈ L²(Rⁿ) as the integral of f over all hyperplanes of dimension n − 1. Each hyperplane is defined by a surface normal vector i_n [in Eqn. (2.49), i_n = cos θ i_x + sin θ i_y], and the equation of the hyperplane is x · i_n = l. The Radon transform in Rⁿ may be expressed as

R\{f\}(l, \mathbf{i}_n) = \int_{\mathbb{R}^{n-1}} f(l\,\mathbf{i}_n + \boldsymbol{\alpha})\, (d\boldsymbol{\alpha})^{n-1}   (2.50)

where α is a vector in the hyperplane orthogonal to i_n. With reference to the definition given in Eqn. (3.10), the Fourier transform with respect to l of R{f}(l, i_n) is

\mathcal{F}_l\{R\{f\}(l, \mathbf{i}_n)\} = \int\!\!\iint \hat{f}(\mathbf{u})\, e^{2\pi i\,\mathbf{u}\cdot(l\mathbf{i}_n + \boldsymbol{\alpha})}\, e^{-2\pi i u_l l}\, (d\boldsymbol{\alpha})^{n-1}\, (d\mathbf{u})^{n}\, dl = \hat{f}(\mathbf{u} = u_l\,\mathbf{i}_n)   (2.51)

Equation (2.51), relating the 1D Fourier transform of the Radon transform to the Fourier transform of f sampled on a line parallel to i_n, is called the projection slice theorem.
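The line-integral geometry of Eqn. (2.49) can be emulated numerically by rotating a discretized object and summing along one axis for each angle θ. The Python sketch below is illustrative only: the disk phantom, the grid size, and the use of scipy.ndimage.rotate as a stand-in for the continuous rotation are assumptions of the sketch rather than anything prescribed by the text.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_sinogram(image, angles_deg):
    """Parallel-beam projections g(l, theta): rotate the object and integrate along rays.
    image: 2D array of the density f(x, y); angles_deg: projection angles in degrees."""
    sinogram = []
    for theta in angles_deg:
        # Rotating the object by -theta and summing down the columns approximates
        # the line integrals of Eq. (2.49) for that projection angle.
        rotated = rotate(image, -theta, reshape=False, order=1)
        sinogram.append(rotated.sum(axis=0))
    return np.array(sinogram)  # shape: (number of angles, number of detector bins l)

# Example: projections of a small off-center disk phantom at 1-degree steps.
y, x = np.mgrid[-64:64, -64:64]
phantom = ((x - 20)**2 + y**2 < 15**2).astype(float)
g = radon_sinogram(phantom, np.arange(0, 180, 1))
```

The resulting array approximates g(l, θ) sampled on a uniform (θ, l) grid, which is the input assumed by the reconstruction algorithm developed next.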
In the case of the Radon transform on R², the Fourier transform with respect to l of Eqn. (2.49) yields

\hat{g}(u_l, \theta) = \int g(l, \theta)\, e^{-2\pi i u_l l}\, dl   (2.52)

\hat{g}(u_l, \theta) = \hat{f}(u = u_l\cos\theta,\ v = u_l\sin\theta)   (2.53)

where f̂ is the Fourier transform of f. If we sample uniformly in l along an aperture of length R_s, then Δu_l = 2π/R_s; the sample period along l, in turn, determines the spectral extent of the samples. In principle, one could use Eqn. (2.52) to sample the Fourier space of the object and then inverse-transform to estimate the object density. In practice, difficulties in interpolation and sampling in the Fourier space make an alternative algorithm more attractive.

The alternative approach is termed convolution-backprojection. The algorithm is as follows:

1. Measure the projections g(l, θ).
2. Fourier-transform to obtain ĝ(u_l, θ).
3. Multiply ĝ(u_l, θ) by the filter |u_l| and inverse-transform. This step consists of convolving g(l, θ) with the inverse transformation of |u_l| (the range of u_l is limited to the maximum frequency sampled). This step produces the filtered function Q(l, \theta) = \int |u_l|\, \hat{g}(u_l, \theta)\, e^{i2\pi u_l l}\, du_l.
4. Sum the filtered functions Q(l, θ), interpolated at points l = x cos θ + y sin θ, to produce the reconstructed estimate of f. This constitutes the backprojection step.

To understand the filtered backprojection approach, we express the inverse Fourier transform relationship

f(x, y) = \iint \hat{f}(u, v)\, e^{i2\pi(ux + vy)}\, du\, dv   (2.54)

in cylindrical coordinates as

f(x, y) = \int_0^{2\pi}\!\!\int_0^{\infty} \hat{f}(w, \theta)\, e^{i2\pi w(x\cos\theta + y\sin\theta)}\, w\, dw\, d\theta   (2.55)

where w = \sqrt{u^2 + v^2}. Equation (2.55) can be rearranged to yield

f(x, y) = \int_0^{\pi}\!\!\int_0^{\infty} \left[\hat{f}(w, \theta)\, e^{i2\pi w(x\cos\theta + y\sin\theta)} + \hat{f}(w, \theta + \pi)\, e^{-i2\pi w(x\cos\theta + y\sin\theta)}\right] w\, dw\, d\theta   (2.56)

f(x, y) = \int_0^{\pi}\!\!\int_{-\infty}^{\infty} \hat{f}(w, \theta)\, e^{i2\pi w(x\cos\theta + y\sin\theta)}\, |w|\, dw\, d\theta   (2.57)

where we use the fact that, for real-valued f(x, y), \hat{f}(-w, \theta) = \hat{f}(w, \theta + \pi). This means that

f(x, y) = \int_0^{\pi} Q(l = x\cos\theta + y\sin\theta,\ \theta)\, d\theta   (2.58)

The convolution-backprojection algorithm is illustrated in Fig. 2.36, which shows the Radon transform, the Fourier transform ĝ(u_l, θ), the Fourier transform f̂(u, v), the filtered function Q(l, θ), and the reconstructed object estimate. Note, as expected from the projection slice theorem, that ĝ(u_l, θ) corresponds to slices of f̂(u, v) "unrolled" around the origin. Edges of the Radon transform are enhanced in Q(l, θ), which is a "highpass" version of g(l, θ).

Figure 2.36 Tomographic imaging with the convolution-backprojection algorithm.
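The four steps listed above map onto a few lines of numerical code. The sketch below applies the |u_l| ramp filter in the DFT domain and accumulates backprojections on a square grid; the nearest-neighbor interpolation, grid placement, and normalization constant are simplifying assumptions of the sketch rather than the book's prescription.

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Reconstruct f(x, y) from parallel-beam projections g(l, theta).
    sinogram: array of shape (number of angles, number of detector bins)."""
    n_angles, n_l = sinogram.shape
    # Steps 2-3: ramp-filter each projection in the Fourier domain (|u_l| filter).
    u_l = np.fft.fftfreq(n_l)
    ramp = np.abs(u_l)
    Q = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Step 4: backproject Q(l, theta) along l = x cos(theta) + y sin(theta), Eq. (2.58).
    grid = np.arange(n_l) - n_l / 2
    x, y = np.meshgrid(grid, grid)
    recon = np.zeros_like(x, dtype=float)
    for Q_theta, theta in zip(Q, np.deg2rad(angles_deg)):
        l = x * np.cos(theta) + y * np.sin(theta) + n_l / 2
        idx = np.clip(np.round(l).astype(int), 0, n_l - 1)  # nearest-neighbor interpolation
        recon += Q_theta[idx]
    return recon * np.pi / len(angles_deg)
```

A sinogram produced by the earlier radon_sinogram sketch can be passed in directly; depending on the rotation and axis conventions chosen there, the reconstruction may appear flipped or rotated relative to the original phantom.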
We turn finally to 3D tomography, where we choose to focus on projections measured by a camera. A camera measures a bundle of rays passing through a principal point (x_o, y_o, z_o). For example, we saw in Section 2.5 that a pinhole or coded aperture imaging system captures f̂(θ_x, θ_y), where, repeating Eqn. (2.31),

\hat{f}(\theta_x, \theta_y) = \int f(z_o\theta_x,\ z_o\theta_y,\ z_o)\, dz_o   (2.59)

f̂(θ_x, θ_y) is the integral of f(x, y, z) along a line through the origin of the (x_o, y_o, z_o) coordinate system; (θ_x, θ_y) are angles describing the direction of the line on the unit sphere.

In two dimensions, a system that collects rays through a series of principal points implements fan beam tomography. Fan beam systems often combine a point X-ray source with a distributed array of detectors, as illustrated in Fig. 2.34. In 3D, tomographic imaging using projections through a sequence of principal points is cone beam tomography. Note that Eqn. (2.59) over a range of vertex points is not the 3D Radon transform; we refer to the transformation based on projections along ray bundles as the X-ray transform. The X-ray transform is closely related to the Radon transform, however, and can be inverted by similar means. The 4D X-ray transform of a 3D object, consisting of projections through all principal points on a sphere surrounding the object, overconstrains the object. Tuy [233] describes reduced vertex paths that produce well-conditioned 3D X-ray transforms of 3D objects. A discussion of cone beam tomography using optical imagers is presented by Marks et al. [168], who apply the circular vertex path inversion algorithm developed by Feldkamp et al. [68]. The algorithm uses 3D convolution-backprojection based on the vertex path geometry and parameters illustrated in Fig. 2.37.

Figure 2.37 Cone beam geometry.

Projection data f_Φ(θ_x, θ_y) are weighted and convolved with the separable filters

h_y(\theta_y) = \int_{-\nu_{yo}}^{\nu_{yo}} |\nu|\, \exp\!\left(i\nu\theta_y - \frac{2|\nu|}{\nu_{yo}}\right) d\nu, \qquad h_x(\theta_x) = \frac{\sin(\theta_x \nu_{zo})}{\pi\theta_x}   (2.60)

where ν_yo and ν_zo are the angular sampling frequencies of the camera. These filters produce the intermediate function

Q_\Phi(\theta_x, \theta_y) = \iint \frac{f_\Phi(\theta_{x'}, \theta_{y'})}{\sqrt{1 + \theta_{x'}^2 + \theta_{y'}^2}}\, h_x(\theta_x - \theta_{x'})\, h_y(\theta_y - \theta_{y'})\, d\theta_{x'}\, d\theta_{y'}   (2.61)

and the reconstructed 3D object density is

f_E(x, y, z) = \frac{1}{4\pi^2} \int \frac{d^2}{(d + x\cos\phi)^2}\, Q_\phi\!\left(\frac{y}{d + x\cos\phi},\ \frac{z\sin\phi}{d + x\cos\phi}\right) d\phi   (2.62)

2.7 REFERENCE STRUCTURE TOMOGRAPHY

Optical sensor design boils down to compromises between mathematically attractive and physically attainable visibilities. In most cases, physical mappings are the starting point of design. So far in this chapter we have encountered two approaches driven …

3.7 DISCRETE ANALYSIS OF LINEAR TRANSFORMATIONS

… some approximate maximum spatial extent N/(2B_x) = 2X. The approximate Fourier transform is

\hat{f}_{\mathrm{approx}}(u, v) = \frac{1}{4B_x B_y}\, \mathrm{rect}\!\left(\frac{u}{2B_x}\right) \mathrm{rect}\!\left(\frac{v}{2B_y}\right) \sum_{n=-N/2}^{(N/2)-1} \sum_{m=-N/2}^{(N/2)-1} f_{nm}\, \exp\!\left[-i\pi\left(\frac{nu}{B_x} + \frac{mv}{B_y}\right)\right]   (3.96)

The DFT of the truncated set of samples {f_nm} is

\hat{f}_{n'm'} = \frac{1}{N} \sum_{n=-N/2}^{(N/2)-1} \sum_{m=-N/2}^{(N/2)-1} f_{nm}\, \exp\!\left[-i2\pi\left(\frac{nn'}{N} + \frac{mm'}{N}\right)\right]   (3.97)

where N = 4BX and, for simplicity, we set B = B_x = B_y and X = Y. The DFT samples approximate the Fourier transform of f(x, y) at the sample points, such that f̂_{n'm'} = (4B²/N) f̂_approx(u = n'/2X, v = m'/2X). While truncation of the nonzero terms means that f̂_{n'm'} is not exactly equal to f̂(u = n'/2X, v = m'/2X), the inverse DFT of f̂_{n'm'} does produce exact values of the sampled function at sample points within the truncation window.

Consider, as an example, the bandlimited one-dimensional function f(x) = sinc(x − d) for d ≪ 1. In this case f̂(u) = e^{−2πidu} rect(u). For this pair of functions, B = 1/2, f_0 = sinc(d) and, for n ≠ 0, f_n = f(n) ≈ (−1)^n d/n. Sampling this function over the window from x = −N/2 to x = N/2 − 1 produces N samples spaced by 1 in the spatial domain. The DFT produces N samples in Fourier space between u = −1/2 and u = 1/2 − 1/N, spaced by 1/N. The error between the samples f̂_{n'} and f̂(u) for various sampling ranges X is shown in Fig. 3.5. Note that the error does not decrease at the edges of the sampling band, a phenomenon common in Fourier analysis of discontinuous functions. However, over much of the Fourier passband the error between the numerical spectrum and the actual spectrum does decrease as the sampling window increases.

Despite our assumption in deriving the sampling theorem that f(x) is bandlimited, it is not uncommon to attempt numerical Fourier analysis of functions with infinite support in both x and u. In numerically analyzing the Fourier transform of such a function, one selects a window size X and a sampling period Δ to obtain N = 2X/Δ samples f_n = f(nΔ). The DFT of the samples f_n produces N discrete samples f̂_{n'} nominally corresponding to values of f̂(u = n'/2X) covering the frequency range −1/(2Δ) ≤ u ≤ 1/(2Δ) − 1/(2X). The sample period in u is 1/(2X).
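The sinc(x − d) example above is easy to check numerically. The sketch below compares the DFT of the samples f_n with the analytic transform e^{−2πidu} rect(u) on the DFT frequency grid; the window half-width X = 32 is an arbitrary choice, d = 0.01 follows the text, and the FFT ordering conventions are assumptions of the sketch.

```python
import numpy as np

d = 0.01
X = 32                      # half-width of the sampling window, so N = 4*X*B with B = 1/2
n = np.arange(-X, X)        # sample points, spacing 1 (the Nyquist rate for B = 1/2)
N = n.size
f_n = np.sinc(n - d)        # samples of f(x) = sinc(x - d)

# DFT frequencies u = n'/(2X) lie between -1/2 and 1/2 - 1/N.
u = np.fft.fftshift(np.fft.fftfreq(N, d=1.0))
# DFT approximation of the continuous transform (ifftshift puts the x = 0 sample first).
f_hat_dft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f_n)))
f_hat_true = np.exp(-2j * np.pi * d * u)   # rect(u) is 1 over this band
error = np.abs(f_hat_dft - f_hat_true)
print(error.max(), error.mean())
```

Repeating the comparison for larger X reproduces the trend of Fig. 3.5: the error shrinks over most of the passband as the window grows, but not at the band edges.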
As an example of these scaling laws, the DFT of the Gaussian function f(x) = exp(−πx²), for which f̂(u) = exp(−πu²), is illustrated in Fig. 3.6.

Figure 3.5 Magnitude of the difference between f̂(u) and the corresponding DFT samples f̂_{n'} over the bandpass of f(x) = sinc(x − d), for various values of N = 4XB and d = 0.01.
Figure 3.6 DFT of e^{−πx²} sampled uniformly over the window −2 ≤ x ≤ 2 for various values of N. As we increase the number of samples while keeping X constant, the sample period in Fourier space stays constant and the number of samples within the significant region of the signal remains fixed. Increasing the sampling window would increase the Fourier resolution. Both the sampling window and the sampling rate must be increased to maintain resolution in both spaces.

Discrete Fourier analysis is attractive in considering linear transformations because shift-invariant transformations are modeled by simple multiplicative filter functions in Fourier transform space, and because fast numerical algorithms for modeling Fourier transforms are readily available. As we briefly consider through wavelet theory in the second half of this chapter, Fourier analysis is not a unique or universally attractive technique for modeling linear transformations.

As an example of the first attraction of Fourier analysis, consider again the Fresnel transform. As described in Eqn. (3.62), the transfer function for the Fresnel transform is ĥ_t(u) = exp(iπ/4) exp(−iπt²u²). We may model the Fresnel transform of a function f(x) by modulating the DFT of a sampled version of f(x) by this transfer function and then applying an inverse DFT. Returning to the example of a Gaussian signal, the analytic form of the Fresnel transform is given in Eqn. (3.72). As illustrated in Fig. 3.7, a numerical estimate of the Fresnel transform is obtained by multiplying the DFT of the sampled function by the transfer function and inverse transforming. The spatial window spans |x| < 20; in total, 4096 samples were used with a sample spacing of 0.0098. Each plot shows the absolute value of the transformed signal as well as the real and imaginary components. Values of t run from the top of the figure through 0.5, 1, 2, and 4 to t = 8 at the bottom. As t increases, the transformed signal becomes increasingly diffuse. For t = 8, the numerically estimated transform encounters large errors as the transformed signal extends beyond the range of the window; numerical estimation of the transform is reasonably accurate up to values of t at which the transformed signal begins to extend beyond the input window.

Figure 3.7 Numerically estimated Fresnel transform of the fundamental Hermite-Gaussian mode.

Note that both f_nm and f̂_{n'm'} act as Fourier series components when reconstructing the approximate continuous functions. The series reconstructions are periodic; when the reconstruction extends beyond the transform window, this periodicity produces interference, or aliasing, between bandpass or spatial windows.
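As a concrete, hedged version of the procedure illustrated in Fig. 3.7, the sketch below propagates a sampled Gaussian through the Fresnel transfer function ĥ_t(u) = exp(iπ/4) exp(−iπt²u²). The window and sample count follow the text; identifying the FFT frequency grid with the u axis and ignoring windowing refinements are assumptions of the sketch.

```python
import numpy as np

def fresnel_transform(f_samples, dx, t):
    """Numerically estimate the Fresnel transform of a sampled 1D field.
    Multiplies the DFT of the samples by h_t(u) = exp(i*pi/4) * exp(-i*pi*t^2*u^2)."""
    u = np.fft.fftfreq(f_samples.size, d=dx)          # spatial frequency grid
    h_t = np.exp(1j * np.pi / 4) * np.exp(-1j * np.pi * t**2 * u**2)
    return np.fft.ifft(np.fft.fft(f_samples) * h_t)

# Fundamental Gaussian over |x| < 20 with 4096 samples, as in the text.
x = np.linspace(-20, 20, 4096, endpoint=False)
f0 = np.exp(-np.pi * x**2)
for t in (0.5, 1, 2, 4, 8):
    g = fresnel_transform(f0, dx=x[1] - x[0], t=t)
    # As t grows the transformed field spreads; near t = 8 it wraps around the window.
    print(t, np.abs(g).max())
```

Plotting np.abs(g), g.real, and g.imag for each t reproduces the qualitative behavior of Fig. 3.7, including the wraparound (aliasing) error at t = 8.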
The second motivation for Fourier methods in linear systems analysis is the computational efficiency of computing the Fourier transform. Nominally, the DFT of a one-dimensional N-element dataset is represented by an N × N transformation matrix multiplying a length-N data vector; this transformation would require O(N²) operations. In practical systems, the fast Fourier transform (FFT) is used to greatly reduce the number of computational operations required. Hierarchical decimation is the heart of the FFT algorithm. The one-dimensional DFT of length N, defined as

\hat{f}(n') = \frac{1}{\sqrt{N}} \sum_{n=-N/2}^{(N/2)-1} f_n\, e^{i2\pi n n'/N}   (3.98)

is decimated into two DFTs of length N/2 by the arrangement

\hat{f}_{n'} = \frac{1}{\sqrt{2}}\,\frac{1}{\sqrt{N/2}} \sum_{n=-N/4}^{(N/4)-1} f_{2n}\, e^{i2\pi n n'/(N/2)} + \frac{e^{i2\pi n'/N}}{\sqrt{2}}\,\frac{1}{\sqrt{N/2}} \sum_{n=-N/4}^{(N/4)-1} f_{2n+1}\, e^{i2\pi n n'/(N/2)} = \frac{\hat{f}^{\,e}_{n'}}{\sqrt{2}} + e^{i2\pi n'/N}\, \frac{\hat{f}^{\,o}_{n'}}{\sqrt{2}}   (3.99)

where f̂^e_{n'} and f̂^o_{n'} are the length-N/2 DFTs of the even and odd coefficients of f_n. Since the two shorter transformations each require O(N²/4) operations, one level of decimation reduces the number of operations required by a factor of 2. If N is a power of 2, recursive decimation reduces the number of operations required from O(N²) to O(N log₂ N). For two-dimensional Fourier transforms, FFT algorithms are applied separably along rows and columns of data. For an N × N dataset, the FFT reduces the computational order from O(N⁴) to O(N² log₂ N); for images with N = 1024, the FFT algorithm reduces the computational complexity by more than four orders of magnitude.
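The recursion in Eqn. (3.99) maps directly onto code. The sketch below is a minimal radix-2 decimation-in-time FFT for power-of-two lengths; it uses the conventional 0 to N−1 indexing and the e^{−2πink/N} kernel without the 1/√N normalization of Eqn. (3.98), so it illustrates the decimation idea rather than reproducing the text's exact convention.

```python
import numpy as np

def fft_radix2(f):
    """Recursive radix-2 FFT (decimation in time) for len(f) a power of two.
    Uses the convention F[k] = sum_n f[n] * exp(-2j*pi*n*k/N)."""
    f = np.asarray(f, dtype=complex)
    N = f.size
    if N == 1:
        return f
    even = fft_radix2(f[0::2])          # DFT of even-indexed samples
    odd = fft_radix2(f[1::2])           # DFT of odd-indexed samples
    k = np.arange(N // 2)
    twiddle = np.exp(-2j * np.pi * k / N)
    # Combine the two half-length DFTs, as in the decimation of Eq. (3.99).
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

# Sanity check against numpy's FFT.
x = np.random.rand(1024)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

Each recursion level halves the transform length, which is exactly the O(N²) to O(N log₂ N) reduction described above.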
3.8 MULTISCALE SAMPLING

Concepts of discrete representation and sampling have evolved considerably in the half-century since Shannon presented the sampling theorem [234]. The evolution of sampling theory has accelerated in the past quarter-century with the development of a generalized methodology for constructing bases and representation spaces. Wavelet theory is the most elegant means of understanding these emerging strategies. The primary goal of this text is to develop a framework for analysis and design of physical/digital interfaces in optical sensor systems. Since more sophisticated models of signal sampling and analysis are enabling in pursuit of this goal, we present a brief introduction to wavelets and generalized sampling in this chapter. Our hope is to be accessible to the optical engineer without unnecessarily insulting the mathematician; we refer the mathematically inclined reader to the mathematics and signal processing literature [53,164].

As discussed in Chapter 7, modern sampling theory must distinguish between sampling in the sense of measurement and sampling in the sense of signal analysis and representation. Sampling theory is applied to:

1. Signal analysis. In the sampling theorem we have shown that the signal space V_B is spanned by the function sinc(Bx). We can easily analyze linear transformations of signals on this space by analyzing transformations of the basis function.
2. Signal estimation. Equation (3.100) is a recipe for estimating the signal value of f(x) at any point x from discrete samples.
3. Sensor system design. If we measure a signal at discrete points in space or time, the sampling theorem informs the rate at which the signal must be measured for accurate representation.

In practice, challenges arise in applying the Shannon sampling approach to each of these uses. In cases 1 and 2, the fact that the function sinc(x) does not have compact support makes computation and analysis expensive. In the third case, one must account for the fact that it is not generally possible to measure functions to infinite spatial or temporal resolution, meaning that true point measurements of f(x) are not generally available. We discuss the details of actual measurements in optical systems in later chapters. For present purposes, it is helpful to simply consider the possibility that, while expansion coefficients are somehow related to local features of the signal, they need not correspond to actual signal values.

As a first step toward resolving all three of these challenges, it is helpful to consider sampling strategies that are not based on sinc(x). For simplicity, we consider wavelet representations of one-dimensional functions. The 1D version of Eqn. (3.92) is

f(x) = \sum_{n=-\infty}^{\infty} f\!\left(\frac{n}{2B}\right) \mathrm{sinc}(2Bx - n)   (3.100)

where we assume f(x) ∈ V_B and V_B is the subspace of bandlimited functions in L²(R). To address the challenges described above, we maintain the concept of representation of f(x) by a discretely shift-invariant localized function, but we replace sinc(x) with a scaling function φ(x). We imagine representing f(x) in terms of the scaling function as

f_\phi(x) = \sum_{n=-\infty}^{\infty} c_n\, \phi(x - n)   (3.101)

φ(x) is the generating function for the vector space V(φ) spanned by {φ(x − n)}_{n∈Z}.

As an example, we consider a scaling function and decomposition discovered in 1910 by Haar [108]. The Haar scaling function is b⁰(x) = rect(x − 1/2). Decomposition on this function takes the form

f_0(x) = \sum_{n=-\infty}^{\infty} c_n\, b^0(x - n)   (3.102)

With the restriction that \sum_{n\in\mathbb{Z}} |c_n|^2 is finite, f_0(x) ∈ L²(R), and the family of functions {φ_n(x) = b⁰(x − n)} forms an orthonormal basis of a subspace V_0 ⊂ L²(R). Evaluating the inner product ⟨b⁰(x − m) | f_0⟩ using the orthogonality relationship ⟨b⁰(x − m) | b⁰(x − n)⟩ = δ_nm, we see from Eqn. (3.102) that

c_n = \int_{-\infty}^{\infty} b^0(x - n)\, f_0(x)\, dx   (3.103)

The function f(x) may be decomposed into two components, f_0 ∈ V_0 and f_⊥ ∉ V_0, such that f(x) = f_0(x) + f_⊥(x). For all functions g(x) ∈ V_0, ⟨g | f_⊥⟩ = 0. Thus, for the orthonormal basis {b⁰(x − n)},

\langle b^0(x - n) \mid f \rangle = \langle b^0(x - n) \mid (f_0 + f_\perp) \rangle = \langle b^0(x - n) \mid f_0 \rangle   (3.104)

and c_n = ⟨b⁰(x − n) | f⟩. f_0(x) is the projection of f(x) onto V_0, P_{V_0} f. An example of P_{V_0} f for f(x) = x²/10 is shown in Fig. 3.8.

Figure 3.8 Projection of f(x) = x²/10 onto the Haar basis.

Less fidelity is observed in the projection of a more complex function onto V_0 in Fig. 3.9(a). To improve the fidelity of the representation, we are tempted to use a narrower generating function, for example, φ_{−1,n}(x) = √2 b⁰(2x − n). As illustrated in Fig. 3.9(b), this rescaled generating function does, in fact, improve the representation fidelity. Continuing this train of thought, we define families of sampling functions on scales j such that

\phi_{j,n}(x) = \frac{1}{\sqrt{2^{j}}}\, b^0\!\left(\frac{x}{2^{j}} - n\right)   (3.105)

Each rescaled generating function corresponds to a new Hilbert space of functions V_j ⊂ L²(R). Note that the basis functions for the space V_j can be expressed in the space V_{j−1} as

\phi_{j,n}(x) = \frac{1}{\sqrt{2}}\left[\phi_{j-1,2n}(x) + \phi_{j-1,2n+1}(x)\right]   (3.106)

This means that V_{j+1} ⊂ V_j. We observe, of course, that estimation of f(x) is more accurate on V_j than on V_{j+1}. For the Haar scaling function this refinement process continues indefinitely, until in the limit

\lim_{j \to -\infty} V_j = L^2(\mathbb{R})   (3.107)

Figure 3.9 Projection of f(x) onto the Haar basis on scales 0, −1, and −3: (a) f(x) and P_{V_0} f; (b) f(x) and P_{V_{−1}} f; (c) f(x) and P_{V_{−3}} f.
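For the Haar scaling function, the projections P_{V_j} f shown in Figs. 3.8 and 3.9 reduce to averaging f over intervals of width 2^j. The sketch below approximates the inner products of Eqn. (3.103) by Riemann sums; the interval [0, 4), the oversampling factor, and the example function are arbitrary choices made for illustration.

```python
import numpy as np

def haar_projection(f, x0, x1, j, oversample=64):
    """Return (box edges, coefficients) for the projection of f onto V_j on [x0, x1).
    Each coefficient is <phi_{j,n} | f> for a Haar box of width 2**j,
    computed by a Riemann-sum approximation of Eq. (3.103)."""
    width = 2.0**j
    edges = np.arange(x0, x1, width)
    coeffs = []
    for left in edges:
        xs = left + width * (np.arange(oversample) + 0.5) / oversample
        # <phi_{j,n} | f> = 2**(-j/2) * (integral of f over the box)
        coeffs.append(np.mean(f(xs)) * width / np.sqrt(width))
    return edges, np.array(coeffs)

# Projection of f(x) = x**2 / 10 onto V_0, V_{-1}, and V_{-3}, as in Figs. 3.8 and 3.9.
f = lambda x: x**2 / 10
for j in (0, -1, -3):
    edges, c = haar_projection(f, 0.0, 4.0, j)
    # The piecewise-constant estimate over each box is c_n * 2**(-j/2).
    print(j, (c / np.sqrt(2.0**j))[:4])
```

As j decreases, the boxes narrow and the piecewise-constant estimate tracks f more closely, which is the fidelity improvement seen in Fig. 3.9.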
Nonredundant representation of f(x) on multiple scales is a goal of wavelet theory. We achieve this goal by considering the subspace W_j, the orthogonal complement of V_j in V_{j−1}. By "orthogonal" we mean V_j ∩ W_j = {0}. By design, V_j and W_j together span V_{j−1}; that is,

V_{j-1} = V_j \oplus W_j   (3.108)

The wavelet corresponding to the scaling function φ(x) is the generating function for the basis of W_0. For example, the wavelet corresponding to the Haar scaling function is

\psi(x) = b^0(2x) - b^0(2x - 1) = \frac{1}{\sqrt{2}}\left[\phi_{-1,0}(x) - \phi_{-1,1}(x)\right]   (3.109)

The scaling function can also be expressed in the basis of V_{−1} as

\phi(x) = b^0(x) = \frac{1}{\sqrt{2}}\left[\phi_{-1,0}(x) + \phi_{-1,1}(x)\right]   (3.110)

The wavelet and scaling functions for this case are shown in Fig. 3.10. Since the basis functions for V_{−1} can be expressed in terms of the scaling and wavelet functions as

\phi_{-1,n}(x) =
\begin{cases}
\dfrac{1}{\sqrt{2}}\left[\phi\!\left(x - \tfrac{n}{2}\right) + \psi\!\left(x - \tfrac{n}{2}\right)\right] & \text{for } n \text{ even} \\
\dfrac{1}{\sqrt{2}}\left[\phi\!\left(x - \tfrac{n-1}{2}\right) - \psi\!\left(x - \tfrac{n-1}{2}\right)\right] & \text{for } n \text{ odd}
\end{cases}   (3.111)

we see that linear combinations of the bases φ_{0,n}(x) and ψ(x − n) span the space V_{−1}. Since ψ(x) is orthogonal to all of the basis vectors φ_{0,n}(x), ψ(x) ∉ V_0. ψ(x − n) is also orthogonal to ψ(x − m) for m ≠ n. Thus {ψ(x − n)} is an orthonormal basis for W_0. By scaling the wavelet function, one arrives at a basis for W_j. In exact correspondence to Eqn. (3.105), the basis for W_j is

\psi_{j,n}(x) = \frac{1}{\sqrt{2^{j}}}\, \psi\!\left(\frac{x}{2^{j}} - n\right)   (3.112)

There is a substantial difference between the subspaces generated by the scaling function and the subspaces generated by the wavelet, however. While

\{0\} \subset \cdots \subset V_2 \subset V_1 \subset V_0 \subset V_{-1} \subset V_{-2} \subset \cdots \subset L^2(\mathbb{R})   (3.113)

the wavelet subspaces are not similarly nested. Specifically, W_j ⊂ V_{j−1} but W_j ⊄ W_{j−1}. In fact, all of the wavelet subspaces are mutually orthogonal. This means that ⋂_{j∈Z} W_j = {0} and that the wavelet subspaces decompose L²(R) nonredundantly, ⊕_{j∈Z} W_j = L²(R).
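For the Haar system, the splitting V_{j−1} = V_j ⊕ W_j of Eqn. (3.108) acts on coefficients as a pairwise sum and difference, and Eqn. (3.111) gives the corresponding reconstruction. The minimal sketch below shows one analysis/synthesis level; the array conventions are assumptions of the sketch rather than anything fixed by the text.

```python
import numpy as np

def haar_analysis(c_fine):
    """Split coefficients on V_{j-1} into scaling (V_j) and wavelet (W_j) parts.
    c_fine: even-length array of coefficients c_{j-1,n}."""
    even, odd = c_fine[0::2], c_fine[1::2]
    scaling = (even + odd) / np.sqrt(2)   # projection onto V_j, cf. Eq. (3.106)
    wavelet = (even - odd) / np.sqrt(2)   # projection onto W_j, cf. Eq. (3.109)
    return scaling, wavelet

def haar_synthesis(scaling, wavelet):
    """Recombine V_j and W_j coefficients into V_{j-1} coefficients, cf. Eq. (3.111)."""
    c_fine = np.empty(2 * scaling.size)
    c_fine[0::2] = (scaling + wavelet) / np.sqrt(2)
    c_fine[1::2] = (scaling - wavelet) / np.sqrt(2)
    return c_fine

c = np.random.rand(16)                       # fine-scale Haar coefficients
s, w = haar_analysis(c)
assert np.allclose(haar_synthesis(s, w), c)  # V_{j-1} = V_j (+) W_j, nonredundantly
```

Applying haar_analysis recursively to the scaling coefficients yields the familiar multilevel Haar wavelet transform, one wavelet subspace W_j per level.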
