The Essential Guide to Image Processing - Part 18


CHAPTER 19 Gradient and Laplacian Edge Detection

19.5 Approaches for Color and Multispectral Images

FIGURE 19.10 Canny edge detector of Eq. (19.22) applied after Gaussian smoothing over a range of σ: (a) σ = 0.5; (b) σ = 1; (c) σ = 2; and (d) σ = 4. The thresholds are fixed in each case at T_U = 10 and T_L = 4.

The only computational cost beyond that for grayscale images is incurred in obtaining the luminance component image, if necessary. In many color spaces, such as YIQ, HSL, CIELUV, and CIELAB, the luminance image is simply one of the components of the representation. For others, such as RGB, computing the luminance image is usually easy and efficient. The main drawback of luminance-only processing is that important edges are often not confined to the luminance component. Therefore, a gray level difference in the luminance component is often not the most appropriate criterion for edge detection in color images.

FIGURE 19.11 Canny edge detector of Eq. (19.22) applied after Gaussian smoothing with σ = 2: (a) T_U = 10, T_L = 1; (b) T_U = T_L = 10; (c) T_U = 20, T_L = 1; (d) T_U = T_L = 20. As T_L is changed, notice the effect on the results of hysteresis thresholding.

Another rather obvious approach is to apply a desired edge detection method separately to each color component and construct a cumulative edge map. One possibility for the overall gradient magnitude, shown here for the RGB color space, combines the component gradient magnitudes [24]:

    |∇f_c(x, y)| = |∇f_R(x, y)| + |∇f_G(x, y)| + |∇f_B(x, y)|.

The results, however, are biased according to the properties of the particular color space used. It is often important to employ a color space that is appropriate for the target application.
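As a concrete illustration, the summed-magnitude rule can be sketched in a few lines of NumPy. This is only a sketch, not code from the chapter: it assumes central differences (via `np.gradient`) as the per-channel gradient estimator, and the function names are ours.

```python
import numpy as np

def channel_gradient_magnitude(channel):
    # Central-difference gradient estimates along rows (y) and columns (x).
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy)

def summed_color_gradient(rgb):
    # |grad f_c| = |grad f_R| + |grad f_G| + |grad f_B|
    # (sum of the per-channel gradient magnitudes).
    return sum(channel_gradient_magnitude(rgb[..., k]) for k in range(3))
```

On a vertical step edge of height 1 in all three channels, each channel contributes a central-difference magnitude of 0.5 at the pixels flanking the step, so the combined response there is 1.5.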
For example, edge detection that is intended to approximate the behavior of the human visual system should use a color space having a perceptual basis, such as CIELUV or perhaps HSL. Another complication is that the components' gradient vectors may not always be similarly oriented, making the search for local maxima of |∇f_c| along the gradient direction more difficult. If a total gradient image were computed by summing the color component gradient vectors, not just their magnitudes, then inconsistent orientations of the component gradients could destructively interfere and nullify some edges.

Vector approaches to color edge detection, while generally less computationally efficient, tend to have better theoretical justification. Euclidean distance in color space between the color vectors of a given pixel and its neighbors can be a good basis for an edge detector [24]. For the RGB case, the magnitude of the vector gradient is

    |∇f_c(x, y)| = √( |∇f_R(x, y)|² + |∇f_G(x, y)|² + |∇f_B(x, y)|² ).

Trahanias and Venetsanopoulos [29] described the use of vector order statistics as the basis for color edge detection. A later paper by Scharcanski and Venetsanopoulos [26] furthered the concept. While not strictly founded on the gradient or Laplacian, their techniques are effective and worth mentioning here because of their vector bases. The basic idea is to look for changes in local vector statistics, particularly vector dispersion, that indicate the presence of edges.

Multispectral images can have many components, complicating the edge detection problem even further. Cebrián et al. [6] describe several methods that are useful for multispectral images having any number of components. Their description uses the second directional derivative in the gradient direction as the basis for the edge detector, but other types of detectors can be used instead.
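The root-sum-of-squares combination differs from the summed-magnitude rule only in how the per-channel magnitudes are pooled. A hedged NumPy sketch follows, again assuming central differences as the gradient estimator; the loop works for any number of bands, not just RGB.

```python
import numpy as np

def vector_gradient_magnitude(image):
    """Root-sum-of-squares pooling of per-band gradient magnitudes.

    image: array of shape (H, W, N) for N bands (N = 3 for RGB).
    """
    total = np.zeros(image.shape[:2])
    for k in range(image.shape[-1]):
        gy, gx = np.gradient(image[..., k].astype(float))
        total += gx**2 + gy**2          # |grad f_k|^2 accumulated per band
    return np.sqrt(total)
```

For the same unit step edge in all three channels, each band contributes 0.25 to the squared sum at the flanking pixels, giving √0.75 ≈ 0.866 rather than the 1.5 of the summed-magnitude rule.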
The components-average method forms a grayscale image by averaging all components, which have first been Gaussian-smoothed, and then finds the edges in that image. The method generally works well because multispectral images tend to have high correlation between components. However, it is possible for edge information to diminish or vanish if the components destructively interfere.

Cumani [8] explored operators for computing the vector gradient and created an edge detection approach based on combining the component gradients. A multispectral contrast function is defined, and the image is searched for pixels having maximal directional contrast. Cumani's method does not always detect edges present in the component bands, but it better avoids the problem of destructive interference between bands.

The maximal gradient method constructs a single gradient image from the component images [6]. The overall gradient image's magnitude and direction values at a given pixel are those of the component having the greatest gradient magnitude at that pixel. Some edges can be missed by the maximal gradient technique because they may be swamped by differently oriented, stronger edges present in another band.

The method of combining component edge maps is the least efficient because an edge map must first be computed for every band. On the positive side, this method is capable of detecting any edge that is detectable in at least one component image. Combination of component edge maps into a single result is made more difficult by the edge location errors induced by Gaussian smoothing done in advance. The superimposed edges can become smeared in width because of the accumulated uncertainty in edge localization. A thinning step applied during the combination procedure can greatly reduce this edge blurring problem.
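The maximal gradient method can be sketched as follows. This is an illustrative reading of the description above, not code from [6]; the per-band gradient operator (central differences) and the function name are our assumptions.

```python
import numpy as np

def maximal_gradient(bands):
    """bands: array of shape (H, W, N) with N spectral bands.

    Returns per-pixel (magnitude, direction) taken from the band whose
    gradient magnitude is greatest at that pixel.
    """
    H, W, N = bands.shape
    mags = np.empty((H, W, N))
    dirs = np.empty((H, W, N))
    for k in range(N):
        gy, gx = np.gradient(bands[..., k].astype(float))
        mags[..., k] = np.hypot(gx, gy)
        dirs[..., k] = np.arctan2(gy, gx)
    winner = np.argmax(mags, axis=-1)        # strongest band per pixel
    ii, jj = np.indices((H, W))
    return mags[ii, jj, winner], dirs[ii, jj, winner]
```

Note how this realizes the failure mode described above: a weak edge in one band is simply ignored wherever another band presents a stronger, possibly differently oriented, gradient.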
19.6 SUMMARY

Gray level edge detection is most commonly performed by convolving an image, f, with a filter that is somehow based on the idea of the derivative. Conceptually, edges can be revealed by locating either the local extrema of the first derivative of f or the zero-crossings of its second derivative. The gradient and the Laplacian are the primary derivative-based functions used to construct such edge-detection filters. The gradient, ∇, is a 2D extension of the first derivative, while the Laplacian, ∇², acts as a 2D second derivative. A variety of edge detection algorithms and techniques have been developed that are based on the gradient or Laplacian in some way.

Like any type of derivative-based filter, those based on these two functions tend to be very sensitive to noise. Edge location errors, false edges, and broken or missing edge segments are often problems when edge detection is applied to noisy images. For gradient techniques, thresholding is a common way to suppress noise and can be done adaptively for better results. Gaussian smoothing is also very helpful for noise suppression, especially when second-derivative methods such as the Laplacian are used. The Laplacian-of-Gaussian approach can also provide edge information over a range of scales, helping to further improve detection accuracy and noise suppression as well as providing clues that may be useful during subsequent processing.

Recent comparisons of various edge detectors have been made by Heath et al. [13] and Bowyer et al. [4]. They concluded that the subjective quality of the results of various edge detectors applied to real images is quite dependent on the images themselves. Thus, there is no single edge detector that produces a consistently best overall result. Furthermore, they found it difficult to predict the best choice of edge detector for a given situation.

REFERENCES

[1] D. H. Ballard and C. M. Brown. Computer Vision. Prentice-Hall, Englewood Cliffs, NJ, 1982.
[2] V. Berzins.
Accuracy of Laplacian edge detectors. Comput. Vis. Graph. Image Process., 27:195–210, 1984.
[3] A. C. Bovik, T. S. Huang, and D. C. Munson, Jr. The effect of median filtering on edge estimation and detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-9:181–194, 1987.
[4] K. Bowyer, C. Kranenburg, and S. Dougherty. Edge detector evaluation using empirical ROC curves. Comput. Vis. Image Underst., 84:77–103, 2001.
[5] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8:679–698, 1986.
[6] M. Cebrián, M. Perez-Luque, and G. Cisneros. Edge detection alternatives for multispectral remote sensing images. In Proceedings of the 8th Scandinavian Conference on Image Analysis, Vol. 2, 1047–1054. NOBIM-Norwegian Soc. Image Process. & Pattern Recognition, Tromso, Norway, 1993.
[7] J. S. Chen, A. Huertas, and G. Medioni. Very fast convolution with Laplacian-of-Gaussian masks. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 293–298. IEEE, New York, 1986.
[8] A. Cumani. Edge detection in multispectral images. Comput. Vis. Graph. Image Process.: Graph. Models Image Process., 53:40–51, 1991.
[9] L. Ding and A. Goshtasby. On the Canny edge detector. Pattern Recognit., 34:721–725, 2001.
[10] R. M. Haralick and L. G. Shapiro. Computer and Robot Vision, Vol. 1. Addison-Wesley, Reading, MA, 1992.
[11] Q. Ji and R. M. Haralick. Efficient facet edge detection and quantitative performance evaluation. Pattern Recognit., 35:689–700, 2002.
[12] R. C. Hardie and C. G. Boncelet. Gradient-based edge detection using nonlinear edge enhancing prefilters. IEEE Trans. Image Process., 4:1572–1577, 1995.
[13] M. Heath, S. Sarkar, T. Sanocki, and K. Bowyer. Comparison of edge detectors: a methodology and initial study. Comput. Vis. Image Underst., 69(1):38–54, 1998.
[14] A. Huertas and G. Medioni. Detection of intensity changes with subpixel accuracy using Laplacian-Gaussian masks. IEEE Trans. Pattern Anal. Mach.
Intell., PAMI-8(5):651–664, 1986.
[15] S. R. Gunn. On the discrete representation of the Laplacian of Gaussian. Pattern Recognit., 32:1463–1472, 1999.
[16] A. K. Jain. Fundamentals of Digital Image Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[17] J. S. Lim. Two-Dimensional Signal and Image Processing. Prentice-Hall, Englewood Cliffs, NJ, 1990.
[18] D. Marr. Vision. W. H. Freeman, New York, 1982.
[19] D. Marr and E. Hildreth. Theory of edge detection. Proc. R. Soc. Lond. B, 207:187–217, 1980.
[20] B. Mathieu, P. Melchior, A. Oustaloup, and Ch. Ceyral. Fractional differentiation for edge detection. Signal Processing, 83:2421–2432, 2003.
[21] J. Merron and M. Brady. Isotropic gradient estimation. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 652–659. IEEE, New York, 1996.
[22] V. S. Nalwa and T. O. Binford. On detecting edges. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8(6):699–714, 1986.
[23] W. K. Pratt. Digital Image Processing, 2nd ed. Wiley, New York, 1991.
[24] S. J. Sangwine and R. E. N. Horne, editors. The Colour Image Processing Handbook. Chapman and Hall, London, 1998.
[25] S. Sarkar and K. L. Boyer. Optimal infinite impulse response zero crossing based edge detectors. Comput. Vis. Graph. Image Process.: Image Underst., 54(2):224–243, 1991.
[26] J. Scharcanski and A. N. Venetsanopoulos. Edge detection of color images using directional operators. IEEE Trans. Circuits Syst. Video Technol., 7(2):397–401, 1997.
[27] P. Siohan, D. Pele, and V. Ouvrard. Two design techniques for 2-D FIR LoG filters. In M. Kunt, editor, Proc. SPIE, Visual Communications and Image Processing, Vol. 1360, 970–981, 1990.
[28] V. Torre and T. A. Poggio. On edge detection. IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8(2):147–163, 1986.
[29] P. E. Trahanias and A. N. Venetsanopoulos. Color edge detection using vector order statistics. IEEE Trans. Image Process., 2(2):259–264, 1993.
[30] A. P.
Witkin. Scale-space filtering. In Proc. Int. Joint Conf. Artif. Intell., 1019–1022. William Kaufmann Inc., Karlsruhe, Germany, 1983.
[31] D. Ziou and S. Wang. Isotropic processing for gradient estimation. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 660–665. IEEE, New York, 1996.

CHAPTER 20
Diffusion Partial Differential Equations for Edge Detection
Scott T. Acton, University of Virginia

20.1 INTRODUCTION AND MOTIVATION

20.1.1 Partial Differential Equations in Image and Video Processing

The collision of imaging and differential equations makes sense. Without motion or change of scene or changes within the scene, imaging is worthless. First, consider a static environment: we would not need vision in such an environment, as the components of the scene are unchanging. In a dynamic environment, however, vision becomes the most valuable sense. Second, consider a constant-valued image with no internal changes or edges. Such an image is devoid of value in the information-theoretic sense. The need for imaging is based on the presence of change, and the mechanism for change in both time and space is described and governed by differential equations.

The partial differential equations (PDEs) of interest in this chapter enact diffusion. In chemistry or heat transfer, diffusion is a process that equilibrates concentration differences without creating or destroying mass. In image and video processing, we can consider the mass to be the pixel intensities or the gradient magnitudes, for example. These important differential equations are PDEs, since they contain partial derivatives with respect to spatial coordinates and time. These equations, especially in the case of anisotropic diffusion, are nonlinear PDEs since the diffusion coefficient is typically nonlinear.

20.1.2 Edges and Anisotropic Diffusion

Sudden, sustained changes in image intensity are called edges.
We know that the human visual system makes extensive use of edges to perform visual tasks such as object recognition [1]. Humans can recognize complex 3D objects using only line drawings or image edge information. Similarly, the extraction of edges from digital imagery allows a valuable abstraction of information and a reduction in processing and storage costs. Most definitions of image edges involve some concept of feature scale. Edges are said to exist at certain scales: edges from detail exist at fine scales, and edges from the boundaries of large objects exist at large scales. Furthermore, large-scale edges exist at fine scales, leading to a notion of edge causality.

In order to locate edges of various scales within an image, it is desirable to have an image operator that computes a scaled version of a particular image or frame in a video sequence. This operator should preserve the position of such edges and facilitate the extraction of the edge map through the scale space. The tool of isotropic diffusion, a linear lowpass filtering process, is not able to preserve the position of important edges through the scale space. Anisotropic diffusion, however, meets this criterion and has been used effectively in conjunction with edge detection.

The main benefit of anisotropic diffusion is edge preservation through the image smoothing process. Anisotropic diffusion yields intra-region smoothing, not inter-region smoothing, by impeding diffusion at the image edges. The anisotropic diffusion process can be used to retain image features of a specified scale. Furthermore, the localized computation of anisotropic diffusion allows efficient implementation on a locally-interconnected computer architecture. Caselles et al. furnish additional motivation for using diffusion in image and video processing [2].
The diffusion methods use localized models in which discrete filters become PDEs as the sample spacing goes to zero. The PDE framework allows various properties to be proved or disproved, including stability, locality, causality, and the existence and uniqueness of solutions. Through the established tools of numerical analysis, high degrees of accuracy and stability are possible.

In this chapter, we introduce diffusion for image and video processing. We specifically concentrate on the implementation of anisotropic diffusion, providing several alternatives for the diffusion coefficient and the diffusion PDE. Energy-based variational diffusion techniques are also reviewed. Recent advances in anisotropic diffusion processes, including multiresolution techniques, multispectral techniques, and techniques for ultrasound and radar imagery, are discussed. Finally, the extraction of image edges after anisotropic diffusion is addressed, and vector diffusion processes for attracting active contours to boundaries are examined.

20.2 BACKGROUND ON DIFFUSION

20.2.1 Scale Space and Isotropic Diffusion

In order to introduce the diffusion-based processing methods and the associated processes of edge detection, let us define some notation. Let I represent an image with real-valued intensity I(x) at position x in the domain Ω. When defining the PDEs for diffusion, let I_t be the image at time t with intensities I_t(x). Corresponding with image I is the edge map e, the image of "edge pixels" e(x) with Boolean range (0 = no edge, 1 = edge) or real-valued range e(x) ∈ [0, 1]. The set of edge positions in an image is denoted by Ψ.

The concept of scale space is at the heart of diffusion-based image and video processing. A scale space is a collection of images that begins with the original, fine-scale image and progresses toward more coarse-scale representations.
Using a scale space, important image processing tasks such as hierarchical searches, image coding, and image segmentation may be efficiently realized.

Implicit in the creation of a scale space is the scale-generating filter. Traditionally, linear filters have been used to scale an image. In fact, the scale space of Witkin [3] can be derived using a Gaussian filter:

    I_t = G_σ * I_0,                                      (20.1)

where G_σ is a Gaussian kernel with standard deviation (scale) σ, and I_0 = I is the initial image. If

    σ = √t,                                               (20.2)

then the Gaussian filter result may be achieved through an isotropic diffusion process governed by

    ∂I_t/∂t = ∇²I_t,                                      (20.3)

where ∇²I_t is the Laplacian of I_t [3, 4]. To evolve one pixel of I, we have the following PDE:

    ∂I_t(x)/∂t = ∇²I_t(x).                                (20.4)

The Marr-Hildreth paradigm uses a Gaussian scale space to define multiscale edge detection. Using the Gaussian-convolved (or diffused) images, one may detect edges by applying the Laplacian operator and then finding zero-crossings [5]. This popular method of edge detection, called the Laplacian-of-a-Gaussian (LoG), is strongly motivated by the biological vision system. However, the edges detected from isotropic diffusion (Gaussian scale space) suffer from artifacts such as corner rounding and from edge localization error (deviation of the detected edge position from the "true" edge position). The localization errors increase with increased scale, precluding straightforward multiscale image/video analysis. As a result, many researchers have pursued anisotropic diffusion as a viable alternative for generating images suitable for edge detection. This chapter focuses on such methods.

20.2.1.1 Anisotropic Diffusion

The main idea behind anisotropic diffusion is the introduction of a function that inhibits smoothing at the image edges. This function, called the diffusion coefficient c(x), encourages intra-region smoothing over inter-region smoothing.
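Equations (20.3)-(20.4) can be integrated numerically with an explicit Euler scheme. The sketch below is one standard discretization (a replicated-boundary 5-point Laplacian), offered as an illustration rather than the chapter's own code; the step size bound dt ≤ 1/4 matches the 2D stability condition quoted later for Eq. (20.6).

```python
import numpy as np

def isotropic_diffuse(image, iterations, dt=0.25):
    """Explicit Euler iteration of dI/dt = Laplacian(I), Eqs. (20.3)-(20.4).

    Replicated ("edge") padding gives reflecting boundaries, so total
    intensity is conserved; dt <= 0.25 is required for 2D stability.
    """
    I = image.astype(float).copy()
    for _ in range(iterations):
        P = np.pad(I, 1, mode="edge")
        lap = (P[:-2, 1:-1] + P[2:, 1:-1] +
               P[1:-1, :-2] + P[1:-1, 2:] - 4.0 * I)
        I += dt * lap
    return I
```

Diffusing a unit impulse spreads it into an approximately Gaussian blob while conserving its total mass, which is the discrete counterpart of the Gaussian scale space of Eq. (20.1).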
For example, if c(x) is constant at all locations, then smoothing progresses in an isotropic manner. If c(x) is allowed to vary according to the local image gradient, we have anisotropic diffusion. A basic anisotropic diffusion PDE is

    ∂I_t(x)/∂t = div{ c(x) ∇I_t(x) }                      (20.5)

with I_0 = I [6].

The discrete formulation proposed in [6] will be used as a general framework for implementation of anisotropic diffusion in this chapter. Here the image intensities are updated according to

    [I(x)]^(t+1) = [ I(x) + ΔT Σ_{d=1}^{Γ} c_d(x) ∇I_d(x) ]^t,    (20.6)

where Γ is the number of directions in which diffusion is computed, ∇I_d(x) is the directional derivative (simple difference) in direction d at location x, and time (in iterations) is given by t. ΔT is the time step; for stability, ΔT ≤ 1/2 in the 1D case and ΔT ≤ 1/4 in the 2D case using four diffusion directions.

For 1D discrete-domain signals, the simple differences ∇I_d(x) with respect to the "western" and "eastern" neighbors, respectively (neighbors to the left and right), are defined by

    ∇I_1(x) = I(x − h_1) − I(x)                           (20.7)

and

    ∇I_2(x) = I(x + h_2) − I(x).                          (20.8)

The parameters h_1 and h_2 define the sample spacing used to estimate the directional derivatives. For the 2D case, the diffusion directions include the "northern" and "southern" directions (up and down) as well as the "western" and "eastern" directions (left and right).

Given the motivation and basic definition of diffusion-based processing, we will now define several implementations of anisotropic diffusion that can be applied for edge extraction.

20.3 ANISOTROPIC DIFFUSION TECHNIQUES

20.3.1 The Diffusion Coefficient

The link between edge detection and anisotropic diffusion is found in the edge-preserving nature of anisotropic diffusion. The function that impedes smoothing at the edges is the diffusion coefficient.
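A direct transcription of the update rule (20.6) with four diffusion directions might look like the following sketch. The exponential diffusion coefficient used here is one common Perona-Malik choice, adopted here only so the example is complete; treat that particular form, and the gradient-scale parameter K, as assumptions.

```python
import numpy as np

def anisotropic_diffuse(image, iterations, K=10.0, dt=0.25):
    """Four-direction discrete anisotropic diffusion per Eq. (20.6).

    Uses simple differences toward the N, S, W, E neighbors
    (the 2D analog of Eqs. 20.7-20.8) and, as an illustrative choice,
    c_d(x) = exp(-(|grad_d I| / K)**2). dt <= 1/4 for 2D stability.
    """
    I = image.astype(float).copy()
    for _ in range(iterations):
        P = np.pad(I, 1, mode="edge")
        diffs = [P[:-2, 1:-1] - I,      # north
                 P[2:, 1:-1] - I,       # south
                 P[1:-1, :-2] - I,      # west
                 P[1:-1, 2:] - I]       # east
        update = np.zeros_like(I)
        for d in diffs:
            c = np.exp(-(np.abs(d) / K) ** 2)   # diffusion coefficient
            update += c * d
        I += dt * update
    return I
```

On a strong step edge (contrast much larger than K), the coefficient is essentially zero across the step, so the edge survives the smoothing, which is exactly the intra-region / inter-region distinction described above.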
Therefore, the selection of the diffusion coefficient is the most critical step in performing diffusion-based edge detection. We will review several possible variants of the diffusion coefficient and discuss the associated positive and negative attributes.

To simplify the notation, we will denote the diffusion coefficient at location x by c(x) in the continuous case. For the discrete-domain case, c_d(x) represents the diffusion coefficient for direction d at location x. Although the diffusion coefficients here are defined using c(x) for the continuous case, the functions are equivalent in the discrete-domain case of c_d(x). Typically c(x) is a nonincreasing function of |∇I(x)|, the gradient magnitude at position x. As such, we often refer to the diffusion coefficient as c(|∇I(x)|). For small values of |∇I(x)|, c(x) tends to unity. As |∇I(x)| increases, c(x) decreases to zero.

Teboul et al. [7] establish three conditions for edge-preserving diffusion coefficients. These conditions are (1) lim_{|∇I(x)|→0} c(x) = M where 0 < M < ∞, and (2) lim_{|∇I(x)|→∞} c(x) = 0, [...]

[...] expressing, respectively, the degree of the resistance to stretching and bending of the contour. The external energy E_external is typically defined such that the contour seeks the edges in the image I:

    E_external = −∫₀¹ f(X(s), Y(s)) ds,  where  f(x, y) = |∇I(x, y)|².    (20.42)

To minimize the contour energy (E_external + E_internal), the calculus of variations [47, 48] is applied to obtain the Euler equations. [...]

20.3.2 The Diffusion PDE

In addition to the basic anisotropic diffusion PDE given in Section 20.1.2, other diffusion mechanisms may be employed to adaptively filter an image for edge detection. Nordstrom [18] used an additional term to maintain fidelity to the input image, to avoid the selection of a stopping time, and to avoid termination of the diffusion at a trivial solution, such as a constant image.
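Two classic diffusion-coefficient variants, due to Perona and Malik, satisfy Teboul conditions (1) and (2): both tend to a finite constant (here M = 1) as the gradient magnitude vanishes and decay to zero as it grows. A small sketch follows; K is a gradient-scale parameter, and these specific forms are offered as examples rather than as the chapter's definitive choices.

```python
import numpy as np

def c_exp(g, K):
    """c(|grad I|) = exp(-(g/K)^2): sharp cutoff around gradient scale K."""
    return np.exp(-(g / K) ** 2)

def c_frac(g, K):
    """c(|grad I|) = 1 / (1 + (g/K)^2): heavier tail, gentler edge stopping."""
    return 1.0 / (1.0 + (g / K) ** 2)
```

The exponential form suppresses diffusion across edges more aggressively once the gradient exceeds K; the rational form lets some smoothing leak across weaker edges, which can be preferable in noisy imagery.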
Figure 20.10(b) shows the external forces generated by gradient vector flow (GVF) for the circular target boundary shown in Fig. 20.10(a).

20.5.2 Gradient Vector Flow

Diffusion can be used to capture an edge that is distant from the initial active contour. In this situation, the active contour is not driven toward the edge using (20.42), because the contour is not in "contact" with the gradient from the edge. To alleviate [...]

[...] sensitive to noise and does not identify the important image boundaries. The second method uses mean curvature motion and a multispectral gradient formula to achieve anisotropic, edge-preserving diffusion. The idea behind mean curvature motion, as discussed above, is to diffuse in the direction opposite to the gradient such that the image level set objects move at a rate in proportion to their mean [...]

FIGURE 20.5 (a) An ultrasound image of a prostate phantom with implanted radioactive seeds; (b) the corresponding SRAD-diffused image; (c) the corresponding ICOV edge strength image [43].

The operator q(x) is called the instantaneous coefficient of variation (ICOV). The ICOV uses the absolute value of the difference of a normalized gradient magnitude and a normalized Laplacian operator to measure the strength of edges in speckled imagery [...] ultrasound image (Fig. 20.5(a)) that has been processed by SRAD (Fig. 20.5(b)) and where edges are displayed using the ICOV values (Fig. 20.5(c)). Given an intensity image I having no zero-valued intensities over the image domain, the output image is evolved according to the following PDE:

    ∂I_t(x)/∂t = div{ c(q(x)) ∇I_t(x) },                  (20.38)

where ∇ is the gradient operator, div is the divergence operator, and |·| denotes the [...]
[...] scale-generating operator, the presampling operation is constrained morphologically, not by the traditional sampling theorem. In the nonlinear case, the scale-generating operator should remove image features not supported in the subsampled domain. Therefore, morphological methods [32, 33] for creating image pyramids have also been used in conjunction with the morphological sampling theorem [34]. The anisotropic [...]

[...] estimate of E_{L+1} = 0. The new error estimate E_{L+1} after relaxation can then be transferred to the finer grid to correct the initial image estimate J in a simple two-grid scheme. Alternatively, the process of transferring the residual to successively coarser grids can be continued until a grid is reached on which a closed-form solution is possible. Then, the error estimates are propagated back to the original grid [...]

FIGURE 20.10 (a) A circle; (b) the external forces on the active contour after vector diffusion by GVF; (c) the external forces on the active contour after vector diffusion by MGVF with (v_x, v_y) = (−1, 0); (d) the external forces on the active contour after vector diffusion by MGVF with (v_x, v_y) = (−1, −1).

[...] where μ is a nonnegative parameter expressing the degree of smoothness of the field (u, v) (and can [...] as the measure of edge strength: e(x) = 1 if c(x) < T, and e(x) = 0 otherwise. The threshold is set to T = σ_e of the image, as defined in [44]:

    σ_e = 1.4826 · med{ | |∇I(x)| − med(|∇I(x)|) | },

where the med operator is the median performed over the entire image domain Ω. The constant 1.4826 is derived from the mean absolute deviation of the normal distribution with unit variance [17].

20.4 Application of Anisotropic Diffusion to [...]
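The robust threshold σ_e above is 1.4826 times the median absolute deviation (MAD) of the gradient magnitudes, a standard robust scale estimate. A minimal sketch, assuming central differences for the gradient (the function name is ours):

```python
import numpy as np

def mad_edge_threshold(image):
    """T = 1.4826 * med{ | |grad I| - med(|grad I|) | } over the image."""
    gy, gx = np.gradient(image.astype(float))
    g = np.hypot(gx, gy).ravel()
    return 1.4826 * np.median(np.abs(g - np.median(g)))
```

Because the median is insensitive to the minority of pixels that lie on true edges, the resulting threshold tracks the gradient level of the (typically dominant) smooth regions rather than being pulled upward by strong edges.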
