Volume 17 - Nondestructive Evaluation and Quality Control, Part 13

suited to specific applications. The vidicons offer the highest frame rates in formats of 1024² and more, the line scanners have excellent resolution at moderate prices, and the microdensitometers are the most precise with regard to photometric and geometric quantities. Most microdensitometers will digitize to 12 bits, which limits their dynamic range to about 3.6 in film density. As noted in Table 2, the standard scanning microdensitometer (row one) will take longer (up to several hours) to scan film if the film density is over 2. The laser scanners are faster (depending on the make) because of the more intense laser light source and the fact that one axis is scanned with the light beam rather than mechanically.

Table 2 Typical characteristics of image digitization devices

| Category | Type | Image quality | Pixel size or pixel number (resolution) | Acquisition time per 1024² by 8-bit image | Interscene dynamic range | Image area, mm × mm (in. × in.) | Faceplate illumination, lx | Faceplate illumination, ftc |
| Point source and detector | Microdensitometer | Excellent | 5-75 µm (0.0002-0.0030 in.)(a) | 15 min | 4096 | 500 × 500 (20 × 20) | ... | ... |
| Point source and detector | Laser scanner | Excellent | 35 or 75 µm (0.0014 or 0.0030 in.)(a) | 1 min | 4096 | 430 × 350 (17 × 14) | ... | ... |
| Linear detector | Linear CCD array | Good | 4096 | 500 ms | >1500 | ... | ... | ... |
| Linear detector | Photodiode array | Good | 1024 | 5 s | 1000 | ... | ... | ... |
| Solid-state cameras | CCD | Fair | 1320 × 1035 | 140 ms | 500 | ... | 1.5 | 0.14 |
| Solid-state cameras | CID | Fair | 776 × 512 | 33 ms(b) | 200 | ... | 0.1 | 0.009 |
| Tube-type cameras (vidicon family) | Vidicon (Sb₂S₃) | Fair | >1500 | 33 ms, lag = 20% | 80-200 | ... | 5 | 0.5 |
| Tube-type cameras (vidicon family) | Newvicon (ZnSe) | Fair | 800 | 33 ms, lag = 10-20% | 50-200 | ... | 0.5 | 0.05 |
| Tube-type cameras (vidicon family) | Pasecon/Chalnicon (CdSe) | Fair | 1600 | 33 ms, lag = 5-10% | 30-60 | ... | 1 | 0.09 |
| Tube-type cameras (vidicon family) | Saticon (Se+Te+As) | Fair | 1200 | 33 ms, lag = 3% | 50-160 | ... | 6 | 0.6 |
| Tube-type cameras (vidicon family) | Plumbicon (PbO) | Fair | 1200 | 33 ms, lag = 4% | 80-160 | ... | 2.5 | 0.23 |
| Tube-type cameras (vidicon family) | SIT (silicon) | Fair | 750 | 33 ms, lag = 7% | 60 | ... | 0.01 | 0.0009 |

(a) Aperture. (b) 776 × 512

Linear CCD or photodiode arrays are good choices for film digitization and can have even better dynamic ranges than those listed in Table 2. When these devices are cooled, each 7 °C (13 °F) reduction in temperature reduces the root mean square noise by a factor of two. The charge-coupled device (CCD) and charge injection device (CID) are capable of good dynamic range and fair resolution, but have certain artifacts (Ref 12). Again, if the devices are cooled, the noise floor is reduced, and they can be integrated for long periods to enhance the dynamic range and sensitivity. Cooled CCDs are available that rival the low-light sensitivity of silicon-intensified targets (SITs), but not the frame speed. In general, the frame rate, dynamic range, and resolution are all interrelated.

The interscene dynamic range is listed as the maximum achievable for the microdensitometers and as the dynamic range that can be achieved at the given frame rate for the other devices. The faceplate illumination given for the tubes and the solid-state detectors assumes mid-level illumination (halfway between saturation and preamplifier noise) (Ref 13). This may vary among tubes and generic types by a factor of three to four (Ref 14). The interscene dynamic range will also vary greatly, but can be maximized by the proper selection of a tube such that a dynamic range of 200 can be achieved at 33 ms/field with high resolution (>1000 lines). The quoted resolution for the tubes is at a modulation transfer function (MTF) of 5%, which means that the contrast between a black line and a white line is only 5% at the stated resolution. This, of course, is measured at optimum illumination and at the center of the tube image field. Under other conditions, the resolution will be less. For comparison, a 1024² CCD camera may have an MTF of 5% at 750 lines.
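As a quick check on the figures quoted above, the film-density range a linear digitizer can represent is roughly the base-10 logarithm of its number of gray levels, and the 5% MTF criterion is simply a contrast ratio between adjacent white and black lines. A minimal sketch in Python; the function names and the sample intensity values are illustrative, not from the handbook:

```python
import math

def density_range(bits):
    """Approximate film-density span representable by a linear digitizer of the given bit depth."""
    return math.log10(2 ** bits)

def modulation(i_max, i_min):
    """Contrast (modulation) between adjacent white and black lines, as used in the MTF criterion."""
    return (i_max - i_min) / (i_max + i_min)

print(round(density_range(12), 2))       # 3.61 -- the ~3.6 quoted for 12-bit microdensitometers
print(round(modulation(1.0, 0.905), 3))  # ~0.05 -- a 5% modulation at the limiting resolution
```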
The lags quoted for the tubes are typical for the particular type at 3 TV fields, or 50 ms. Tube cameras are primarily used in radiation environments or in specialized applications, such as high-resolution real-time radiography, in which the frame rate is higher than that achievable by current CCD designs. Charge-coupled device camera design is rapidly evolving for high-resolution scientific use and can be expected to improve with regard to real-time frame rates (Ref 12, 15).

It should be noted that a 355 × 432 mm (14 × 17 in.) film digitized at a resolution of 50 µm (0.002 in.) with 12-bit accuracy will produce a 92-Mbyte file. Just writing or reading this file to or from a hard disk could take up to 8 min (optical disks take even longer). Even with high-density optical disks and data compression, the digital storage of high-resolution radiographs represents a formidable problem.

References cited in this section

3. R.C. Gonzalez and P. Wintz, Digital Image Processing, Addison-Wesley, 1977
12. J.R. Janesick, T. Elliott, S. Collins, M.M. Blouke, and J. Freeman, Scientific Charge Coupled Devices, Opt. Eng., Vol 26 (No. 8), 1987, p 692-714
13. I.P. Csorba, Image Tubes, Howard W. Sams & Co., 1985
14. G.I. Yates, S.A. Jaramillo, V.H. Holmes, and J.P. Black, "Characterization of New FPS Vidicons for Scientific Imaging Applications," LA-11035-MS, US-37, Los Alamos National Laboratory, 1988
15. L.E. Rovich, Imaging Processes and Materials, Van Nostrand Reinhold, 1989

Digital Image Enhancement
T.N. Claytor and M.H. Jones, Los Alamos National Laboratory

Image Processing

Prior to image processing, it may be necessary to perform some type of preprocessing on the data. Most often, an image file will need to be converted into the workstation standard from some other type of format; for example, the files may be in an 8-, 16-, or 24-bit uncompressed format with a variable-length header and need to be converted to an 8-bit format with no header. Typically, a problem arises when a file must be read that is of unknown size, is in a compressed format, or has a limited or nonstandard color palette. Other preprocessing functions operate on the raw data in some manner before inputting it for image processing. For example, the system should have the capability to average, filter, and acquire full frames and to control the scanning parameters on the digitizer so that the noise level can be minimized. In other cases, more complicated operations are called for, such as tomographic reconstruction, synthetic aperture operations, or an FFT. Shown in Fig. 1(a) and (b) in the article "Use of Color for NDE" in this Volume are tomographs taken before and after oversampling by a factor of four and Wiener filtering of the input data set to correct for the point spread function (Ref 16). The preprocessing operation greatly increases the smoothness of the lines and reduces the artifacts.
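The format conversion and frame averaging mentioned above can be illustrated with a short NumPy sketch. The raw-file layout (a fixed-length header followed by row-major 16-bit pixels) and the function names are hypothetical, chosen only for illustration:

```python
import numpy as np

def load_raw_as_8bit(path, header_bytes, shape):
    """Strip a fixed-length header from a raw 16-bit image file and rescale it to 8 bits."""
    raw = np.fromfile(path, dtype=np.uint16, offset=header_bytes).reshape(shape)
    lo, hi = int(raw.min()), int(raw.max())
    scale = 255.0 / max(hi - lo, 1)
    return ((raw - lo) * scale).astype(np.uint8)

def average_frames(frames):
    """Average repeated acquisitions of the same scene to suppress random noise."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0)
```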
Reference cited in this section

16. K. Thompson, private communication, Sandia National Laboratories, 1988

Digital Image Enhancement
T.N. Claytor and M.H. Jones, Los Alamos National Laboratory

Image Enhancement

There are three major scientific applications for image processing:

• Enhancement of an image to facilitate viewing
• Manipulation and restoration of an image
• Measurement and separation of features

These functions are listed in Table 3. Many of the functions have a dual purpose and, as will be shown, are usually combined to form other functions.

Table 3 Image-processing software algorithms useful for NDE applications

| Image enhancement | Image operations | Information extraction |
| Contrast stretching | Scaling | Image statistics |
| Histogram equalization | Translation | Point, line, angle, perimeter, and area measurement |
| Contouring | Rotating | FFT transformation, one and two dimensional |
| Thresholding | Registration | Correlation |
| Composite image building | Warping | Edge detection |
| Palette operations | Combining | Deblurring |
| Color model selection | Filtering | Motion restoration |
| True color representation | Thickening, thinning | Noise cleaning |
| | Trend removal | Pattern recognition |

Contrast Stretching and Histogram Equalization. The concept of contrast stretching is shown in Fig. 3(a). The basic operation is given by the pixel value transform s = T(r):

T(r) = a·r + b (linear)
T(r) = a·log(r) + b (logarithmic)   (Eq 1)

where r is the original pixel value and s is the transformed pixel value.

Fig. 3 Concept of histogram stretching and equalization. (a) Histogram is stretched with a linear transformation. (b) Histogram is equalized such that the probability density is constant.

The linear stretch and offset method is the most commonly applied, while transforms of the nonlinear type can be used to convert film density to integrated dose or to linearize the film transfer function to account for base density. Although the manual contrast stretch is often used, a more automated contrast stretch is available with an operation known as histogram equalization. The general form is:

s = T(r) = ∫ p(w) dw, integrated from 0 to r   (Eq 2)

where p(w) is the probability density (normalized histogram) of the pixel values. A simple linear equalization example is shown in Fig. 3(b). Occasionally, it may be desirable to use a cubic or logarithmic equalization. The cubic emphasizes the lower values, making the image darker, while the exponential makes the image much brighter. Other types of transfer functions are often used to remove nonlinear response in the imaging or camera system. For isolated defects, a manual stretch usually gives good results. On textured materials, however, histogram equalization often works well. An example of a linear histogram equalization of a low-contrast ultrasonic image of a bond line in an explosively bonded steel-to-aluminum plate is shown in Fig. 2 in the article "Use of Color for NDE" in this Volume.

Contouring and Thresholding. Contouring and thresholding are often combined with palette operations to show gradations in intensity or to highlight defects. Color should be used to detect gradients in intensity because the eye is sensitive to many more colors (approximately 10⁴ shades) than gray levels. Thresholding can be done with a binary picture (black and white), a color scale, or a combination of gray and color. The concept of thresholding has particular application in the generation of a mask for future image-processing operations or the removal of the background. Contouring is used to change only those values between two pixel levels to a certain value. It can also be used in outlining high-definition features before further processing.
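A minimal NumPy sketch of the linear stretch of Eq 1, the cumulative-histogram equalization of Eq 2, and a simple threshold mask of the kind described above; 8-bit images are assumed, and the function names are illustrative:

```python
import numpy as np

def linear_stretch(img, a, b):
    """Eq 1, linear form: s = a*r + b, clipped to the 8-bit range."""
    return np.clip(a * img.astype(np.float64) + b, 0, 255).astype(np.uint8)

def equalize(img):
    """Histogram equalization: map each pixel through the cumulative histogram (Eq 2)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size            # cumulative probability, 0 to 1
    return np.round(255 * cdf[img]).astype(np.uint8)

def threshold_mask(img, level):
    """Binary mask marking pixels at or above `level`, e.g. for later masking operations."""
    return img >= level
```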
Composite Image Building. The ability to display composite images is useful in comparing the difference between images produced by different modalities, such as a radiograph and an ultrasonic or NMR image. Because image quality suffers greatly when displayed at resolutions of less than 512², it is imperative to have at least a 1024² display or to use two monitors if detailed comparisons are to be made. Figure 4 illustrates how x-ray radiographs and neutron radiographs can be displayed together and combined to enhance voids in a composite material. As shown in Fig. 4(c), the images have simply been added to enhance the detection of voids. The absorption of neutrons in the epoxy is higher than in the alumina, and the reverse is true for the absorption of x-rays. However, the voids do not absorb in either case, and the contrast is therefore improved if the images are added.

Fig. 4 Voids detected in three 60 × 60 × 10 mm (2.4 × 2.4 × 0.4 in.) alumina-filled epoxy tiles using three different imaging techniques. (a) Conventional radiography. (b) Neutron radiography. (c) Combined x-ray and neutron radiographic image. Building a composite image of both modalities indicates that there are voids in the sample. The addition of the two images enhances the voids because of the differential absorption in the matrix between the x-ray and neutron images.

Palette Control. Color can be displayed as a true color map or as a pseudocolor representation. In a pseudocolor representation, the digital data values are mapped to any color that can be produced by the combination of the red, green, and blue (RGB) values specified by a look-up table (LUT). A typical 8-bit pseudocolor system is shown in Fig. 5(a). With only an 8-bit-deep pixel, as shown in Fig. 2, a type of true color is possible; however, each color is represented by only eight or fewer levels (Fig. 5b). With 8 bits per color (as in Fig. 5c), the colors can be shaded continuously and are suitable for three-dimensional displays in which depth cueing is indicated by decreasing the luminance. Eight-bit pixels with 24-bit LUTs can produce good color scenes by adjusting the LUT to include only those colors that appear in a scene. However, the palette is then image specific.

Fig. 5 Concept of a look-up table for three types of color displays. (a) 8-bit pseudocolor representation. (b) 8-bit true color map. (c) 24-bit true color map. The LUT is controlled by the palette, which specifies what color composition (R, G, B) will be assigned to each of 256 levels, as in (a). The digital-to-analog converter (DAC) converts the digital signal to RGB for input to the monitor. Shown in (b) is a way to display true color with an 8-bit pixel. Better true color can be obtained by using a 24-bit pixel, with each byte (8 bits) assigned to a specific primary color, as in (c).

Several types of pseudocolor and linear gray-scale palettes are shown in Fig. 4 in the article "Use of Color for NDE" in this Volume. The spectrum palette (or inverse spectrum) is preferred for range of color, but is not suitable for displaying rapid changes in pixel level. Other palettes, such as the complementary palettes shown in Fig. 4(d) and (e), can be used with good results on figures that have gradual changes in gray level. A typical threshold palette is shown in Fig. 4(b). All values below 192 will appear as a gray scale, while those equal to or above 192 will appear as red. The gray scale is still used to display the lower-intensity data so that the operator can still detect unusual, yet not rejectable, anomalies.
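Two of the operations just described lend themselves to very small sketches: adding two registered images from different modalities to enhance voids, and a threshold palette that maps values below 192 to a gray ramp and values of 192 and above to red. Both are illustrative only, not code from the handbook:

```python
import numpy as np

def add_modalities(img_a, img_b):
    """Composite of two registered 8-bit images (e.g., x-ray and neutron radiographs)."""
    return np.clip(img_a.astype(np.int32) + img_b.astype(np.int32), 0, 255).astype(np.uint8)

def threshold_palette(level=192):
    """256-entry RGB look-up table: gray scale below `level`, pure red at and above it."""
    lut = np.repeat(np.arange(256, dtype=np.uint8)[:, None], 3, axis=1)  # gray ramp
    lut[level:] = (255, 0, 0)
    return lut

def apply_palette(img, lut):
    """Map an 8-bit image through the LUT to obtain an RGB pseudocolor image."""
    return lut[img]
```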
In contrast stretching, when further work on the image is anticipated, the palette (that is, the LUT values) is altered, leaving the pixel values intact.

Color Models and the Use of Color. There are two main color models: the red, green, blue (RGB) model and the hue, saturation, luminance (HSL) model. High-resolution imaging systems use the RGB model, while the National Television Systems Committee has adopted the HSL model for broadcast television. The colors in the RGB model are simply an additive mixture of the Commission Internationale de l'Éclairage monochromatic primary colors (red = 700 nm, or 7000 Å; green = 546.1 nm, or 5461 Å; and blue = 435.8 nm, or 4358 Å). When equal intensities of these colors are combined, a nearly white light results. In the HSL model, the primaries are transformed such that the hue value represents the color (0 to 360°; red = 0 or 360, green = 120, and blue = 240), the saturation is the color intensity (0 to 1), and the luminance is the amount of brightness (0 to 1). If the saturation is 0 and the luminance is 1, the color will be white. The advantage of the HSL model is that a single hue can be manipulated in intensity more easily than with the RGB model.

Although both color and black-and-white images contain the same information, there are features that are much more obvious in color plots than in corresponding black-and-white plots, and vice versa. In particular, a black-and-white representation is appropriate when there is a high spatial frequency inherent in the image; in such a case, the mind-eye combination interprets this as a texture. On the other hand, color is preferred when there are a few isolated objects or low spatial frequency. This makes it easier for the image interpreter to discern small changes in image density through the use of contrast enhancement. Black-and-white images are shown in Fig. 6 and 7, and the color versions are shown in Fig. 6(c) and (d) in the article "Use of Color for NDE" in this Volume.

Fig. 6 Ultrasonic image of a hot isostatically pressed tungsten plate prepared from powder (99% dense). The black-and-white image shows small changes in density that are not easily discernible. The density changes are enhanced by the use of color, as shown in Fig. 6(c) of the article "Use of Color for NDE" in this Volume. The ring is caused by a transition from one thickness to another (the outer thickness was about 50% of the inner circle).

Fig. 7 Tungsten plate similar to the one shown in Fig. 6, except that this part was fabricated from a plate that was rolled rather than hot isostatically pressed. The black-and-white image shows a texture imparted to the material due to the inclusion of 2% porosity. The color version of the plate, Fig. 6(d) in the article "Use of Color for NDE" in this Volume, shows how difficult the texture is to interpret in a color representation using a spectrum palette.

Digital Image Enhancement
T.N. Claytor and M.H. Jones, Los Alamos National Laboratory

Image Operations

Listed in the second column of Table 3 are some of the more important image operations that are routinely used to manipulate images.

Geometric Processes. The scaling, rotation, translation, warping, and registration of an image are classified as geometric processes. The scaling of an image is an important function because it is often used before images are combined. Scaling is used to magnify or shrink the images permanently to fit printer sizes or simply to view areas closely. Rotation and translation are important mainly if two images need to be registered closely and combined or compared. The operations are described below.

The scaling operation can involve interpolation, or it can merely involve a duplication of pixels. In the case of integer scaling (×2, ×3), the pixels are replicated in x and y by the integer. For noninteger scaling (such as magnification by 1.5), every other pixel is replicated. Image interpolation and noninteger scaling should be avoided unless the image is subsequently filtered or unless interpolation and scaling are the last steps in a processing chain before printing. The printer will often perform interpolation and dithering (adding a small random number to the value) to obtain an image with a smoother appearance.
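A minimal sketch of the integer scaling by pixel replication described above; noninteger scaling and interpolation are deliberately left out, as the text advises:

```python
import numpy as np

def integer_scale(img, factor):
    """Magnify an image by an integer factor by replicating each pixel in x and y."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Example: a 512 x 512 image scaled x2 becomes 1024 x 1024 with no interpolation.
```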
[...] a 2 × 2 neighborhood and will enhance high frequencies, while the Sobel filter operates with a 3 × 3 kernel and will produce good (but thick) outlines of images. The Sobel filter algorithm is:

g(x,y) = ({[f(x + 1, y - 1) + 2·f(x + 1, y) + f(x + 1, y + 1)] - [f(x - 1, y - 1) + 2·f(x - 1, y) + f(x - 1, y + 1)]}² + {[f(x - 1, y - 1) + 2·f(x, y - 1) + f(x + 1, y - 1)] - [f(x - 1, y + 1) + 2·f(x, y + 1) + f(x + 1, y + 1)]}²)^(1/2)
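The Sobel expression above translates almost directly into array operations. A minimal sketch (our own transcription, assuming a 2-D gray-scale array; the one-pixel border is simply left at zero):

```python
import numpy as np

def sobel_magnitude(f):
    """Gradient magnitude g(x,y) built from the 3 x 3 Sobel differences given above."""
    f = f.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    # Weighted (1, 2, 1) sums of the rows above and below the center pixel ...
    gx[1:-1, 1:-1] = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]) \
                   - (f[:-2, :-2] + 2 * f[:-2, 1:-1] + f[:-2, 2:])
    # ... and of the columns to the left and right of it
    gy[1:-1, 1:-1] = (f[:-2, :-2] + 2 * f[1:-1, :-2] + f[2:, :-2]) \
                   - (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:])
    return np.hypot(gx, gy)  # square root of the sum of the squared differences
```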
[...] frequency and wavelength. The most popular application of SLAM and C-SAM is the nondestructive evaluation of bonding, delamination, and cracks in materials. These instruments are often used for process and quality control, although a significant percentage of the devices are placed in analytical and failure analysis laboratories. The most popular application of SAM utilizes its very high magnification mode and [...]

[...] display. Printers. Various color and black-and-white printers are available for hard copy. A simple method of hard copy for color and black-and-white is to photograph the CRT screen or a small (80 mm, or 3 in.) very flat screen directly with an 8 × 11 in. or smaller Polaroid-type camera. The disadvantages of this method are the cost of the instant film in the large format and the small size of the image [...]

Acoustic Microscopy
Lawrence W. Kessler, Sonoscan, Inc.

Acoustic Microscopy Applications*

The techniques of SLAM, C-SAM, and SAM all produce quantitative data in addition to images. Acoustic microscopy methods are compared below with a typical C-scan ultrasound method in terms of the frequency employed:

| Method | Frequency range, MHz |
| C-scan ultrasound | 1-10 |
| Acoustic microscopy: C-SAM | 10-100 |
| Acoustic microscopy: SLAM | 10-500 |
| Acoustic microscopy: SAM | 100-2000 |

Fig. 9 Comparison of acoustic microscopy applications with C-scan applications, based [...]

[...] pixels. The standards for video output are RS-170, which specifies 525 lines of interlaced video at a frame rate of 30 Hz, and RS-343-A, which specifies 1023 interlaced lines, also at a 30 Hz frame rate. Most modern monitors have special circuitry that enables the monitor to sync to various standards, with either the sync superimposed on the green signal or as a separate input. Most high-resolution (1280 [...]

| Parameter | SLAM | C-SAM | SAM |
| General description | Utilizes CW, plane-wave ultrasonic illumination of the sample and scanning focused laser beam detection of the ultrasound; simultaneous optical and acoustic images are produced. SLAM produces images in real time, which is the fastest of all acoustic microscopy techniques | High-resolution focused-beam C-scan(a); utilizes pulse-echo mode and has full gray-scale image [...] | [...] |
