Pediatric PET Imaging - part 4 ppt

An easy way to represent the imaging geometry graphically is to denote each projection p_{θ,φ}(u, v) as a dot on a unit sphere, corresponding to the spherical angles θ and φ. This unit sphere is also called Orlov's sphere (89). Plotting the imaging geometry on the Orlov sphere helps to determine whether the necessary and sufficient conditions to faithfully reconstruct f(x) are satisfied. The imaging geometry Ω can be considered complete, and a faithful reconstruction can be obtained by inverting Equation 1, provided Ω intersects every great circle on the Orlov sphere.

Let us now illustrate on the Orlov sphere some of the 3D imaging geometries used in PET as well as in GCPET. The simplest 3D imaging geometry is obtained by using a parallel slat collimator and rotating the GCPET system through 360 degrees around the longitudinal axis of the patient. The data are acquired in 2D mode, and the projection data are rebinned as a set of 2D parallel projections. If we take the longitudinal axis of the patient as the z-axis of the Orlov sphere, this geometry corresponds to an equatorial circle perpendicular to the z-axis, whose center coincides with the center of the sphere, as shown in Figure 10.13A. This equatorial circle is also called a great circle, and the geometry is denoted mathematically as Ω_2π = {(θ, φ) : θ = π/2, φ ∈ [0, 2π)}.

A popular PET geometry is the equatorial band on the Orlov sphere, which can be obtained either by rotating an uncollimated 2D planar GCPET detector around the longitudinal axis of the patient or by using a stationary truncated spherical or cylindrical PET detector. For the GCPET system the data are acquired in fully 3D mode, with the oblique rays treated as parallel-ray projections. As shown in Figure 10.13B, the parallel projections are obtained for the polar angle θ ranging from π/2 − ψ to π/2 + ψ and for the azimuthal angle φ varying from 0 to 2π. Mathematically this geometry is represented as Ω_B(ψ, 0, 2π) = {(θ, φ) : θ ∈ [π/2 − ψ, π/2 + ψ], φ ∈ [0, 2π)}.

Figure 10.13. A: A 2D parallel projection on the Orlov sphere is represented as a dot corresponding to the vector direction of the measured projection. B: The imaging geometry obtained using a GCPET system in the 2D mode for a 360-degree rotation of the gantry around the patient. C: The imaging geometry on the Orlov sphere for a 3D acquisition using a GCPET system; in this case the imaging geometry takes the shape of an equatorial band.

The shape of the imaging geometry helps to establish graphically whether the unknown image f(x) is sampled completely, as well as to determine the shape of the point-spread function (PSF) h(x) used in the backprojection filtering (BPF) algorithm. In the next section, we describe the BPF algorithm used for 3D image reconstruction.

Analytical Reconstruction Algorithm

Backprojection Filtering

The 3D backprojected image b(x) is obtained by simply smearing back the values in the 2D projection data p_{θ,φ}(u, v) along the vector direction −θ̂. This step is repeated for all projections measured along the imaging geometry Ω to give the backprojected image

b(x) = \iint_\Omega p_{\theta,\phi}(u, v)\, \sin\theta\, d\theta\, d\phi, \qquad u = x \cdot \alpha, \quad v = x \cdot \beta,

where x is a point in the 3D image space. The dot products x·α and x·β determine the u and v locations in the projection space that contribute to the point x. This step is repeated for all locations x in the image space to obtain the 3D backprojected image.
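As a concrete illustration of the Orlov-sphere discussion above, the short sketch below checks the completeness condition numerically: an imaging geometry Ω, represented as a finite set of projection directions, satisfies Orlov's condition if every great circle (identified by its pole) passes within a small tolerance of some direction in Ω. This sketch is not taken from the chapter; it is a minimal Python illustration, and the 15-degree acceptance angle, grid sizes, and tolerance are arbitrary choices made for the demonstration.

```python
import numpy as np

def fibonacci_sphere(n):
    """Quasi-uniform unit vectors used as poles of candidate great circles."""
    k = np.arange(n) + 0.5
    polar = np.arccos(1.0 - 2.0 * k / n)
    azim = np.pi * (1.0 + 5.0 ** 0.5) * k
    return np.stack([np.sin(polar) * np.cos(azim),
                     np.sin(polar) * np.sin(azim),
                     np.cos(polar)], axis=1)

def direction_grid(theta_min, theta_max, n_theta=33, n_phi=180):
    """Projection directions with polar angle in [theta_min, theta_max], full azimuth."""
    th = np.linspace(theta_min, theta_max, n_theta)
    ph = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(th, ph, indexing="ij")
    d = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
    return d.reshape(-1, 3)

def satisfies_orlov(directions, n_circles=500, tol=0.02):
    """A great circle with pole n is intersected by Omega iff some direction d has d.n ~ 0."""
    poles = fibonacci_sphere(n_circles)
    min_dot = np.abs(poles @ directions.T).min(axis=1)   # closest approach per great circle
    return bool(np.all(min_dot < tol))

psi = np.deg2rad(15.0)                                    # hypothetical acceptance half-angle
band = direction_grid(np.pi / 2 - psi, np.pi / 2 + psi)   # equatorial band geometry
cap = direction_grid(0.0, np.deg2rad(30.0))               # polar cap: incomplete geometry
print("equatorial band satisfies Orlov:", satisfies_orlov(band))   # True
print("polar cap satisfies Orlov:", satisfies_orlov(cap))          # False
```

For the equatorial-band geometry the check returns True, whereas a polar cap of directions (a deliberately incomplete geometry that misses the equatorial great circle) returns False, mirroring the limited-angle situations discussed later in this section.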
However, the simple backprojection step results in a backprojected image b(x) that is equal to the original image f(x) convolved with a 3D PSF h(x),

b(x) = \iiint f(x')\, h(x - x')\, dx'.

This PSF depends on the imaging geometry and can be expressed as

h(x) = \frac{\chi_\Omega(x/|x|)}{|x|^2}, \qquad \chi_\Omega(x/|x|) = \begin{cases} 1 & \text{if } x/|x| \in \Omega \\ 0 & \text{otherwise.} \end{cases}

Hence, to get the function f(x) back, we need to deconvolve the PSF from the backprojected image, a step also known as BPF. The deconvolution is implemented as a division in the Fourier domain to give f(x) = F⁻¹_3D{F(ν)} = F⁻¹_3D{B(ν)/H(ν)}, where

F(\nu) = \iiint f(x)\, \exp(-i 2\pi\, x \cdot \nu)\, dx,

and likewise for B(ν) and H(ν). Capital letters denote the functions in the Fourier domain, where ν = (ν_x, ν_y, ν_z) in the Cartesian coordinate system. The 3D BPF filter function G(ν) can be written as

G(\nu) = \begin{cases} 1/H(\nu) & \text{if } H(\nu) \neq 0 \\ 0 & \text{otherwise.} \end{cases}

Multiplying the BPF G(ν) with the Fourier transform of the backprojected image B(ν) compensates for the variations in sampling density due to the imaging geometry:

f(x) = F^{-1}_{3D}\{F(\nu)\} = F^{-1}_{3D}\{B(\nu)/H(\nu)\} = F^{-1}_{3D}\{B(\nu)\, G(\nu)\} \quad \text{for all } H(\nu) \neq 0. \qquad (2)

In the above equation, if H(ν) = 0 for any ν, then the imaging geometry does not satisfy Orlov's condition. Such imaging geometries can lead to limited-angle artifacts in the reconstructed image (90).

The 3D Fourier transform of the PSF gives the 3D transfer function H(ν) in the Fourier domain. A 3D illustration of this transfer function shows the sampling density of all frequencies in the Fourier domain. As mentioned by Schorr and Townsend (91), determining the 3D transfer function from the PSF is a complicated and lengthy process and depends largely on the imaging geometry used. Thus, for a given imaging geometry, finding a closed-form solution for H(ν) is nontrivial and has been accomplished for only a few imaging geometries. Previous work on determining H(ν) includes (1) Tanaka (92) for the 4π geometry; (2) Pelc (93), Colsher (94), Ra et al. (95), and Defrise et al. (96) for the equatorial band; (3) Schorr and Townsend (91) for the planar stationary PET detector; (4) Pelc (97), Knutsson et al. (98), Harauz and van Heel (99), Defrise et al. (100), and Wessell (101) for the ectomography case; and (5) Bal et al. (102) for a circular arc on the Orlov sphere. Once the 3D transfer function is determined, finding its inverse to obtain the BPF G(ν) is trivial.

One advantage of an imaging geometry-dependent closed-form expression for H(ν) and G(ν) is the elimination of tedious geometry-dependent numerical integration in Equation 2. Thus, the implementation of the 3D BPF algorithm used to determine the function f(x) is simplified, and accurate results can be obtained. Apart from BPF, another analytical reconstruction algorithm widely used to invert Equation 1 is filtered backprojection (FBP). FBP is preferred over BPF because (1) the filtering and the reconstruction can be carried out simultaneously as the data are being acquired, (2) less computer memory is required to store the filter, and (3) the support of the BPF algorithm is not compact. In the next section, we explain some of the basic principles of FBP.
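Before moving on, note that the BPF step in Equation 2 reduces, in discrete form, to an FFT, a masked division by the sampled transfer function, and an inverse FFT. The sketch below is not from the chapter; it assumes H(ν) has already been evaluated on the FFT grid for the geometry of interest, and the smooth 1/|ν|-like H used in the toy check is only a stand-in, not the closed-form transfer function of any particular geometry.

```python
import numpy as np

def bpf_reconstruct(b, H, eps=1e-6):
    """Backprojection filtering (Eq. 2): F^-1{ B(nu) * G(nu) } with G = 1/H where H != 0."""
    B = np.fft.fftn(b)
    G = np.zeros_like(H)
    nonzero = np.abs(H) > eps
    G[nonzero] = 1.0 / H[nonzero]          # G(nu) = 1/H(nu), zero where H vanishes
    return np.fft.ifftn(B * G).real

# toy check: blur a point source with a fabricated transfer function, then deconvolve
shape = (32, 32, 32)
grids = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
H = 1.0 / (1e-3 + np.sqrt(sum(g ** 2 for g in grids)))   # stand-in 1/|nu|-like H, nowhere zero

f = np.zeros(shape)
f[16, 16, 16] = 1.0                                       # point source
b = np.fft.ifftn(np.fft.fftn(f) * H).real                 # simulated backprojection b = f * h
f_hat = bpf_reconstruct(b, H)
print("recovered peak at", np.unravel_index(np.argmax(f_hat), shape))   # (16, 16, 16)
```

Where H(ν) is zero, that is, where Orlov's condition fails, the corresponding frequencies are simply left at zero, which is exactly the source of the limited-angle artifacts mentioned above.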
Filtered Backprojection (FBP)

The relationship shown in Equation 1 can also be written in the Fourier domain using the central section theorem (CST) (103) as P_{θ,φ}(ν_u, ν_v) = F(ν_u α + ν_v β). This means that the 2D Fourier transform of the projection p_{θ,φ}(u, v) is the same as a planar slice through the 3D Fourier transform of the unknown function f(x) (Fig. 10.14). This 2D plane is perpendicular to the unit vector θ̂ and passes through the origin of the 3D function F(ν). In other words, each projection p_{θ,φ}(u, v) contains information corresponding to certain frequencies of the 3D function (103,104). Hence, a set of projections that satisfies Orlov's condition is needed to sample all the 3D frequencies of the function F(ν).

Figure 10.14. The central section theorem shows that the 2D Fourier transform of the projection data, in a certain direction, corresponds to a slice through the 3D Fourier transform of the 3D image.

In the above example, four variables are required to define the projection data p_{θ,φ}(u, v) obtained from a 3D image, whereas only two variables are sufficient to define the measured projections of a 2D object. For the 3D case, if the imaging geometry does not satisfy Orlov's condition, the reconstructed image will contain limited-angle artifacts. On the other hand, if the imaging geometry oversamples certain frequencies and satisfies Orlov's condition, then an infinite number of valid filters can be determined. In this chapter, we determine the "optimal" factorizable filter obtained under the assumption that all projections have the same noise level. This optimal 2D FBP filter is determined by taking central sections through the inverse of the transfer function H(ν).

In FBP, the 2D projection data p_{θ,φ}(u, v) are first convolved with a 2D filter g_{θ,φ}(u, v) and then backprojected onto a 3D matrix. The 2D filter for each projection depends on the direction θ̂ of the measured projection and the imaging geometry Ω of the system. Hence, a set of 2D FBP filters corresponding to the angles at which the projection data were measured is determined. The 2D filters are then convolved with the corresponding projections to obtain the set of filtered projections. The convolution operation in the spatial domain can be replaced by a multiplication in the Fourier domain. In the Fourier domain, the 2D filter Q_{θ,φ}(ν_u, ν_v) is obtained by taking the central section through the 3D filter G(ν) along a plane normal to θ̂, given by Q_{θ,φ}(ν_u, ν_v) = G(ν_u α + ν_v β). The inverse Fourier transform of the product of this 2D filter and the 2D Fourier transform of the projection image gives the filtered projection in that direction, represented as p*_{θ,φ}(u, v) = F⁻¹_2D{P_{θ,φ}(ν_u, ν_v) × Q_{θ,φ}(ν_u, ν_v)}.

Hence, in this method, the 2D filter "precorrects" the measured projections for the blurring caused by the imaging geometry-dependent PSF. The filtered projections are then backprojected along the imaging geometry to give the 3D reconstructed image

f(x) = \iint_\Omega p^*_{\theta,\phi}(u, v)\, \sin\theta\, d\theta\, d\phi, \qquad u = x \cdot \alpha, \quad v = x \cdot \beta.

A fully 3D reprojection algorithm (87) based on this principle of filtered backprojection was developed for GCPET and is widely used for image reconstruction (60).
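The per-projection filtering step of FBP can be sketched in a few lines: build Q_{θ,φ} by sampling a 3D filter on the central section spanned by the detector axes α and β, multiply in the 2D Fourier domain, and inverse transform. In the hedged sketch below the 3D filter is a plain |ν| ramp rather than the geometry-matched optimal filter described above, the detector-axis convention used for α and β is an assumption, and the projection is random noise standing in for measured data.

```python
import numpy as np

def detector_axes(theta, phi):
    """One possible orthonormal pair (alpha, beta) perpendicular to the projection direction."""
    alpha = np.array([-np.sin(phi), np.cos(phi), 0.0])
    beta = np.array([-np.cos(theta) * np.cos(phi),
                     -np.cos(theta) * np.sin(phi),
                      np.sin(theta)])
    return alpha, beta

def central_section(G3d, alpha, beta, nu_u, nu_v):
    """Q_{theta,phi}(nu_u, nu_v) = G(nu_u * alpha + nu_v * beta)."""
    nu = nu_u[..., None] * alpha + nu_v[..., None] * beta
    return G3d(nu)

def filter_projection(p, Q):
    """p* = F^-1_2D{ P(nu_u, nu_v) * Q(nu_u, nu_v) }."""
    return np.fft.ifft2(np.fft.fft2(p) * Q).real

ramp = lambda nu: np.linalg.norm(nu, axis=-1)     # stand-in 3D filter, NOT the optimal G(nu)

n = 128
nu_1d = np.fft.fftfreq(n)
nu_u, nu_v = np.meshgrid(nu_1d, nu_1d, indexing="ij")

theta, phi = np.pi / 2, 0.0                       # one transaxial projection direction
alpha, beta = detector_axes(theta, phi)
p = np.random.default_rng(0).poisson(10.0, size=(n, n)).astype(float)  # fake projection

Q = central_section(ramp, alpha, beta, nu_u, nu_v)
p_star = filter_projection(p, Q)                  # filtered projection, ready to backproject
print(p_star.shape, p_star.dtype)
```

In a full implementation, p* for every measured (θ, φ) would then be backprojected along the imaging geometry exactly as in the equation above.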
Some of the advantages of analytical reconstruction algorithms are (1) increased accuracy, (2) ease of implementation, and (3) reduced computational effort, which allows the 3D volume to be reconstructed in a very short period of time.

Iterative Reconstruction Algorithm

The speed and simplicity of analytical reconstruction have made it the method of choice for clinical applications. However, with analytic methods it is difficult to model and compensate for the numerous image degradation factors such as scatter, spatially variant sensitivity, and an asymmetric point-spread function. Iterative algorithms, on the other hand, can compensate for these degradation factors better than analytic algorithms. Yet iterative algorithms were not widely used in the past because of limitations in computing power. With the increasing capabilities of modern computers, the development and use of iterative algorithms for image reconstruction is becoming increasingly popular.

Maximum Likelihood Expectation Maximization Algorithm

In GCPET, iterative reconstruction algorithms based on statistical properties, such as maximum likelihood expectation maximization (MLEM) (105,106), OSEM (107,108), conjugate gradient (109), and COSEM (70), are widely used. Statistical methods try to find the most probable value of the image vector F for the measured projection P. For example, the MLEM algorithm was designed to maximize the posterior probability of the reconstructed image for given projection data with Poisson statistics, and the iterative expectation maximization (EM) procedure of the MLEM algorithm maximizes the log-likelihood function with respect to F. Thus the log-likelihood function increases with each iteration, and hence the EM algorithm always converges to a more likely solution. Mathematically, the MLEM algorithm is written as

f_i^{\mathrm{new}} = \frac{f_i^{\mathrm{old}}}{\sum_j a_{ji}} \sum_l a_{li} \frac{p_l}{\sum_k a_{lk}\, f_k^{\mathrm{old}}},

where f_i^new and f_i^old are the current (updated) and previous estimates of the image. The summations over j and l run over all the bins for all projection angles and constitute the backprojection, whereas the summation over k is the projection of the previous image estimate. The element a_ji corresponds to the probability that a photon emitted by the ith pixel will be detected in the jth bin (i.e., a_ji is an element of the transfer or projection matrix A, while its transpose A^T is a backprojection matrix). The algorithm converges, that is, f_i^new = f_i^old, when p_l = Σ_k a_lk f_k^old for every bin l. If the initial estimate and the transfer matrix are nonnegative, then the final image is nonnegative. Further, because it is easy to model the image degradation factors in the transfer matrix, the images obtained using MLEM can potentially be better than those obtained using analytical algorithms such as FBP. Thus, the MLEM algorithm is capable of reconstructing images with a decent degree of quantitative accuracy and hence is preferable for clinical applications. However, the MLEM algorithm is extremely slow and requires many iterations to reconstruct the original image. To address this problem, a variation of MLEM called OSEM is routinely used for clinical applications. OSEM is very similar to MLEM, except that the projection data are ordered into subsets and the image is updated after going through every projection in a subset:

f_i^{\mathrm{new}} = \frac{f_i^{\mathrm{old}}}{\sum_{j \in S_n} a_{ji}} \sum_{l \in S_n} a_{li} \frac{p_l}{\sum_k a_{lk}\, f_k^{\mathrm{old}}},

where S_n is the nth subset of the projection data.
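A compact numerical sketch of the two update rules above is given below; it uses a tiny dense system matrix and synthetic Poisson data purely for illustration (real GCPET system matrices are huge and sparse, and would also carry the degradation factors mentioned earlier).

```python
import numpy as np

def mlem_update(f, A, p, eps=1e-12):
    """One MLEM iteration: f_i <- f_i / sum_j a_ji * sum_l a_li * p_l / (A f)_l."""
    ratio = p / np.maximum(A @ f, eps)            # measured / estimated counts per bin
    return f / np.maximum(A.sum(axis=0), eps) * (A.T @ ratio)

def osem_update(f, A, p, subsets, eps=1e-12):
    """One OSEM iteration: the same update applied subset by subset (S_n = subsets[n])."""
    for S in subsets:
        A_s, p_s = A[S], p[S]
        ratio = p_s / np.maximum(A_s @ f, eps)
        f = f / np.maximum(A_s.sum(axis=0), eps) * (A_s.T @ ratio)
    return f

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(8, 4))            # toy system matrix a_ji (8 bins, 4 voxels)
f_true = np.array([4.0, 0.5, 2.0, 1.0])
p = rng.poisson(A @ f_true).astype(float)         # synthetic Poisson projection data

f_ml = np.ones(4)                                 # nonnegative initial estimate
for _ in range(200):
    f_ml = mlem_update(f_ml, A, p)

f_os = np.ones(4)
subsets = [np.arange(0, 8, 2), np.arange(1, 8, 2)]   # two interleaved subsets
for _ in range(100):
    f_os = osem_update(f_os, A, p, subsets)

print("MLEM estimate:", np.round(f_ml, 2))
print("OSEM estimate:", np.round(f_os, 2))
```

With two subsets the OSEM estimate reaches a comparable solution in roughly half the number of passes over the data, which is the acceleration discussed next.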
During reconstruction, the image is updated after using all the projection bins in a subset; that is, the image estimate is updated multiple times within one iteration, depending on the number of subsets used. These multiple updates in turn accelerate the convergence of the OSEM algorithm by a factor proportional to the number of subsets used (107). A detailed study comparing OSEM and FBP reconstruction for dual-head coincidence imaging was performed by Gutman et al. (110). They observed that although the OSEM-reconstructed images showed better visual quality, the overall detectability of lung nodules with the two methods was similar for a large set of patient studies.

Summarizing the above discussion, the five main steps of an iterative algorithm are (1) start with an initial estimate of the image to be reconstructed, (2) simulate a measurement using the image estimate from step 1, (3) compare the original measurement with the simulated measurement, (4) update the image estimate based on the comparison in step 3, and (5) repeat steps 2, 3, and 4 until the image converges, for some predetermined number of iterations, or until some other stopping criterion is reached.

List-Mode Reconstruction

The measured data obtained using a GCPET system can be either rebinned or stored as list-mode data. In list-mode format each coincidence event is stored sequentially, and each stored event contains the detection position on both detectors as well as the energy information of the two photons. In routine GCPET scans, the acquired number of coincidence events is typically about 20 × 10^3 counts per second. Because GCPET systems have a larger axial aperture than dedicated PET, rebinning the sparse data into a large set of 2D projections over a large number of azimuthal and polar angles results in a huge number of mostly empty bins. In such cases, it is advantageous to save the data in list-mode format (70,71,111). To reconstruct these data without rebinning them during reconstruction, a maximum likelihood expectation maximization (MLEM)-based list-mode reconstruction approach has been developed (70,111–114). The MLEM list-mode algorithm is given by

f_l^{(t+1)} = f_l^{(t)} \sum_{j=1}^{N} \frac{p(A_j \mid l)}{T \sum_{i=1}^{M} p(A_j \mid i)\, s_i\, f_i^{(t)}},

where f_i^{(t)} is the expected number of photons emitted from source voxel i per unit time and t is the iteration number. The total acquisition time is denoted by T, the total number of measured LORs is equal to N, and the number of voxels is equal to M; p(A_j | l) is the probability that a detected event from voxel l leads to a measurement in LOR j, whereas the term

\sum_{i=1}^{M} p(A_j \mid i)\, s_i\, f_i^{(t)}

is the forward projector that calculates the value that will be measured at LOR j for a distribution f_i^{(t)} and sensitivity s_i. Various modifications of the above MLEM-based list-mode algorithm have been proposed and used clinically (70,111,114).

As shown in Figure 10.15, small improvements in resolution and contrast were observed for the list-mode reconstructed images compared with FBP and MLEM reconstructions of single-slice rebinned data. The patient data were obtained using axial collimation with a maximum acceptance angle of 9 degrees (114).
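The sketch below applies the list-mode update exactly as written above, with the per-event probabilities p(A_j | i) collected in a small matrix. The LOR models, sensitivities, and acquisition time are invented for the toy example, and the placement of T and s_i follows the expression given in the text.

```python
import numpy as np

def listmode_mlem_update(f, P_events, s, T, eps=1e-12):
    """One list-mode MLEM update for an emission-rate image f (counts per voxel per unit time).
    P_events[j, i] = p(A_j | i) for the j-th *recorded* event; s = voxel sensitivities;
    T = total acquisition time."""
    expected = T * (P_events @ (s * f))                  # forward projector per event LOR
    back = P_events.T @ (1.0 / np.maximum(expected, eps))
    return f * back

rng = np.random.default_rng(2)
lor_models = rng.dirichlet(np.ones(3), size=5)           # 5 hypothetical LORs: rows of p(A|i)
event_lors = rng.integers(0, 5, size=40)                 # the LOR hit by each of 40 events
P_events = lor_models[event_lors]                        # (40, 3) per-event probabilities
s = np.array([0.9, 1.0, 0.8])                            # assumed voxel sensitivities
T = 10.0                                                 # assumed acquisition time

f = np.ones(3)                                           # initial rate estimate
for _ in range(50):
    f = listmode_mlem_update(f, P_events, s, T)
print("list-mode rate estimate:", np.round(f, 3))
```

Because only recorded events are iterated over, no empty projection bins are ever stored or visited, which is the practical advantage highlighted above.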
Figure 10.15. A: Resolution (FWHM, in mm) of the reconstructed image along the radial, transaxial, and axial directions for 3D list-mode, SSRB + FBP, and SSRB + MLEM reconstruction. B: Different coronal slices through a patient data set after 20 iterations of SSRB + MLEM and 20 iterations of list-mode reconstruction.

Commercial GCPET Systems

Table 10.5 lists features provided by various GCPET manufacturers over the years (28). Though this list is not exhaustive and is constantly being updated by the manufacturers, it serves as a good starting point for understanding the various hardware and software modifications that went into the design of GCPET systems.

Table 10.5. Photo-peak detection efficiency versus crystal thickness in GCPET

Crystal thickness (mm)   201Tl, 70 keV (%)   99mTc, 140 keV (%)   67Ga, 300 keV (%)   18F, 511 keV (%)
9.5                      100                 84                  33                  13
12.7                     100                 91                  41                  17
15.9                     100                 95                  48                  21
19.1                     100                 98                  54                  24

Source: Data from Patton and Turkington (6), with permission of the Society of Nuclear Medicine.

References

1. Anger HO, Gottschalk A. Localization of brain tumors with the positron scintillation camera. J Nucl Med 1963;77:326–330.
2. Anger HO. Scintillation camera. Rev Sci Instrum 1958;29:27–33.
3. Muehllehner G. Positron camera with extended counting rate capability. J Nucl Med 1975;16(7):653–657.
4. Muehllehner G, Buchin MP, Dudek JH. Performance parameters of a positron imaging camera. IEEE Trans Nucl Sci 1976;23(1):528–537.
5. Jarritt PH, Acton PD. PET imaging using gamma camera systems: a review. Nucl Med Commun 1996;17(9):758–766.
6. Patton JA, Turkington TG. Coincidence imaging with a dual-head scintillation camera. J Nucl Med 1999;40(3):432–441.
7. Karp JS, Muehllehner G, Mankoff DA, et al. Continuous-slice PENN-PET: a positron tomograph with volume imaging capability. J Nucl Med 1990;31(5):617–627.
8. Smith RJ, Karp JS, Muehllehner G. The count rate performance of the volume imaging PENN-PET scanner. IEEE Trans Med Imag 1994;13(4):610–618.
9. Miyaoka RS, Lewellen TK, Kim JS, et al. Performance of a dual headed SPECT system modified for coincidence. Proc IEEE Nucl Sci Symp 1995;3:1348–1352.
10. Nellemann P, Hines H, Braymer W, Muehllehner G, Geagan M. Performance characteristics of a dual head SPECT scanner with PET. In: Proceedings of the IEEE Nuclear Science Symposium, San Francisco, 1995:1751–1755.
11. Coleman RE. Camera-based PET: the best is yet to come. J Nucl Med 1997;38(11):1796–1797.
12. Leichner PK, Morgan HT, Holdeman KP, et al. SPECT imaging of fluorine-18. J Nucl Med 1995;36(8):1472–1475.
13. Martin WH, Delbeke D, Patton JA, et al. FDG-SPECT: correlation with FDG-PET. J Nucl Med 1995;36(6):988–995.
14. Burt R. Dual isotope F-18 FDG and Tc-99m RBC imaging for lung cancer. Clin Nucl Med 1998;23(12):807–809.
15. Chen EQ, MacIntyre WJ, Go RT, et al. Myocardial viability studies using fluorine-18–FDG SPECT: a comparison with fluorine-18–FDG PET. J Nucl Med 1997;38(4):582–586.
16. Bax JJ, Cornel JH, Visser FC, et al. Prediction of recovery of myocardial dysfunction after revascularization. Comparison of fluorine-18 fluorodeoxyglucose/thallium-201 SPECT, thallium-201 stress-reinjection SPECT and dobutamine echocardiography. J Am Coll Cardiol 1996;28(3):558–564.
17. Bax JJ, Visser FC, Blanksma PK, et al.
Comparison of myocardial uptake of fluorine-18–fluorodeoxyglucose imaged with PET and SPECT in dyssynergic myocardium. J Nucl Med 1996;37(10):1631–1636.
18. Matsunari I, Yoneyama T, Kanayama S, et al. Phantom studies for estimation of defect size on cardiac (18)F SPECT and PET: implications for myocardial viability assessment. J Nucl Med 2001;42(10):1579–1585.
19. Srinivasan G, Kitsiou AN, Bacharach SL, Bartlett ML, Miller-Davis C, Dilsizian V. [18F]fluorodeoxyglucose single photon emission computed tomography: can it replace PET and thallium SPECT for the assessment of myocardial viability? Circulation 1998;97(9):843–850.
20. Martin WH, Delbeke D, Patton JA, Sandler MP. Detection of malignancies with SPECT versus PET, with 2-[fluorine-18]fluoro-2-deoxy-D-glucose. Radiology 1996;198(1):225–231.
21. Martin WH, Jones RC, Delbeke D, Sandler MP. A simplified intravenous glucose loading protocol for fluorine-18 fluorodeoxyglucose cardiac single-photon emission tomography. Eur J Nucl Med 1997;24(10):1291–1297.
22. Delbeke D, Videlefsky S, Patton JA, et al. Rest myocardial perfusion/metabolism imaging using simultaneous dual-isotope acquisition SPECT with technetium-99m-MIBI/fluorine-18–FDG. J Nucl Med 1995;36(11):2110–2119.
23. Sandler MP, Videlefsky S, Delbeke D, et al. Evaluation of myocardial ischemia using a rest metabolism/stress perfusion protocol with fluorine-18 deoxyglucose/technetium-99m MIBI and dual-isotope simultaneous-acquisition single-photon emission computed tomography. J Am Coll Cardiol 1995;26(4):870–878.
24. Burt RW, Perkins OW, Oppenheim BE, et al. Direct comparison of fluorine-18–FDG SPECT, fluorine-18–FDG PET and rest thallium-201 SPECT for detection of myocardial viability. J Nucl Med 1995;36(2):176–179.
25. Zeng GL, Gullberg GT, Bai C, et al. Iterative reconstruction of fluorine-18 SPECT using geometric point response correction. J Nucl Med 1998;39(1):124–130.
26. DiBella EVR, Gullberg GT, Ross SG, Christian PE. Compartmental modeling of 18FDG in the heart using. In: Proceedings of IEEE Nuclear Science Symposium, Albuquerque, NM, 1997:1460–1463.
27. Muehllehner G. Effect of crystal thickness on scintillation camera performance. J Nucl Med 1979;20(9):992–993.
28. Fleming JS, Hillel P. Basics of Gamma Camera Positron Emission Tomography. New York: Institute of Physics and Engineering in Medicine, 2004.
29. Sossi V, Morin O, Celler A, Rempel TD, Belzberg A, Carhart C. PET and SPECT performance of the Siemens HD3 e.cam duet: a 1-inch NaI hybrid camera. IEEE Trans Nucl Sci 2003;50(5):1504–1509.
30. Patton JA, Delbeke D, Sandler MP. Image fusion using an integrated, dual-head coincidence camera with x-ray tube-based attenuation maps. J Nucl Med 2000;41(8):1364–1368.
31. Tarantola G, Zito F, Gerundini P. PET instrumentation and reconstruction algorithms in whole-body applications. J Nucl Med 2003;44(5):756–769.
32. Schmand M, Dahlbom M, Eriksson L, et al. Performance of a LSO/NaI(Tl) phoswich detector for a combined PET/SPECT imaging system. J Nucl Med 1998;39(5):9P.
33. Dahlbom M, MacDonald LR, Eriksson L, et al. Performance of a YSO/LSO phoswich detector for use in a PET/SPECT. IEEE Trans Nucl Sci 1997;44(3):1114–1119.
34. Dahlbom M, MacDonald LR, Schmand M, Eriksson L, Andreaco M, Williams C. A YSO/LSO phoswich array detector for single and coincidence photon. IEEE Trans Nucl Sci 1998;45(3):1128–1132.
35. Swan WL. Exact rotational weights for coincidence imaging with a continuously rotating dual-headed gamma camera.
IEEE Trans Nucl Sci 2000;47(4):1660–1664.
36. Lewellen TK, Miyaoka RS, Jansen F, Kaplan MS. A data acquisition system for coincidence imaging using a conventional dual head gamma camera. IEEE Trans Nucl Sci 1997;44(3):1214–1218.
37. Matthews CG. Triple-head coincidence imaging. In: Proceedings of IEEE Nuclear Science Symposium, Seattle, 2000:907–909.
38. Soares EJ, Germino KW, Glick SJ, Stodilka RZ. Determination of three-dimensional voxel sensitivity for two- and three-headed coincidence imaging. IEEE Trans Nucl Sci 2003;50(3):405–412.
39. D'Asseler Y, Vandenberghe S, Matthews CG, et al. Three-dimensional geometric sensitivity calculation for ... IEEE Trans Nucl Sci 2001;48(4):1451.
40. D'Asseler Y, Vandenberghe S, Matthews CG, et al. Three-dimensional geometric sensitivity calculation for three-headed coincidence imaging. IEEE Trans Nucl Sci 2001;48(4):1446–1451.
41. Swan WL. Exact rotational weights for coincidence imaging with a continuously rotating dual-headed gamma camera. IEEE Trans Nucl Sci 2000;47(4):1660–1664.
Rohren et al (20) no longer recommend routine FDG -PET imaging of the brain in patients undergoing staging with whole-body PET Instead, they recommend anatomic imaging when indicated Many of the studies of FDG -PET for brain tumors include pediatric cases together with adult cases (6,8,20–27) Comparatively few studies focus exclusively on the evaluation of pediatric brain tumors (Table 11.1) Examples of... outcome, 94% (16 of 17) of patients with stable disease had no increase in FDG uptake, but all seven patients with progressive disease had increased uptake Overall, 96% (23 of 24) of patients had clinical outcomes that correlated with FDG activity Amino Acid PET MET -PET has also been explored for predicting the prognosis of primary brain tumors In a study of 54 low- and high-grade gliomas, a tumor-to-mean... Lewittt RM, Muehllehner G, Karpt JS Three-dimensional image reconstruction for PET by multi-slice rebinning and axial image filtering Phys Med Biol 19 94; 39(3):321–339 86 Matej S, Lewitt RM 3D-FRP: Direct Fourier reconstruction with Fourier reprojection for fully 3–D PET IEEE Trans Nucl Sci 2001 ;48 (4) :1378–1385 87 Kinahan PE, Rogers JG, Harrop R, Johnson RR Three-dimensional image reconstruction in object . Coincidence imaging with a dual-head scintil- lation camera. J Nucl Med 1999 ;40 (3) :43 2 44 1. 7. Karp JS, Muehllehner G, Mankof FD, et al. Continuous-slice PENN -PET: a positron tomograph with volume imaging. calculation for three-headed coincidence imaging. IEEE Trans Nucl Sci 2001 ;48 (4) : 144 6– 145 1. 41 . Swan WL. Exact rotational weights for coincidence imaging with a con- tinuously rotating dual-headed gamma. 20 04; 49( 24) : 541 9– 543 2. 53. Joung J, Miyaoka RS, Kohlmyer SG, Harrison RL, Lewellen TK. Slat col- limator design issues for dual-head coincidence Imaging systems. IEEE Trans Nucl Sci 2002 ;49 (1): 141 – 146 . 54.
