Feasibility of direct brain 18F-fluorodeoxyglucose-positron emission tomography attenuation and high-resolution correction methods using deep learning

Document Type: Original Article


1 Graduate School of Health Sciences, Kumamoto University, Japan

2 Kumamoto University Hospital, Japan

3 Department of Central Radiology, Kumamoto University Hospital, Japan

4 Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, Japan

5 Department of Information and Communication Technology, Faculty of Engineering, University of Miyazaki, Japan

6 Department of Medical Radiation Sciences, Faculty of Life Sciences, Kumamoto University, Japan



Objective(s): To develop the following three attenuation correction (AC) methods for brain 18F-fluorodeoxyglucose-positron emission tomography (PET), using deep learning, and to ascertain their precision levels: (i) indirect method; (ii) direct method; and (iii) direct and high-resolution correction (direct+HRC) method.
Methods: We included 53 patients who underwent cranial magnetic resonance imaging (MRI) and computed tomography (CT), and 27 patients who underwent cranial MRI, CT, and PET. After fusion of the MR, CT, and PET images, resampling was performed to standardize the field of view and matrix size and to prepare the data set. In the indirect method, attenuation correction was performed using synthetic CT (SCT) images, generated from MR images with a U-net, in place of CT images. In the direct and direct+HRC methods, AC images were generated directly from non-AC images using a U-net, followed by image evaluation. The precision of the AC images generated by these methods was compared against CT-based attenuation correction using the normalized mean squared error (NMSE) and structural similarity (SSIM).
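The NMSE and SSIM used to evaluate the generated AC images against the CT-based reference can be sketched as follows. This is a minimal illustration assuming the standard definitions (NMSE as the squared error normalized by the reference energy, and a single-window "global" SSIM; the published SSIM is typically averaged over local windows, e.g. via scikit-image). The function names are ours, not the authors'.

```python
import numpy as np

def nmse(pred, ref):
    """NMSE between a generated AC image and the CT-AC reference image."""
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def global_ssim(pred, ref, data_range=None, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; full SSIM averages this over local windows."""
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_p, mu_r = pred.mean(), ref.mean()
    var_p, var_r = pred.var(), ref.var()
    cov = ((pred - mu_p) * (ref - mu_r)).mean()
    return ((2 * mu_p * mu_r + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_r ** 2 + c1) * (var_p + var_r + c2))
```

Identical images give NMSE = 0 and SSIM = 1; lower NMSE and higher SSIM indicate closer agreement with CT-based attenuation correction.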
Results: Visual inspection revealed no difference between the AC images prepared using CT-based attenuation correction and those prepared using the three methods. The NMSE increased in the order of the indirect, direct, and direct+HRC methods, with values of 0.281×10⁻³, 4.62×10⁻³, and 12.7×10⁻³, respectively. Moreover, the SSIM of the direct+HRC method was 0.975.
Conclusion: The direct+HRC method enables accurate attenuation correction without CT exposure, as well as high-resolution correction without dedicated correction programs.

