To study radiotherapy-related adverse effects, detailed dose information (3D distribution) is needed for accurate dose-effect modeling. For childhood cancer survivors who underwent radiotherapy in the pre-CT era, only 2D radiographs were acquired, so 3D dose distributions must be reconstructed from limited information. State-of-the-art methods achieve this by using 3D surrogate anatomies; these can, however, lack personalization and lead to coarse reconstructions. We present and validate a surrogate-free dose reconstruction method based on machine learning (ML). Abdominal planning CTs (n = 142) of recently treated childhood cancer patients were gathered, their organs at risk were segmented, and 300 artificial Wilms' tumor plans were sampled automatically. Each artificial plan was automatically emulated on the 142 CTs, resulting in 42,600 3D dose distributions from which dose-volume metrics were derived. Anatomical features were extracted from digitally reconstructed radiographs simulated from the CTs to resemble historical radiographs. Further, patient and radiotherapy plan features typically available from historical treatment records were collected. An evolutionary ML algorithm was then used to link features to dose-volume metrics. Besides 5-fold cross-validation, a further evaluation was done on an independent dataset of five CTs, each associated with two clinical plans. Cross-validation resulted in mean absolute errors ≤0.6 Gy for organs completely inside or outside the field. For organs positioned at the edge of the field, mean absolute errors ≤1.7 Gy for Dmean, ≤2.9 Gy for D2cc, and ≤13% for V5Gy and V10Gy were obtained, without systematic bias. Similar results were found for the independent dataset.
To conclude, we proposed a novel organ dose reconstruction method that uses ML models to predict dose-volume metric values given patient and plan features. Our approach is not only accurate, but also efficient, as the setup of a surrogate is no longer needed.
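The evaluation scheme described in this abstract (k-fold cross-validation reporting mean absolute error on predicted dose-volume metrics) can be sketched as follows. This is a minimal stand-in using synthetic data: an ordinary least-squares model takes the place of the evolutionary ML algorithm actually used in the paper, and all feature/target names are illustrative.

```python
import numpy as np

def kfold_mae(X, y, k=5, seed=0):
    """k-fold cross-validated mean absolute error (MAE).

    An ordinary least-squares model stands in for the paper's
    evolutionary ML algorithm; the evaluation scheme is the same."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    maes = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # fit y ~ X @ w + b on the training folds
        A = np.column_stack([X[train], np.ones(len(train))])
        w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([X[test], np.ones(len(test))]) @ w
        maes.append(np.abs(pred - y[test]).mean())
    return float(np.mean(maes))

# synthetic stand-in data: 4 patient/plan features -> Dmean (Gy)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 10.0 + X @ np.array([1.0, -0.5, 0.3, 0.2]) + rng.normal(scale=0.2, size=200)
print(f"5-fold CV MAE: {kfold_mae(X, y):.2f} Gy")
```

In practice each dose-volume metric (Dmean, D2cc, V5Gy, ...) would get its own model and its own cross-validated error estimate.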
Virgolin, M.; Wang, Z.Y.; Alderliesten, T.; Bosman, P.A.N. 2020
Purpose: Current phantoms used for the dose reconstruction of long-term childhood cancer survivors lack individualization. We design a method to predict highly individualized abdominal three-dimensional (3-D) phantoms automatically. Approach: We train machine learning (ML) models to map two-dimensional (2-D) patient features to 3-D organ-at-risk (OAR) metrics upon a database of 60 pediatric abdominal computed tomographies with liver and spleen segmentations. Next, we use the models in an automatic pipeline that outputs a personalized phantom given the patient's features, by assembling 3-D imaging from the database. A step to improve phantom realism (i.e., avoid OAR overlap) is included. We compare five ML algorithms in terms of predicting OAR left-right (LR), anterior-posterior (AP), and inferior-superior (IS) positions, and surface Dice-Sørensen coefficient (sDSC). Furthermore, two existing human-designed phantom construction criteria and two additional control methods are investigated for comparison. Results: Different ML algorithms result in similar test mean absolute errors: ~8 mm for liver LR, IS, and spleen AP, IS; ~5 mm for liver AP and spleen LR; ~80% for abdomen sDSC; and ~60% to 65% for liver and spleen sDSC. One ML algorithm (GP-GOMEA) performs significantly best on 6/9 metrics. The control methods, and the human-designed criteria in particular, generally perform worse, sometimes substantially (+5-mm error for spleen IS, -10% sDSC for liver). The automatic step to improve realism generally results in limited loss of metric accuracy, but fails in one case (out of 60). Conclusion: Our ML-based pipeline leads to phantoms that are significantly and substantially more individualized than currently used human-designed criteria.
(C) 2020 Society of Photo-Optical Instrumentation Engineers (SPIE)
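The "realism step" mentioned above, which repairs overlapping OARs in the assembled phantom, can be illustrated with a toy resolution strategy: translate one organ mask voxel by voxel along one axis until the overlap disappears. This is an assumption-laden sketch, not the paper's actual procedure; the masks, the axis convention, and the shift-until-disjoint rule are all illustrative.

```python
import numpy as np

def resolve_overlap(mask_a, mask_b, axis=2, max_shift=20):
    """Toy realism step: if two binary organ masks overlap, shift
    mask_b one voxel at a time along `axis` until they are disjoint.
    Returns (shifted_mask, shift) or (None, None) on failure.
    Note: np.roll wraps around; in this illustration the shifts
    stay well inside the grid, so no wrap-around occurs."""
    shifted = mask_b
    for shift in range(max_shift + 1):
        if not np.any(mask_a & shifted):
            return shifted, shift
        shifted = np.roll(mask_b, shift + 1, axis=axis)
    return None, None

# two synthetic "organ" masks that overlap along the IS axis
grid = (10, 10, 30)
liver = np.zeros(grid, dtype=bool); liver[:, :, 5:15] = True
spleen = np.zeros(grid, dtype=bool); spleen[:, :, 10:20] = True
fixed, shift = resolve_overlap(liver, spleen)
print(f"resolved with a {shift}-voxel shift")
```

A real implementation would additionally have to keep the shifted organ anatomically plausible, which is presumably why the paper reports one failure case out of 60.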
Wang, Z.Y.; Virgolin, M.; Bosman, P.A.N.; Crama, K.F.; Balgobind, B.V.; Bel, A.; Alderliesten, T. 2020
Performing large-scale three-dimensional radiation dose reconstruction for patients requires a large amount of manual work. We present an image processing-based pipeline to automatically reconstruct radiation dose. The pipeline was designed for childhood cancer survivors who received abdominal radiotherapy with an anterior-to-posterior and posterior-to-anterior field set-up. First, anatomical landmarks are automatically identified on two-dimensional radiographs. Second, these landmarks are used to derive parameters to emulate the geometry of the plan on a surrogate computed tomography. Finally, the plan is emulated and used as input for dose calculation. For qualitative evaluation, 100 cases of automatic and manual plan emulations were assessed by two experienced radiation dosimetrists in a blinded comparison. The two radiation dosimetrists approved 100%/100% and 92%/91% of the automatic/manual plan emulations, respectively. Similar approval rates of 100% and 94% held when the automatic pipeline was applied to another 50 cases. Further, quantitative comparisons resulted in, on average, <5 mm difference in plan isocenter/borders and <0.9 Gy difference in organ mean dose (prescribed dose: 14.4 Gy) calculated from the automatic and manual plan emulations. No statistically significant difference in dose reconstruction accuracy was found for most organs at risk. Ultimately, our automatic pipeline's results are of sufficient quality to enable effortless scaling of dose reconstruction data generation. (C) 2020 Society of Photo-Optical Instrumentation Engineers (SPIE)
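The second step of this pipeline, deriving plan geometry parameters from landmarks detected on a radiograph, could look roughly like the sketch below. The function name, the midpoint-isocenter convention, and the pixel spacing are illustrative assumptions; the paper's actual parameter derivation is more involved.

```python
import numpy as np

def field_from_landmarks(top_px, bottom_px, spacing_mm):
    """Toy geometry derivation: convert two detected landmarks
    (pixel coordinates of the superior and inferior field borders
    on a 2D radiograph) into millimetre-space field parameters.
    The isocenter is taken as the landmark midpoint, an
    illustrative convention rather than the paper's definition."""
    top = np.asarray(top_px, dtype=float) * spacing_mm
    bottom = np.asarray(bottom_px, dtype=float) * spacing_mm
    isocenter = (top + bottom) / 2.0
    field_length = abs(bottom[1] - top[1])  # IS extent of the field
    return isocenter, field_length

# hypothetical landmarks at 0.5 mm/pixel
iso, length = field_from_landmarks((100, 100), (100, 388), spacing_mm=0.5)
print("isocenter (mm):", iso, "field length (mm):", length)
```

Parameters derived this way would then feed the plan emulation on the surrogate CT, whose output the paper validates against manual emulations (<5 mm isocenter/border difference on average).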