Imaging and image processing are the fundamental pillar of interventional oncology, sustaining diagnosis, procedure planning, treatment and follow-up. Knowing all the possibilities that the different imaging modalities can offer is essential to selecting the most appropriate and accurate guidance for interventional procedures. Although physicians' preferences and the availability of the different imaging modalities to guide interventional procedures vary widely, it is important to recognize the advantages and limitations of each of them. In this review, we aim to provide an overview of the most frequently used image guidance modalities for interventional procedures and their typical and future applications, including angiography, computed tomography (CT) and spectral CT, magnetic resonance imaging, ultrasound and the use of hybrid systems. Finally, we summarize the possible role of image-related artificial intelligence in patient selection, treatment and follow-up.
OBJECTIVE A major obstacle to improving bedside neurosurgical procedure safety and accuracy with image guidance technologies is the lack of a rapidly deployable, real-time registration and tracking system for a moving patient. This deficiency explains the persistence of freehand placement of external ventricular drains, which has an inherent risk of inaccurate positioning, multiple passes, tract hemorrhage, and injury to adjacent brain parenchyma. Here, the authors introduce and validate a novel image registration and real-time tracking system for frameless stereotactic neuronavigation and catheter placement in the nonimmobilized patient. METHODS Computer vision technology was used to develop an algorithm that performed near-continuous, automatic, and marker-less image registration. The program fuses a subject's preprocedure CT scans to live 3D camera images (Snap-Surface), and patient movement is incorporated by artificial intelligence-driven recalibration (Real-Track). The surface registration error (SRE) and target registration error (TRE) were calculated for 5 cadaveric heads that underwent serial movements (fast- and slow-velocity roll, pitch, and yaw motions) and several test conditions, such as surgical draping with limited anatomical exposure and differential subject lighting. Six catheters were placed in each cadaveric head (30 total placements) with a simulated sterile technique. Postprocedure CT scans allowed comparison of planned and actual catheter positions for user error calculation. RESULTS Registration was successful for all 5 cadaveric specimens, with an overall mean (+/- standard deviation) SRE of 0.429 +/- 0.108 mm for the catheter placements.
Accuracy of TRE was maintained under 1.2 mm throughout specimen movements at low and high velocities of roll, pitch, and yaw, with the slowest recalibration time of 0.23 seconds. There were no statistically significant differences in SRE when the specimens were draped or fully undraped (p = 0.336). Performing registration in a bright versus a dimly lit environment had no statistically significant effect on SRE (p = 0.742 and 0.859, respectively). For the catheter placements, mean TRE was 0.862 +/- 0.322 mm and mean user error (difference between target and actual catheter tip) was 1.674 +/- 1.195 mm. CONCLUSIONS This computer vision-based registration system provided real-time tracking of cadaveric heads with a recalibration time of less than one-quarter of a second with submillimetric accuracy and enabled catheter placements with millimetric accuracy. Using this approach to guide bedside ventriculostomy could reduce complications, improve safety, and be extrapolated to other frameless stereotactic applications in awake, nonimmobilized patients.
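The TRE and user-error figures above both reduce to Euclidean distances between corresponding 3D points on planned and post-procedure CT coordinates. As a minimal sketch (the function name and coordinates are illustrative, not the authors' actual pipeline):

```python
import numpy as np

def point_errors_mm(planned, actual):
    """Per-point Euclidean distances (mm) between planned target
    positions and positions measured on post-procedure CT, with the
    mean +/- SD summary typically reported for TRE or user error."""
    planned = np.asarray(planned, dtype=float)
    actual = np.asarray(actual, dtype=float)
    errors = np.linalg.norm(planned - actual, axis=1)
    return errors, float(errors.mean()), float(errors.std())

# Illustrative coordinates only (mm).
errors, mean_err, sd_err = point_errors_mm(
    [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]],
    [[0.0, 0.0, 1.0], [10.0, 0.0, 0.0]],
)
```

The same distance computation, applied to surface points rather than targets, underlies a surface registration error summary.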
Purpose A surgical navigation system that provides guidance throughout surgery can facilitate safer and more radical liver resections, but such a system should also be able to handle organ motion. This work investigates the accuracy of intraoperative surgical guidance during open liver resection, with a semi-rigid organ approximation and electromagnetic tracking of the target area. Methods The suggested navigation technique incorporates a preoperative 3D liver model based on a diagnostic 4D MRI scan, intraoperative contrast-enhanced CBCT imaging and electromagnetic (EM) tracking of the liver surface, as well as surgical instruments, by means of six degrees-of-freedom micro-EM sensors. Results The system was evaluated during surgeries on 35 patients and resulted in an accurate and intuitive real-time visualization of liver anatomy and tumor location, confirmed by intraoperative checks on visible anatomical landmarks. Based on accuracy measurements verified by intraoperative CBCT, the system's average accuracy was 4.0 +/- 3.0 mm, while the total surgical delay due to navigation stayed below 20 min. Conclusions The electromagnetic navigation system for open liver surgery developed in this work allows for accurate localization of liver lesions and critical anatomical structures surrounding the resection area, even when the liver is manipulated. However, further clinical integration of the method requires shortening the guidance-related surgical delay, which can be achieved by shifting to faster intraoperative imaging such as ultrasound. Our approach is adaptable to navigation on other mobile and deformable organs, and therefore may benefit various clinical applications.
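Tracked navigation of this kind ultimately hinges on mapping EM-sensor readings into image coordinates. A minimal sketch of that step, assuming a calibrated 4x4 homogeneous image-from-tracker transform (names and values are illustrative, not from the paper):

```python
import numpy as np

def tip_in_image(T_image_from_em, tip_em):
    """Map an instrument tip from EM-tracker coordinates into
    image (e.g., CBCT) coordinates using a 4x4 homogeneous
    transform obtained from registration/calibration."""
    p = np.append(np.asarray(tip_em, dtype=float), 1.0)  # homogeneous point
    return (np.asarray(T_image_from_em, dtype=float) @ p)[:3]

# Illustrative transform: pure translation by (1, 2, 3) mm.
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
tip_img = tip_in_image(T, [0.0, 0.0, 0.0])
```

In a real system the transform would be re-estimated as the organ moves; a semi-rigid approximation keeps the mapping a single rigid transform per tracked surface patch.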
Schoot, A.J.A.J. van de; Schooneveldt, G.; Wognum, S.; Hoogeman, M.S.; Chai, X.; Stalpers, L.J.A.; ... ; Bel, A. 2014
Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edges. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances.
Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone); 0.85, 0.28 cm and 0.22 cm (female, supine); 0.89, 0.21 cm and 0.17 cm (male, supine); and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape-model-based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method needs only two pretreatment imaging data sets as prior knowledge, is independent of patient gender and treatment position, and allows manual local adaptation of the segmentation. (C) 2014 American Association of Physicists in Medicine.
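The overlap and volume-agreement metrics used in this evaluation are easy to state concretely. A hedged sketch of the Dice similarity coefficient and a Bland-Altman bias with 95% limits of agreement (the masks, volumes, and function names are illustrative, not data from the paper):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def bland_altman(vol_manual, vol_auto):
    """Bias (mean paired difference) and 95% limits of agreement."""
    d = np.asarray(vol_manual, dtype=float) - np.asarray(vol_auto, dtype=float)
    bias, sd = d.mean(), d.std()
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy 1D masks: |A intersect B| = 1, |A| = 2, |B| = 1  ->  DSC = 2/3.
dsc = dice_coefficient([1, 1, 0], [1, 0, 0])
```

In practice the masks would be 3D voxel arrays rasterized from the manual delineation and the automatic segmentation, and the Bland-Altman inputs would be the paired bladder volumes.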