Program Information
Mapping Endoscope Images to CT: Methods and Uncertainties
W S Ingram1*, J Yang1, J Qiu2, R Weersink2,3, B Beadle4, R Wendt5, A Rao6, L Court1 (1) Dept. of Radiation Physics, UT MD Anderson Cancer Center, Houston, TX (2) Dept. of Radiation Medicine, University Health Network, Toronto, ON (3) Dept. of Radiation Oncology, University of Toronto, Toronto, ON (4) Dept. of Radiation Oncology, Stanford University, Stanford, CA (5) Dept. of Imaging Physics, UT MD Anderson Cancer Center, Houston, TX (6) Dept. of Bioinformatics and Computational Biology, UT MD Anderson Cancer Center, Houston, TX
Presentations
SU-K-FS4-5 (Sunday, July 30, 2017) 4:00 PM - 6:00 PM Room: Four Seasons 4
Purpose: To develop an endoscopy-CT image-registration framework for head-and-neck radiotherapy patients.
Methods: Registration was developed and tested on phantoms with point and bolus fiducials and a patient dataset (n=3, 46 frames total). Endoscope video frames were registered to CT by optimizing virtual-endoscope placement to maximize the similarity between frames and virtual endoscope images, which were rendered from a CT isosurface (0.6 g/cm³ threshold). Frames were mapped to CT by projecting pixels through the registered virtual image onto the isosurface. A novel registration technique was developed to search the virtual-endoscope coordinate space by optimizing view directions from a set of positions initialized along a manually drawn candidate endoscope path through the volume. Two patients’ exams were recorded in the simulation position with electromagnetic endoscope tracking, providing a second technique for measuring virtual-endoscope coordinates. First, ground-truth virtual-endoscope coordinates were obtained by aligning frame-to-virtual point correspondences. Then, patient-image registration errors were quantified by mapping all pixels using both the measured and ground-truth coordinates and calculating the distances between them. The frame-to-virtual similarity measure (edge-alignment-weighted mutual information) was chosen by comparing registered coordinates to ground-truth coordinates. The virtual-endoscope lighting model, which influences the similarity calculation, was chosen by frame-to-virtual histogram comparison. The potential impacts of (1) daily anatomical variations and (2) patient-positioning differences were evaluated on a separate patient set (n=13) using treatment-room CT-on-rails images and diagnostic CTs, respectively. These registrations were virtual-to-virtual, with a simulation-CT reference image and a test image positioned via deformable registration to the simulation CT.
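The edge-alignment-weighted mutual information similarity measure could be sketched as below. The abstract does not specify the exact weighting scheme, so gradient-direction agreement between the real and virtual images is assumed here as the edge-alignment weight; the function name and bin count are illustrative, not from the original work.

```python
import numpy as np

def weighted_mutual_info(frame, virtual, bins=32):
    """Mutual information between two grayscale images, with each pixel
    pair weighted by the agreement of its edge (gradient) directions.
    Sketch only: the weighting assumed here is gradient-direction cosine
    similarity, clipped to be non-negative."""
    # Image gradients for the edge-alignment weights
    gy1, gx1 = np.gradient(frame.astype(float))
    gy2, gx2 = np.gradient(virtual.astype(float))
    dot = gx1 * gx2 + gy1 * gy2
    norm = np.hypot(gx1, gy1) * np.hypot(gx2, gy2) + 1e-12
    w = np.clip(dot / norm, 0.0, None)  # aligned edges get weight near 1

    # Edge-weighted joint intensity histogram -> joint probability
    h, _, _ = np.histogram2d(frame.ravel(), virtual.ravel(),
                             bins=bins, weights=w.ravel())
    p = h / (h.sum() + 1e-12)
    px = p.sum(axis=1, keepdims=True)   # marginal of frame intensities
    py = p.sum(axis=0, keepdims=True)   # marginal of virtual intensities
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

In a registration loop, this score would be evaluated for candidate virtual-endoscope poses and maximized; a well-aligned frame/virtual pair scores higher than a misaligned one.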
Results: In phantoms, the point error was 5.5 ± 6.0 mm and the bolus-contour mean surface distance was 2.9 ± 1.5 mm. In patients, the median registration error with our novel technique (8.3 ± 6.1 mm) was less than that with electromagnetically tracked coordinates (11.7 ± 3.9 mm, p < 0.01). Anatomical variations and patient-positioning differences contributed median errors of 3.8 ± 2.0 mm and 4.0 ± 2.2 mm, respectively.
Conclusion: Head-and-neck endoscopy-CT image registration is feasible, with a lower bound on accuracy in patients of ~5 mm.
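If the two error sources reported in the Results (anatomical variations, 3.8 mm; positioning differences, 4.0 mm) are assumed to be independent, their quadrature sum is about 5.5 mm, consistent with the stated ~5 mm lower bound. A quick sanity check of that assumption:

```python
import math

# Median error contributions reported in the Results (mm);
# independence of the two sources is an assumption, not stated in the abstract.
anatomy_mm = 3.8       # daily anatomical variations
positioning_mm = 4.0   # patient-positioning differences

# Quadrature (root-sum-square) combination of independent errors
combined_mm = math.hypot(anatomy_mm, positioning_mm)
print(f"{combined_mm:.1f} mm")  # prints "5.5 mm"
```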