Nonrigid Image Registration for Colorectal Cancer Surgery

 

Investigators:

WSU:    Ardeshir Goshtasby (PhD), Lyubomir Zagorchev (PhD)

KMC:    Martin Satter (PhD), Joseph Mantil (MD, PhD)

OSU:    Edward Martin, Jr. (MD), Ergun Kocak (MD)

Funding Agency: NIH


This study is aimed at developing methodology and software for the nonrigid registration of whole-body PET and CT volumes. Colorectal cancer patients are selected for resection based on their presurgical registered PET/CT data. In the operating room (OR), an intraoperative CT image is acquired prior to surgical exploration. The intraoperative image is used to deform the presurgical PET/CT image volume, via nonrigid image registration software, so that it matches the patient's anatomy on the surgical bed. The modified PET/CT image volume is then input to an image-guided surgical navigation system, which is used in conjunction with a positron-detector surgical probe to improve the surgeons' ability to detect and remove disease. The overall goal of this project is to extend survival for a greater proportion of colorectal cancer patients.

Registration of PET and CT scans is performed on patients with confirmed or suspected colorectal cancer. Based on selection criteria, a subset of these patients is scheduled for colorectal cancer surgery. The high-resolution preoperative CT scans of these patients are “painted” with regions of interest (ROIs) demarcating the locations of lesions observed on the registered PET scan. In the OR, intraoperative CT scans of the patient are acquired with an Ohio Medical Instruments Mayfield MobileScan CT scanner. These intraoperative scans are used to transform the high-resolution CT scan to reflect the new shapes and locations of anatomical structures during surgery. The modified high-resolution CT scans provide the input to a Medtronic StealthStation for image-guided colorectal cancer surgery.
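To make the transformation step concrete, the sketch below (Python with NumPy/SciPy) illustrates how a preoperative CT volume and its painted ROI mask could be resampled through a dense displacement field of the kind a nonrigid registration driven by the intraoperative CT might produce. It is a hypothetical illustration rather than the project's software; the function names and the voxel-space displacement convention are assumptions.

    # Illustrative sketch, not the project's code: warp a preoperative CT and
    # its ROI mask through a dense displacement field given in voxel units.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_volume(volume, displacement, order=1):
        """Resample a 3-D volume through a dense displacement field.

        volume       : 3-D array (z, y, x), e.g. the preoperative CT or ROI mask.
        displacement : 4-D array (3, z, y, x); for each output voxel, the offset
                       (in voxels) to the location sampled from `volume`.
        order        : interpolation order; 0 for label/ROI masks, 1 for CT.
        """
        grid = np.indices(volume.shape, dtype=np.float32)   # identity coordinates
        coords = grid + displacement                         # where to sample from
        return map_coordinates(volume, coords, order=order, mode='nearest')

    # warped_ct  = warp_volume(preop_ct, disp, order=1)   # CT intensities
    # warped_roi = warp_volume(roi_mask, disp, order=0)   # painted ROI labels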

The procedure to register preoperative CT and PET images involves an initial rigid-body registration followed by a nonrigid registration. The rigid-body registration is based on multimodal rigid-body registration software recently developed by the investigators to geometrically align volumetric brain PET scans with MR/CT data sets [1]. The approach finds a number of corresponding subvolumes (templates) in the images using mutual information as the similarity measure, then uses these correspondences to determine the transformation. The advantage of mutual information is that a high similarity is obtained whenever a function exists that maps intensities in the reference template to intensities in the target template. To achieve an accurate registration, the templates selected from the reference (the volume to be re-oriented and re-sampled) must represent highly detailed and unique regions. The detail, or information content, of a template is measured by the sum of gradient magnitudes within it. The templates are ordered by this measure and the top 5% are selected. Because selected templates may overlap one another, those that overlap by more than 50% are removed. Among the remaining reference templates, the twelve most unique are kept for matching to templates in the target (the volume to which the reference is aligned). The coordinates of the centers of corresponding templates are used to determine a rigid-body transformation that brings the preoperative CT and PET images into the same coordinate system, so that corresponding points in the images have the same coordinates.
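As an illustration of two steps in this pipeline, the sketch below (Python with NumPy/SciPy, not the investigators' implementation) scores candidate templates by the sum of gradient magnitudes they contain and recovers a rigid-body transform from the centers of matched templates using an SVD-based least-squares fit. The function names and the cubic template shape are assumptions, and the SVD estimator shown is one standard choice; the estimator used in [1] may differ.

    # Illustrative sketch of template scoring and rigid-transform fitting.
    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def template_scores(volume, centers, size, sigma=1.0):
        """Sum of gradient magnitudes in a cubic template around each center.

        Assumes every center lies at least size//2 voxels from the border.
        """
        grad = gaussian_gradient_magnitude(volume.astype(np.float32), sigma)
        h = size // 2
        return np.array([grad[cz - h:cz + h, cy - h:cy + h, cx - h:cx + h].sum()
                         for cz, cy, cx in centers])

    def rigid_from_correspondences(ref_pts, tgt_pts):
        """Least-squares rotation R and translation t with tgt ~ R @ ref + t."""
        ref_pts = np.asarray(ref_pts, dtype=float)
        tgt_pts = np.asarray(tgt_pts, dtype=float)
        ref_c, tgt_c = ref_pts.mean(axis=0), tgt_pts.mean(axis=0)
        H = (ref_pts - ref_c).T @ (tgt_pts - tgt_c)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # reject a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ ref_c
        return R, t

    # Usage (illustrative): score all candidate centers, keep the top 5%,
    # discard templates overlapping by more than 50%, match the survivors to
    # the target by maximizing mutual information, and fit the rigid transform
    # from the matched template centers with rigid_from_correspondences().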

The following shows the registration interface. After reading in a CT image and a PET image, the images are approximately aligned manually in 20 to 30 seconds. Then the images are rigidly registered automatically; this takes about a minute. The rigidly aligned images are then locally deformed and registered by a nonrigid registration method. Nonrigid registration is anticipated to take several minutes.

After the images are registered (manually, rigidly, or nonrigidly), a spin image of the PET, CT, or registered PET/CT can be generated. By clicking on a point in the spin image, the corresponding 3-D point is marked and shown in the three orthogonal views. The user may also delineate a region of interest manually in the PET image and view the delineated region in the registered CT image.
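As a rough illustration of this picking operation, and under the assumption that a spin-image frame is a maximum-intensity projection of the registered volume, the sketch below (Python/NumPy) maps a click in one projection frame back to a 3-D voxel by taking the depth at which intensity along the clicked ray is greatest. The actual renderer in the interface may work differently, and the function names are hypothetical.

    # Illustrative sketch: pick the 3-D voxel behind a click on an
    # axis-aligned maximum-intensity projection (MIP) frame.
    import numpy as np

    def spin_frame(volume, axis=0):
        """One projection frame: the maximum intensity along `axis`."""
        return volume.max(axis=axis)

    def pick_voxel(volume, row, col, axis=0):
        """Map a click at (row, col) in the axis-`axis` projection to a voxel index."""
        depth = int(volume.argmax(axis=axis)[row, col])  # brightest voxel along the ray
        idx = [row, col]
        idx.insert(axis, depth)                          # restore the projected axis
        return tuple(idx)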

[1] L. Ding, A. Goshtasby, and M. Satter, “Volume image registration by template matching,” Image and Vision Computing, vol. 19, no. 12, pp. 821-832, 2001.


For more information contact Ardy Goshtasby (ardy@cs.wright.edu).