Registration of Multimodal Volume Images
Goshtasby (WSU), M. Satter (WKNI)
Students: Lijun Ding, Thisath Kularatna
Sponsor: Wallace-Kettering Neuroscience Institute
A method for registering multimodal volume images is developed. The method is currently optimized to register brain images, but it can be adjusted to register whole-body images as well. The current implementation assumes that the images have no nonlinear geometric differences; it is therefore suitable for registering multimodal brain images of the same individual taken a short time apart. Mutual information is used as the similarity measure in a template-matching process to determine a number of corresponding points in the images. From among the corresponding points, the four pairs that produce the least error when a linear function is fitted to them are determined. This process eliminates outliers and inaccurate correspondences caused by image noise. Spherical templates are used to increase the reliability of the similarity measure when the images have rotational differences. The method can be extended to register whole-body images with nonlinear geometric differences. It has been tested on CT, MR, and PET images of the brain, with very encouraging results.
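The mutual-information similarity measure mentioned above can be computed from the joint intensity histogram of two image regions. The following is a minimal NumPy sketch, not the project's actual implementation; the function name and bin count are illustrative assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equally sized
    image regions (e.g. a template and a candidate window).
    Illustrative sketch; bin count and names are assumptions."""
    # Joint histogram of intensity pairs, normalized to a probability table.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal distribution of a
    py = pxy.sum(axis=0)   # marginal distribution of b
    # Sum p(x,y) * log(p(x,y) / (p(x) p(y))) over nonzero joint entries.
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

Because mutual information depends only on the statistical dependence between intensities, not on their absolute values, it can compare regions from different modalities (e.g. MR against PET) where direct intensity differences would be meaningless.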
Volume image registration techniques are becoming increasingly important, mainly due to the demands of clinical applications. More than one tomographic radiological imaging modality is often required for diagnosis or treatment planning, since different modalities provide complementary information about the pathology and anatomy of the patient. Variations in patient orientation and differences in the resolution and contrast of the modalities make it difficult for a clinician to mentally fuse all the image information accurately. Image registration techniques are needed to bring the different modalities into a common coordinate frame.
We implement the registration by template matching. The procedure consists of two steps: first, an approximate transformation is obtained interactively; then the transformation is refined by template matching. Obtaining the approximate transformation interactively greatly speeds up the registration.
Registration environment and 3-D view of the original overlaid images (PET over MR)
1. Approximately register the images.
We have three methods to approximately register the images:
(1) Select corresponding points in the two images manually and calculate the approximate transformation function.
(2) First obtain the surfaces, and then determine the approximate transformation by manually matching the surfaces.
(3) Adjust the transformation function in real time by visually observing three overlaid cross sections of the two images.
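Method (1) above requires fitting a linear transformation to manually selected corresponding points. A minimal least-squares sketch in NumPy follows; the function names are hypothetical, and a full implementation would also weigh point accuracy and reject poor correspondences:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping 3-D points src -> dst.
    src, dst: (N, 3) arrays of corresponding points, N >= 4.
    Illustrative sketch; names are assumptions."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Augment with a column of ones so the translation is estimated too.
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ M ~= dst for the 4x3 parameter matrix M.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Map 3-D points through the fitted 4x3 affine matrix M."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

Four non-coplanar point pairs determine the 12 affine parameters exactly; with more pairs the least-squares solution averages out small localization errors.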
2. Accurately register the images.
This step also has two parts. First, highly detailed and unique templates are selected. Then their correspondences are found with subvoxel accuracy and used to compute the refined transformation.
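The correspondence search can be pictured as sliding each template over a search region of the other image and keeping the position where mutual information peaks. The sketch below uses a cubic template and integer offsets only; the subvoxel refinement and spherical templates of the actual method are not shown, and all names are illustrative:

```python
import numpy as np

def mi(a, b, bins=16):
    """Histogram-based mutual information of two equally sized regions."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))

def match_template(search, template):
    """Exhaustive search: slide the template over the search volume and
    return the integer offset maximizing mutual information.
    Illustrative sketch; a practical version restricts the search range."""
    tz, ty, tx = template.shape
    best_score, best_off = -np.inf, None
    for i in range(search.shape[0] - tz + 1):
        for j in range(search.shape[1] - ty + 1):
            for k in range(search.shape[2] - tx + 1):
                window = search[i:i + tz, j:j + ty, k:k + tx]
                score = mi(window, template)
                if score > best_score:
                    best_score, best_off = score, (i, j, k)
    return best_off, best_score
```

Repeating this search for several templates yields the set of corresponding points from which the best four pairs, in the least-error sense described above, are chosen to compute the final linear transformation.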
For more information contact: A. Goshtasby (firstname.lastname@example.org).
Last modified: 9/2/99.