Registration of Deformed Images

Marty Satter & Ardy Goshtasby


Proc. Image Registration Workshop, Washington, DC, Nov. 20-21, 1997.


Abstract: The determination of transformation functions for the registration of deformed images is discussed. Each component of a transformation is viewed as a surface, and the transformation is determined by surface fitting. Viewing the components of a transformation as surfaces enables the detection of wrong homologous points and makes it possible to estimate the geometry of the scene. Based on these ideas, a coarse-to-fine method is developed that can register images with local geometric differences. At the coarsest resolution, the images are aligned using planes as the components of the transformation; at mid resolutions, surfaces with radial basis functions are used to register the images; and at the highest resolution, elastic surfaces are used to accommodate large local geometric differences between the images. Results of the proposed registration method on a variety of images are presented.
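The surface view of a transformation lends itself to a compact implementation. The paper contains no code, so the following is a minimal sketch in Python/NumPy: it fits the two components X = fx(x, y) and Y = fy(x, y) from homologous points, using a least-squares plane for the coarsest resolution and Hardy multiquadrics standing in for the radial basis functions at mid resolution. The exact basis functions, the resolution schedule, and the elastic-surface formulation of the paper are not reproduced here, and the control points are made-up examples.

```python
import numpy as np

def fit_plane(xy, values):
    """Coarsest resolution: least-squares plane v = a*x + b*y + c."""
    A = np.column_stack([xy, np.ones(len(xy))])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return lambda p: np.column_stack([p, np.ones(len(p))]) @ coeffs

def fit_rbf(xy, values, c=10.0):
    """Mid resolution: interpolating radial-basis surface.  Hardy
    multiquadrics phi(r) = sqrt(r^2 + c^2) are assumed here; their
    interpolation matrix is nonsingular for distinct points."""
    phi = lambda r: np.sqrt(r**2 + c**2)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    w = np.linalg.solve(phi(dist), values)
    return lambda p: phi(np.linalg.norm(p[:, None, :] - xy[None, :, :], axis=2)) @ w

# Made-up homologous points: (x, y) in the reference image, (X, Y) in the sensed image.
xy = np.array([[10., 12.], [80., 15.], [45., 60.], [20., 85.], [90., 70.]])
XY = np.array([[13., 10.], [84., 18.], [50., 63.], [25., 88.], [96., 75.]])

# Each component of the transformation is a surface over the reference image.
fx, fy = fit_rbf(xy, XY[:, 0]), fit_rbf(xy, XY[:, 1])   # X = fx(x,y), Y = fy(x,y)
assert np.allclose(fx(xy), XY[:, 0])   # the fitted surface passes through the points
```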

Figure 1. Homologous points in two images.

Figure 2. x and y components of a transformation function registering the points shown in Figure 1.

Figure 3. The two components of a transformation function registering the points in Figure 1, excluding point 7.

Figure 4. Determining wrong homologous points. (a) and (b) are the same as Figures 2a and 2b, respectively. (c) and (d) are the same as (a) and (b), respectively, except that the components of the transformation were rotated by 45 degrees about the y- and x-axes, respectively, and the negative values were inverted. (e) and (f) show the points of Figure 1a overlaid on (c) and (d). Both (e) and (f) show that the points labeled 7 in the two images do not correspond to each other.
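Figure 4 describes a visual test: rotate the component surfaces and overlay the points. A programmatic stand-in for the same idea, not taken from the paper, is a leave-one-out residual test: a wrong homologous point is one whose position the surfaces fitted to the remaining points fail to predict. This sketch reuses fit_rbf and the example points from the sketch above.

```python
def leave_one_out_errors(xy, XY):
    """Fit the component surfaces without each point in turn and measure
    how far the prediction falls from that point's claimed match."""
    errs = []
    for i in range(len(xy)):
        keep = np.arange(len(xy)) != i
        fx = fit_rbf(xy[keep], XY[keep, 0])
        fy = fit_rbf(xy[keep], XY[keep, 1])
        pred = np.array([fx(xy[i:i+1])[0], fy(xy[i:i+1])[0]])
        errs.append(np.linalg.norm(pred - XY[i]))
    return np.array(errs)

suspect = np.argmax(leave_one_out_errors(xy, XY))   # e.g. point 7 in Figure 1
```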

Figure 5. Detecting wrong homologous points in images with nonlinear geometric differences. (a) and (b) are a pair of stereo images from the CMU stereo collection. (c) and (d) are the components of the transformation function registering points in the images; they show the horizontal and vertical disparities between the images. (e) is the image obtained by combining the horizontal and vertical disparities. Intensities in this image show the overall disparity between the images, which could come from scene structure, from wrong correspondences, or from both.
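Figures 5c-5e can be reproduced from the fitted components directly: evaluating each component over a dense grid and subtracting the identity yields the horizontal and vertical disparity fields, and their magnitude gives the combined disparity image. A minimal sketch, continuing the one above and assuming a 100 x 100 reference image:

```python
def disparity_fields(xy, XY, grid):
    """Evaluate the fitted components on a dense grid and subtract the
    identity: the x component gives horizontal disparity (Figure 5c),
    the y component vertical disparity (Figure 5d)."""
    fx, fy = fit_rbf(xy, XY[:, 0]), fit_rbf(xy, XY[:, 1])
    return fx(grid) - grid[:, 0], fy(grid) - grid[:, 1]

# Dense grid over the (assumed) 100 x 100 reference image.
gx, gy = np.meshgrid(np.arange(100.), np.arange(100.))
grid = np.column_stack([gx.ravel(), gy.ravel()])

dx, dy = disparity_fields(xy, XY, grid)
overall = np.hypot(dx, dy).reshape(100, 100)   # Figure 5e: combined disparity
```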

Figure 6. Separating disparities due to scene structure from disparities due to wrong homologous points. (a) and (b) are the disparities obtained using the odd-numbered homologous points shown in Figures 5a and 5b. (c) and (d) are the disparities obtained using the even-numbered homologous points. Subtracting the corresponding disparity images, we obtain the disparities shown in (e) and (f), which correspond to wrong homologous points. Combining (e) and (f), we obtain image (g), showing the overall disparities. Overlaying the points of Figure 5a on image (g), we obtain image (h), which identifies the wrong homologous points.
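The odd/even split of Figure 6 is a cross-validation of the disparity fields: scene structure shows up in surfaces fitted to either subset, while a wrong homologous point distorts only the subset that contains it. A sketch of this comparison, reusing disparity_fields from above (indexing the subsets by point label is an assumption):

```python
idx = np.arange(len(xy))
dxo, dyo = disparity_fields(xy[idx % 2 == 1], XY[idx % 2 == 1], grid)  # odd-numbered
dxe, dye = disparity_fields(xy[idx % 2 == 0], XY[idx % 2 == 0], grid)  # even-numbered

# Scene structure produces similar disparities in both subsets; a wrong
# match distorts only the subset containing it, so the difference fields
# (Figures 6e and 6f) peak near the bad correspondences.
ddx, ddy = dxo - dxe, dyo - dye
bad_map = np.hypot(ddx, ddy).reshape(100, 100)   # Figure 6g analogue
```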

Figure 7. (a) The images of Figures 5a and 5b overlaid after registration at the coarsest resolution. (b) Overlay of the images after completing registration at mid resolution. (c) Overlay of the images using the registration result at the highest resolution. (d) Depth map of the scene computed from the image disparities that represent the components of the transformation function.

Figure 8. Registering stereo images. (a), (b) A pair of stereo images showing the surface of Mars, obtained by the NASA Mars Pathfinder mission. (c), (d), (e) Result of registration at the lowest, mid, and highest resolutions, respectively. (f) Depth map of the scene obtained from the components of the transformation function registering the images.
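For rectified stereo pairs such as those in Figures 7 and 8, the depth maps in panels 7d and 8f follow from the horizontal disparity through the pinhole relation Z = fB/d. The paper gives no calibration data, so the focal length and baseline below are placeholder values; dx is the horizontal disparity field from the sketch after Figure 5.

```python
# Hypothetical calibration: focal length in pixels, baseline in meters.
focal_px, baseline = 700.0, 0.12
Z = focal_px * baseline / np.maximum(np.abs(dx.reshape(100, 100)), 1e-6)
```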



Figure 9. (a) - (d) Four images of a coastline from a NASA Shuttle mission. (e) - (g) Registration of images (a) and (b), (a) and (c), and (a) and (d), respectively, by our coarse-to-fine registration method. (h) Registration of all four images (a) - (d).



Figure 10. (a) A color image showing a slice from the Visible Human male dataset from the National Library of Medicine. (b) The intensities of image (a). (c) The same slice obtained by a CT scanner. Images (a) and (c) show the same cross section of the frozen and fresh cadaver, respectively, so small local geometric differences are anticipated between them. We register images (b) and (c) and use the obtained result to register images (a) and (c). (d) and (e) show registration results at the mid and highest resolutions, respectively. (f) Registration of images (a) and (c) at the highest resolution.

Figure 11. Registering brain MR images. (a), (b) Serial images of a patient with a tumor. (c) Registration of (a) and (b) by a linear transformation; this is the result of our registration at the lowest resolution, which brings the images into approximate alignment. (d) Final registration result.



For more information contact A. Goshtasby (agoshtas@cs.wright.edu).
