This project set out to develop a range scanner with 10-micron accuracy based on a subtractive-light principle. Rather than projecting a laser stripe onto a surface and analyzing the shape of the stripe to determine the surface geometry, the light is obstructed by a thin fiber, and the shape of the resulting shadow profile is analyzed to recover the surface geometry. The hardware organization of the scanner is shown below.
Fig. 1. Hardware organization of the scanner.
White light is guided through a thin fiber to the focal point of a lens, producing a cylindrical beam directed toward the surface being scanned. A thin opaque fiber placed in front of the surface casts a shadow on it. The shadow profile is detected by computer-vision techniques, and from its shape the 3-D geometry of the surface along the shadow is estimated. By sweeping the fiber over the surface and processing images of the shadow, the 3-D geometry of the entire surface is recovered.
The process of detecting the shadow profile is shown below. Each frame is first convolved with two different smoothing operators, and the two smoothed images are subtracted. This difference-of-Gaussians operation approximates the Laplacian of Gaussian of the image. The resulting image contains both positive and negative values, and its zero-crossings identify the shadow boundaries.
Fig. 2. The process involved in detecting the shadow boundaries.
The zero-crossings mark not only the shadow boundaries but every sharp intensity change on the surface. Zero-crossings caused by the fiber's shadow must therefore be distinguished from those caused by sharp changes in surface intensity. Information from consecutive frames is used for this purpose: since the surface and the camera are fixed and only the fiber moves over the surface, subtracting consecutive frames filters out all edges except the shadow edges. The process is depicted below. Image differencing retains only information arising from the motion of the fiber, suppressing the remaining image details and thereby removing edges that do not belong to the shadow boundaries.
Fig. 3. Subtracting consecutive frames erases all image details except in the neighborhood of the shadow profile.
An example shadow boundary determined by this algorithm is shown below. From the shape of such a boundary, the 3-D geometry of the surface along the boundary is calculated.
Fig. 4. A detected shadow boundary.
A scan of the back of a penny is shown below. Due to occlusion, noise, and other factors, the surface height cannot be determined at every point. Heights at the missing points are estimated by an inverse-distance method. Fig. 6 shows the result after filling in the missing heights in this manner.
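One common form of the inverse-distance method is inverse-distance-weighted interpolation; a brute-force sketch is given below. The exact weighting scheme used by the scanner is not specified in the text, so the distance power (`power=2`) is an assumption.

```python
import numpy as np

def fill_missing_idw(height, valid, power=2.0):
    """Estimate heights at invalid grid points as the inverse-distance-
    weighted mean of all valid samples. Brute force: adequate for the
    modest grid sizes of a single scan."""
    filled = height.astype(float).copy()
    vy, vx = np.nonzero(valid)          # coordinates of measured points
    vz = height[vy, vx].astype(float)   # their measured heights
    for y, x in zip(*np.nonzero(~valid)):
        d = np.hypot(vy - y, vx - x)    # distances to every valid sample
        w = 1.0 / d ** power            # closer samples weigh more
        filled[y, x] = np.sum(w * vz) / np.sum(w)
    return filled
```

Points near a cluster of measured heights are pulled strongly toward those values, so the filled surface stays smooth across small gaps like the black spots in Fig. 5.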
Fig. 5. An example scan of the back of a penny by the scanner. Black spots mark points where the surface height could not be calculated from the shadow profiles.
Fig. 6. Estimating missing heights by inverse distance interpolation.