
Estimating Matching Quality

Given two registered point sets that contain an equal number of points (e.g., 250 points each, derived by minimizing the expected quantization error and maximizing the entropy), the quality of a matching can be evaluated using the following method: the distribution of shortest distances $ d_{ij}$ between closest points, i.e., between the $ i$th point of one model and its nearest neighbor, the $ j$th point of the other, shows a typical structure after registration (Fig. [*]). Many distances are very small, i.e., less than 0.3 cm, and many others are considerably larger, e.g., greater than 1 cm. In our experience it is always easy to find a threshold that separates these two maxima. After dividing the set of distances at this threshold, the algorithm computes the mean and the standard deviation over the $ N'$ distances $ d_{i}$ below it, i.e.,

$\displaystyle \mu = \frac{1}{N'}\sum_{i=1}^{N'} d_i \qquad \qquad \sigma = \sqrt{\frac{1}{N'}\sum_{i=1}^{N'} (d_i-\mu)^2}$

Based on these values, the matching quality is estimated by computing a measure $ D$ as a function of $ \mu$ and $ \sigma$; we have been using $ D = \mu + 3\sigma$. Small values of $ D$ correspond to a high-quality matching, whereas larger values indicate lower quality.
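For illustration, the following is a minimal sketch of this quality estimate in Python (not the original implementation), assuming the two registered models are given as N x 3 NumPy arrays with coordinates in cm; the function name matching_quality, the k-d tree based closest-point search, and the example threshold of 0.5 cm are our own assumptions.

import numpy as np
from scipy.spatial import cKDTree

def matching_quality(model_a, model_b, threshold=0.5):
    """Estimate mu, sigma and D = mu + 3*sigma for two registered point sets."""
    # Shortest distance from each point of model_a to its closest point in model_b.
    distances, _ = cKDTree(model_b).query(model_a)

    # Keep only the small distances below the threshold that separates the two
    # maxima of the distance histogram; these are the N' values used above.
    d = distances[distances < threshold]
    if d.size == 0:
        return None  # no closest-point pairs below the threshold

    mu = d.mean()                       # mean of the N' retained distances
    sigma = d.std()                     # standard deviation with the 1/N' normalization
    return mu, sigma, mu + 3.0 * sigma  # small D corresponds to a good matching

Called as matching_quality(model_a, model_b), the function returns the triple (mu, sigma, D).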

Figure: A typical distribution of distances between closest points after registering two models with a fixed (here: 250) number of points.
\includegraphics[width=75mm]{barchart_color}

Figure: Examples of object detection and localization. From left to right: (1) detection using the single cascades of classifiers, with detections in the reflection image shown in green and detections in the depth image in yellow; (2) detection using the combined cascade; (3) the matched 3D model superimposed on the depth image; (4) the detected object in the raw scanner data, i.e., in point representation.
\includegraphics[width=43mm,height=43mm]{kurt_009_singleCascades} \includegraphics[width=43mm,height=43mm]{kurt_009_combinedCascades} \includegraphics[width=43mm,height=43mm]{kurt_009_combinedCascades_with_model} \includegraphics[width=43mm,height=43mm]{kurt_009_points_with_model}

\includegraphics[width=43mm,height=43mm]{volksbot080_singleCascades} \includegraphics[width=43mm,height=43mm]{volksbot080_combinedCascades} \includegraphics[width=43mm,height=43mm]{volksbot080_combinedCascades_with_model} \includegraphics[width=43mm,height=43mm]{volksbot080_points_with_model}

\includegraphics[width=43mm,height=43mm]{human023_singleCascades} \includegraphics[width=43mm,height=43mm]{human023_combinedCascades} \includegraphics[width=43mm,height=43mm]{human023_combinedCascades_with_model} \includegraphics[width=43mm,height=43mm]{human023_points_with_model}

\includegraphics[width=43mm,height=43mm]{volksbot014_singleCascades} \includegraphics[width=43mm,height=43mm]{volksbot014_combinedCascades} \includegraphics[width=43mm,height=43mm]{volksbot014_combinedCascades_with_model} \includegraphics[width=43mm,height=43mm]{volksbot014_points_with_model}



Table: Object name and number of classifier stages versus detection (hit) rate and total number of false alarms; the leading value refers to the combined cascade, the values in parentheses to the single cascades on the reflection and depth images. The test sets consist of 89 images rendered from 20 3D scans. The average processing time per image is also given, including rendering, classification, ray tracing, matching, and evaluation.
object            # stages   detection rate (reflect. img. / depth img.)   false alarms (reflect. img. / depth img.)   avg. proc. time
chair                15       0.767 (0.867 / 0.767)                         12 (47 / 33)                                1.9 sec
kurt robot           19       0.912 (0.912 / 0.947)                          0 ( 5 /  7)                                1.7 sec
volksbot robot       13       0.844 (0.844 / 0.851)                          5 (42 / 23)                                2.3 sec
human                 8       0.961 (0.963 / 0.961)                          1 (13 / 17)                                1.6 sec

