
    2018-11-14


    Data
    This paper presents tables and charts describing the variation in how minutiae are marked on latent fingerprints by latent print examiners, in support of the article “Interexaminer variation of minutia markup on latent fingerprints” [1]. The underlying data was collected in the “White Box” study [2]; the aspects of that data specific to interexaminer variation in minutiae markup have not been previously published.
    Experimental design, materials and methods
    The test procedure, fingerprint data, and examiner determination and markup data are summarized here, and are described in greater detail in [2].
    Materials and methods
    Example markups
    Fig. 2 shows four examples of latent-exemplar pairs (columns A–D); this figure expands on the examples (A and B) used in Fig. 6 of [1]. Marked minutiae are shown as small black dots inside color-coded clusters. For the Analysis phase, cluster colors indicate the proportion of examiners who marked within that cluster; for the Comparison phase, colors indicate the proportion of comparing examiners who corresponded the minutia as marked on the latent. The third row of images (“Latent with Analysis minutiae”) shows all minutiae as marked in the Analysis phase; the fourth row (“Latent with corresponding minutiae”) shows markup from the Comparison phase limited to those minutiae that examiners marked as corresponding; the fifth row (“Exemplar with corresponding minutiae”) shows the locations of the corresponding minutiae as marked on the exemplar. Because marked minutiae from one cluster on the latent did not always correspond to one cluster on the exemplar (whether due to examiner disagreements or to the behavior of the clustering algorithm), the fifth row (“Exemplar with corresponding minutiae”) uses the color-coding from the latent markup to help visualize the correspondences.
    Table 1 describes, for each of the four examples shown in Fig. 2, the number of examiners contributing to the clusters and their determinations. Note that example D is the one comparison on which an erroneous individualization occurred (also shown as an example in Fig. 2 of [7]). Five examiners marked correspondences (two of whom also marked discrepancies), one additional examiner marked debatable correspondences, and one additional examiner marked discrepancies. Even after omitting the examiner who individualized, more correspondences were marked on this image pair (22, in 11 clusters) than on any other nonmated image pair in the test. Other notable examples of nonmated image pairs with many marked correspondences included one with 18 correspondences (in 12 clusters, by two of ten comparing examiners), and another with 13 correspondences (in 8 clusters, by five of eight comparing examiners).
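    As a concrete illustration of how the per-cluster color-coding could be derived, the following minimal Python sketch computes, for one latent, the proportion of examiners who marked a minutia within each cluster. The data layout of (examiner, cluster) pairs and the function name are illustrative assumptions, not part of the published analysis.

        # Minimal sketch (hypothetical data layout): per-cluster reproducibility as the
        # proportion of examiners who marked at least one minutia inside the cluster.
        from collections import defaultdict

        def cluster_reproducibility(marks, n_examiners):
            """marks: iterable of (examiner_id, cluster_id) pairs for one latent."""
            examiners_per_cluster = defaultdict(set)
            for examiner_id, cluster_id in marks:
                examiners_per_cluster[cluster_id].add(examiner_id)
            # Each cluster's reproducibility is the fraction of examiners who marked it.
            return {c: len(e) / n_examiners for c, e in examiners_per_cluster.items()}

        # Example: clusters 0 and 1 marked by 3 and 1 of 4 examiners, respectively.
        print(cluster_reproducibility([(1, 0), (2, 0), (3, 0), (2, 1)], n_examiners=4))
        # {0: 0.75, 1: 0.25}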
    Effect of clustering parameters
    Examiners’ markups differed in whether or not individual minutiae were marked, and in the precise locations where the minutiae were marked. In order to focus on whether examiners agree on the presence or absence of minutiae, we need to see past minor variations in minutia location. Neumann et al. [8] used ellipses to determine whether two minutiae should be considered the same, based on an expectation of more variation in location along the direction of the ridge than perpendicular to ridge flow; here we did not collect minutia direction, making this approach impractical. In [7], our technique of classifying features as retained, moved, added, or deleted was based on a fixed radius of 0.5mm (0.02in., or approximately the average inter-ridge distance); although that approach was satisfactory for two markups where one was derived from the other, it is not well suited to comparing more than two markups.
    We used automated clustering algorithms in order to classify minutiae marked by multiple examiners as representing the same minutia on the latent. Clustering was implemented in two stages: an initial density-based clustering of the marked minutiae, followed by splitting of oversized clusters. Neither algorithm makes use of any information from the fingerprint images themselves; both rely entirely on the x,y coordinates of the minutiae as marked by examiners. The implementation of Density-based Spatial Clustering of Applications with Noise (DBSCAN) we used was written by Michal Daszykowski of the University of Silesia in 2004 [9,10]. The DBSCAN radius was set to 0.015in. (0.38mm) after extensively reviewing the algorithm’s performance over a range of radius settings. In our review, we considered several standard clustering performance measures and visually assessed the resulting clusters as plotted superimposed over the latent prints. As shown in Fig. 3 and Table 2, any choice of radius substantially biases the reproducibility distributions: increasing the radius increases the measured mean reproducibility and decreases the measured number of clusters. We selected a slightly large radius in order to aggregate some of the less precisely focused clusters; we then split many of the oversized clusters in the second step.
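    The study’s clustering code is not reproduced here; the following Python sketch only illustrates the first stage, substituting scikit-learn’s DBSCAN for the Daszykowski implementation cited above. The eps value mirrors the 0.015in. (0.38mm) radius; min_samples=1, the coordinate units, and the synthetic data are assumptions for illustration, and the second-stage splitting of oversized clusters is not shown.

        # Illustrative first-stage clustering of pooled minutia coordinates with DBSCAN
        # (scikit-learn used as a stand-in; min_samples=1 is an assumption).
        import numpy as np
        from sklearn.cluster import DBSCAN

        def cluster_minutiae(points_in, eps=0.015, min_samples=1):
            """points_in: (n, 2) array of x,y minutia coordinates in inches, pooled
            across all examiners who marked this latent. Returns one cluster label per
            point; only the coordinates are used, no image information."""
            return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_in)

        # Sweeping the radius illustrates the bias described above: a larger radius
        # merges nearby clusters, raising measured reproducibility and lowering the
        # measured number of clusters.
        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pts = np.vstack([rng.normal([0.00, 0.00], 0.005, (5, 2)),  # one tight group of marks
                             rng.normal([0.05, 0.00], 0.005, (4, 2)),  # a nearby group
                             [[0.50, 0.50]]])                          # an isolated mark
            for eps in (0.010, 0.015, 0.030, 0.060):
                n_clusters = len(set(cluster_minutiae(pts, eps=eps)))
                print(f"eps = {eps:.3f} in. -> {n_clusters} clusters")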