Quality and Complexity Measures for Data Linkage and Deduplication

Authors

Christen, Peter
Goiser, Karl

Publisher

Springer

Abstract

Deduplicating one data set or linking several data sets are increasingly important tasks in the data preparation steps of many data mining projects. The aim of such linkages is to match all records relating to the same entity. Research interest in this area has increased in recent years, with techniques originating from statistics, machine learning, information retrieval, and database research being combined and applied to improve the linkage quality, as well as to increase performance and efficiency when linking or deduplicating very large data sets. Different measures have been used to characterise the quality and complexity of data linkage algorithms, and several new metrics have been proposed. An overview of the issues involved in measuring data linkage and deduplication quality and complexity is presented in this chapter. It is shown that measures in the space of record pair comparisons can produce deceptive quality results. Various measures are discussed and recommendations are given on how to assess data linkage and deduplication quality and complexity.
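The abstract's claim that measures computed in the space of record pair comparisons can be deceptive can be illustrated with a small, hypothetical example. Comparing two data sets of n records each yields n × n candidate pairs but at most n true matches, so non-matches vastly outnumber matches and a measure such as accuracy is inflated by the huge number of true negatives. All numbers below are invented for illustration; they are not taken from the chapter.

```python
# Hypothetical example: accuracy over the full comparison space looks
# excellent even when the linkage itself is mediocre, because true
# negatives dominate. Precision, recall and F-measure are not fooled.

n = 1_000                      # records in each (hypothetical) data set
total_pairs = n * n            # full comparison space: 1,000,000 pairs

# Assume a classifier correctly links 600 pairs, wrongly links 300
# non-matching pairs, and misses 300 true matches.
tp, fp, fn = 600, 300, 300
tn = total_pairs - tp - fp - fn   # everything else is a true negative

accuracy  = (tp + tn) / total_pairs
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)

print(f"accuracy  = {accuracy:.4f}")   # 0.9994 -- dominated by true negatives
print(f"precision = {precision:.4f}")  # 0.6667
print(f"recall    = {recall:.4f}")     # 0.6667
print(f"f-measure = {f_measure:.4f}")  # 0.6667
```

Although two thirds of the linkage decisions on true matches are wrong or missing, accuracy still exceeds 99.9%, which is why measures that ignore the true-negative count are generally preferred for assessing linkage quality.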

Book Title

Quality Measures in Data Mining: Studies in Computational Intelligence

Restricted until

2037-12-31