Measuring search engine quality
Date
2001
Authors
Hawking, David
Craswell, Nick
Bailey, Peter Richard
Griffiths, Kathleen
Publisher
Kluwer Academic Publishers
Abstract
The effectiveness of twenty public search engines is evaluated using TREC-inspired methods and a set of 54 queries taken from real Web search logs. The World Wide Web is taken as the test collection, and each engine is evaluated as a combination of crawler and text retrieval system. The engines are compared on a range of measures derivable from binary relevance judgments of the first seven live results returned. Statistical testing reveals significant differences between engines and high intercorrelations between measures. Surprisingly, given the dynamic nature of the Web and the time elapsed, there is also a high correlation between the results of this study and those of an earlier study by Gordon and Pathak. For nearly all engines, precision declines gradually as cutoff increases, after some initial fluctuation. Performance of the engines as a group is found to be inferior to that of the participants in the TREC-8 Large Web task, although the best engines approach the median of those systems. Shortcomings of current Web search evaluation methodology are identified and recommendations are made for future improvements. In particular, the present study and its predecessors deal with queries that are assumed to derive from a need to find a selection of documents relevant to a topic. By contrast, real Web search reflects a range of other information need types which require different judging and different measures.
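For reference, the headline measures described above reduce to precision at cutoff k computed from binary relevance judgments of each engine's top seven results, averaged over queries. The following is a minimal illustrative sketch of that computation, not the authors' code; the query names and judgment values are hypothetical.

```python
def precision_at_k(judgments, k):
    """Precision at cutoff k: fraction of the first k results judged
    relevant (1 = relevant, 0 = not). Dividing by k rather than by the
    number of returned results treats missing results as irrelevant,
    following the usual TREC cutoff convention."""
    return sum(judgments[:k]) / k

# Hypothetical binary judgments of one engine's top-7 results
# for two example queries.
runs = {
    "q1": [1, 1, 0, 1, 0, 0, 0],
    "q2": [0, 1, 1, 0, 0, 1, 0],
}

# Mean precision at each cutoff 1..7, averaged over the queries.
for k in range(1, 8):
    mean_p = sum(precision_at_k(j, k) for j in runs.values()) / len(runs)
    print(f"P@{k} = {mean_p:.3f}")
```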
Keywords
Evaluation; Search engines; Web search
Source
Information Retrieval
Type
Journal article
Restricted until
2037-12-31