Workload sampling for enterprise search evaluation
In real-world use of test-collection methods, it is essential that the query test set be representative of the workload expected in the actual application. Using a random sample of queries from a media company's query log as a gold-standard test set, we demonstrate that biases in sitemap-derived and top-n query sets can lead to significant perturbations in engine rankings and to large differences in estimated performance levels.
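The contrast the abstract draws can be sketched in a few lines: a top-n set keeps only the most frequent (head) queries, while a uniform random sample of the log reflects the full workload, including tail queries. The toy log and query strings below are hypothetical, not data from the paper.

```python
# Illustrative sketch (hypothetical data): top-n query set vs a uniform
# random sample of a query log as an evaluation test set.
import random
from collections import Counter

# Toy query log: a few head queries dominate, plus rare tail queries.
query_log = (
    ["weather"] * 50 + ["news"] * 30 + ["sports"] * 15
    + ["archive photo 1997", "byline smith", "crossword help",
       "op-ed index", "obituary jones"]
)

# Top-n set: only the head queries survive.
top_n = [q for q, _ in Counter(query_log).most_common(3)]

# Workload sample: drawn uniformly from the log, so head queries appear
# in proportion to their frequency, but tail queries can also be chosen.
random.seed(0)
workload_sample = random.sample(query_log, k=10)

print("top-n set:      ", top_n)
print("workload sample:", workload_sample)
```

An engine tuned only on the top-n set may look strong on head queries yet be ranked quite differently once tail queries from the sampled workload are scored.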
Collections: ANU Research Publications
Source: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
File: 01_Rowlands_Workload_sampling_for_2007.pdf (217.9 kB, Adobe PDF)
Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.