Evaluating image retrieval

Nikhil V. Shirahatti, Jacobus J. Barnard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

29 Citations (Scopus)

Abstract

We present a comprehensive strategy for evaluating image retrieval algorithms. Because automated image retrieval is only meaningful in its service to people, performance characterization must be grounded in human evaluation. Thus we have collected a large data set of human evaluations of retrieval results, both for query by image example and query by text. The data is independent of any particular image retrieval algorithm and can be used to evaluate and compare many such algorithms without further data collection. The data and calibration software are available on-line (http://kobus.ca/research/data). We develop and validate methods for generating sensible evaluation data, calibrating for disparate evaluators, mapping image retrieval system scores to the human evaluation results, and comparing retrieval systems. We demonstrate the process by providing grounded comparison results for several algorithms.
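
The abstract describes a three-step pipeline: calibrate disparate human evaluators, map retrieval-system scores onto the pooled human judgments, and compare systems against that ground truth. The sketch below is only one plausible reading of that description, not the authors' released code; the z-score calibration, the binned score-to-relevance curve, and the rank-correlation comparison are all assumptions, and every name in it is hypothetical.

# Illustrative sketch only (assumed approach, not the paper's code).
import numpy as np
from scipy.stats import spearmanr

def calibrate_evaluators(ratings_by_evaluator):
    # Z-score each evaluator's ratings so lenient and strict judges become
    # comparable; a simple stand-in for the paper's calibration step.
    return {ev: (np.asarray(r, float) - np.mean(r)) / (np.std(r) + 1e-12)
            for ev, r in ratings_by_evaluator.items()}

def score_to_human_map(system_scores, human_scores, n_bins=10):
    # Sort results by system score, split into bins, and record the mean
    # human judgment per bin: an empirical score -> relevance curve.
    order = np.argsort(system_scores)
    s = np.asarray(system_scores, float)[order]
    h = np.asarray(human_scores, float)[order]
    return [(bs.mean(), bh.mean())
            for bs, bh in zip(np.array_split(s, n_bins),
                              np.array_split(h, n_bins))]

def compare_systems(human_scores, **system_scores):
    # Rank-correlate each system's scores with the human judgments; under
    # this (assumed) criterion, the higher correlation wins.
    return {name: spearmanr(scores, human_scores).correlation
            for name, scores in system_scores.items()}

# Example: two hypothetical systems scored on the same 1,000 results.
rng = np.random.default_rng(0)
human = rng.integers(1, 5, size=1000).astype(float)
sys_a = human + rng.normal(0, 1.0, size=1000)   # tracks humans fairly well
sys_b = rng.normal(0, 1.0, size=1000)           # unrelated to humans
print(compare_systems(human, system_a=sys_a, system_b=sys_b))

A fuller treatment might fit an isotonic regression instead of the binned curve; the binning here is just the simplest monotone-friendly summary of the score-to-judgment mapping.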

Original language: English (US)
Title of host publication: Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
Pages: 955-961
Number of pages: 7
Volume: I
Article number: 1467369
DOI: 10.1109/CVPR.2005.147
ISBN (Print): 0769523722, 9780769523729
State: Published - 2005
Event: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 - San Diego, CA, United States
Duration: Jun 20, 2005 – Jun 25, 2005

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Shirahatti, N. V., & Barnard, J. J. (2005). Evaluating image retrieval. In Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 (Vol. I, pp. 955-961). [1467369] https://doi.org/10.1109/CVPR.2005.147

