Evaluating image retrieval

Nikhil V. Shirahatti, Kobus Barnard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

36 Scopus citations

Abstract

We present a comprehensive strategy for evaluating image retrieval algorithms. Because automated image retrieval is only meaningful in its service to people, performance characterization must be grounded in human evaluation. Thus we have collected a large data set of human evaluations of retrieval results, both for query by image example and query by text. The data is independent of any particular image retrieval algorithm and can be used to evaluate and compare many such algorithms without further data collection. The data and calibration software are available on-line (http://kobus.ca/research/data). We develop and validate methods for generating sensible evaluation data, calibrating for disparate evaluators, mapping image retrieval system scores to the human evaluation results, and comparing retrieval systems. We demonstrate the process by providing grounded comparison results for several algorithms.
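The abstract describes a pipeline with several computational steps: calibrating ratings across disparate evaluators, pooling the calibrated judgments, and comparing a retrieval system's scores against them. Below is a minimal sketch of such a pipeline in Python with NumPy, assuming per-evaluator z-score normalization for the calibration step and Spearman rank correlation as the comparison statistic; the paper's actual calibration and mapping procedures are not given on this page, and the ratings-matrix layout (rows are evaluators, columns are query/result pairs, NaN marks skipped pairs) is an assumption made for illustration:

import numpy as np

def calibrate(ratings):
    # Z-score each evaluator's row to remove per-evaluator offset and
    # scale differences; NaN entries (skipped pairs) are ignored.
    mean = np.nanmean(ratings, axis=1, keepdims=True)
    std = np.nanstd(ratings, axis=1, keepdims=True)
    return (ratings - mean) / np.where(std > 0, std, 1.0)

def pooled_human_score(ratings):
    # Average the calibrated ratings over evaluators, pair by pair.
    return np.nanmean(calibrate(ratings), axis=0)

def spearman(x, y):
    # Rank correlation between system scores and pooled human scores.
    # Ties are broken by position, adequate for continuous scores.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical data: three evaluators rating five query/result pairs
# on different personal scales, plus one system's scores for those pairs.
ratings = np.array([[1, 2, 5, 4, 3],
                    [2, 2, 4, 4, 3],
                    [1, 1, 5, 3, np.nan]], dtype=float)
system = np.array([0.1, 0.2, 0.9, 0.7, 0.4])
print(spearman(system, pooled_human_score(ratings)))  # near 1 = close agreement

Ranking several systems by a statistic of this kind gives an algorithm-independent comparison in the spirit the abstract describes, since the human judgments need to be collected only once and can then be reused against any retrieval algorithm.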

Original language: English (US)
Title of host publication: Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
Publisher: IEEE Computer Society
Pages: 955-961
Number of pages: 7
ISBN (Print): 0769523722, 9780769523729
DOIs
State: Published - 2005
Event: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 - San Diego, CA, United States
Duration: Jun 20, 2005 – Jun 25, 2005

Publication series

Name: Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
Volume: I

ASJC Scopus subject areas

  • General Engineering
