TY - GEN
T1 - Salience-based evaluation strategy for image annotation
AU - Ge, Yong
AU - Wei, Jishang
AU - Yang, Xin
AU - Wu, Xiuqing
PY - 2007
Y1 - 2007
N2 - Evaluating the effectiveness of an image auto-annotation method is a prerequisite for guiding the development of such methods. This paper first surveys most existing evaluation strategies and proposes a novel salience-based evaluation strategy. Most existing evaluation strategies treat every keyword in the annotation results equally. We argue that different keywords in the annotation results have different semantic salience, and that the keyword corresponding to the most prominent concept in an image should be the most semantically salient. Our salience-based evaluation strategy weights keywords by their semantic salience and introduces two evaluation parameters, salience-score and noisy-coefficient, which are more reasonable and more explicit. We conduct experiments on the standard Corel dataset: after obtaining annotation results with three classical statistical models, we compare various evaluation strategies on these results. The results demonstrate that our evaluation strategy is more consistent with human perception.
AB - Evaluating the effectiveness of an image auto-annotation method is a prerequisite for guiding the development of such methods. This paper first surveys most existing evaluation strategies and proposes a novel salience-based evaluation strategy. Most existing evaluation strategies treat every keyword in the annotation results equally. We argue that different keywords in the annotation results have different semantic salience, and that the keyword corresponding to the most prominent concept in an image should be the most semantically salient. Our salience-based evaluation strategy weights keywords by their semantic salience and introduces two evaluation parameters, salience-score and noisy-coefficient, which are more reasonable and more explicit. We conduct experiments on the standard Corel dataset: after obtaining annotation results with three classical statistical models, we compare various evaluation strategies on these results. The results demonstrate that our evaluation strategy is more consistent with human perception.
UR - http://www.scopus.com/inward/record.url?scp=48349136052&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=48349136052&partnerID=8YFLogxK
U2 - 10.1109/CIS.2007.201
DO - 10.1109/CIS.2007.201
M3 - Conference contribution
AN - SCOPUS:48349136052
SN - 0769530729
SN - 9780769530727
T3 - Proceedings - 2007 International Conference on Computational Intelligence and Security, CIS 2007
SP - 381
EP - 385
BT - Proceedings - 2007 International Conference on Computational Intelligence and Security, CIS 2007
T2 - 2007 International Conference on Computational Intelligence and Security, CIS'07
Y2 - 15 December 2007 through 19 December 2007
ER -