Abstract

This article addresses the issue of retrieval result diversification in the context of social image retrieval and discusses the results achieved during the MediaEval 2013 benchmarking. The 38 submitted runs and their results are described and analyzed. A comparison of expert versus crowdsourcing annotations shows that crowdsourcing results differ slightly and exhibit higher inter-observer variability, but remain comparable at a lower cost. Multimodal approaches achieve the best results in terms of cluster recall. Manual approaches can reach high precision but often lower diversity. This detailed analysis of the results offers insights for future work on this topic.
