Abstract

Affective content analysis has gained considerable attention in recent years and is an important challenge in content-based multimedia information retrieval. In this paper, a hierarchical approach is proposed for affect recognition in movie datasets. This approach has been verified on the AFEW dataset, showing an improvement in classification results compared to the baseline. In order to exploit all the visual sentiment aspects contained in the movie excerpts of a realistic dataset such as FilmStim, deep learning features trained on a large set of emotional images are added to the standard audio and visual features. The proposed approach will be integrated into a system that communicates the emotions of a movie to impaired people and contributes to improving their television experience.
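A minimal sketch of the feature-fusion step described above, assuming precomputed per-excerpt descriptors; the feature names, dimensions, and the linear SVM classifier are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fuse_features(audio_feat, visual_feat, deep_sentiment_feat):
    """Concatenate standard audio/visual descriptors with deep features
    extracted by a network pretrained on a large set of emotional images."""
    return np.concatenate([audio_feat, visual_feat, deep_sentiment_feat], axis=-1)

# Hypothetical precomputed descriptors for N movie excerpts.
N = 100
audio = np.random.rand(N, 64)            # e.g. prosodic / spectral statistics
visual = np.random.rand(N, 128)          # e.g. color and motion descriptors
deep_sentiment = np.random.rand(N, 256)  # deep visual sentiment features
labels = np.random.randint(0, 7, N)      # e.g. the 7 AFEW emotion classes

X = fuse_features(audio, visual, deep_sentiment)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, labels)
```

The sketch only illustrates combining the added deep sentiment features with the standard audio and visual descriptors before classification; the hierarchical recognition scheme itself is not reproduced here.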
