Incorporating social media comments in affective video retrieval
Journal of Information Science
Published online on August 12, 2015
Abstract
Affective video retrieval systems aim to find video content that matches users' needs and preferences. Existing systems typically rely on information contained in the video itself to determine its affect category: they either extract low-level features or construct higher-level attributes to train classification algorithms. However, low-level features ignore global relations in the data, and constructing high-level features is time-consuming and problem-dependent. To overcome these drawbacks, an external source of information may be helpful. With the explosive growth and availability of social media, users' comments can be such a source. In this study, a new method for combining social media comments with the audio-visual content of videos is proposed. For the combination stage, a decision-level fusion method based on the Dempster–Shafer theory of evidence is presented. Experiments are carried out on the video clips of the DEAP (Database for Emotion Analysis using Physiological signals) dataset and their associated user comments on YouTube. Results show that the proposed system significantly outperforms the baseline of using only audio-visual content for affective video retrieval.
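As background for the fusion stage, the standard Dempster rule of combination for two independent bodies of evidence is recalled below; this is a generic sketch only, since the abstract does not specify how the mass functions $m_1$ and $m_2$ are obtained from the comment-based and audio-visual classifiers.

\[
(m_1 \oplus m_2)(A) \;=\; \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad A \neq \emptyset,
\]
\[
K \;=\; \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),
\]

where $m_1$ and $m_2$ are basic probability assignments over the same frame of discernment, $K$ measures the conflict between the two sources, and the combined mass of the empty set is set to zero.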