Affective Multimedia Analysis: Introduction, Background and Perspectives

author: Mohammad Soleymani, Intelligent Behaviour Understanding Group (iBUG), Department of Computing, Imperial College London
recorded by: IEEE ICME
published: Sept. 18, 2012,   recorded: July 2012,   views: 3325
Description

In 1995, Picard proposed ideas about how affective computing could be used for multimedia selection [1]. She envisaged a content player that can sense the user's emotional state and deliver content that matches it, which also requires an emotional understanding of the content itself. In 2001, Hanjalic and Xu proposed user-oriented affective video content analysis, pioneering the line of research that aims to understand the affective content of videos from the content itself [2].

With the current rate of expansion of user-generated content, classic cognitive indexing methods are showing their limits. Affective indexing is emerging as a potential alternative that is attracting multimedia researchers. Users also expect content recommendation and delivery systems that better adapt to their tastes and emotions. Although user interaction and social information are bridging the existing gap between users and machines, emotional understanding of both the content and the users will certainly improve the user experience.
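To make the idea of affect-aware content selection more concrete, the minimal sketch below ranks candidate videos by their distance to a user's current state in the valence-arousal plane. The data structure, the sample catalogue, and the numeric values are illustrative assumptions only; they are not drawn from any system described in this talk, where such scores would come from affective content analysis and affect sensing respectively.

```python
from dataclasses import dataclass
from math import hypot

# Illustrative sketch: affect-aware content selection in the valence-arousal
# plane. The catalogue, scores, and user state below are made-up values;
# a real system would estimate them from content analysis and affect sensing.

@dataclass
class Video:
    title: str
    valence: float  # -1 (negative) .. +1 (positive)
    arousal: float  # 0 (calm) .. 1 (excited)

def rank_by_affect(videos, user_valence, user_arousal):
    """Order candidate videos by Euclidean distance to the user's affective state."""
    return sorted(
        videos,
        key=lambda v: hypot(v.valence - user_valence, v.arousal - user_arousal),
    )

catalogue = [
    Video("calm nature documentary", valence=0.4, arousal=0.2),
    Video("action movie trailer", valence=0.3, arousal=0.9),
    Video("sad drama scene", valence=-0.6, arousal=0.4),
]

# A relaxed, mildly positive user state; the closest match is printed first.
for video in rank_by_affect(catalogue, user_valence=0.5, user_arousal=0.3):
    print(video.title)
```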

Although affective computing now has its own journal, IEEE Transactions on Affective Computing, and its biennial conference, Affective Computing and Intelligent Interaction (ACII), the multimedia community does not have a strong presence in those publications. Multimedia-related affective research is published in scattered venues and lacks coherence and standardization. Unlike emotion recognition, which benefits from a large number of publicly available databases and challenges, affective video analysis lacks standard benchmarks. This is partly due to the use of copyrighted material, which prevents publishing and sharing the datasets. The other reason behind this lack of consensus is that this line of research lacks a forum of its own that brings together interested scholars and key industrial players.

In this talk, I will present the origins of the idea of using affect in content delivery systems, starting from Picard's technical report, and follow its development over the last decade to the current state of the art. The focus of the talk will be on content analysis for affective characterization rather than on affect sensing. At the end, I will give recommendations on affective corpora development and present an example of building a public affective content corpus, namely the violent scenes detection task of the MediaEval benchmarking campaign.

References:
[1] Picard, R. (1995). Affective Computing. Technical Report 321, MIT Media Laboratory, Perceptual Computing Section, 20 Ames St., Cambridge, MA 02139. Available online at: http://affect.media.mit.edu/pdfs/95.picard.pdf
[2] Hanjalic, A. and Xu, L.-Q., "User-oriented affective video content analysis," in Proc. IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL 2001), pp. 50-57, 2001. doi: 10.1109/IVL.2001.990856. Available online at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=990856&isnumber=21352
