Multimodal Sentiment Analysis of Songs Using Ensemble Classifiers
Gomez-Saavedra, Esteban
Permalink
https://hdl.handle.net/2142/79038
Description
Title
Multimodal Sentiment Analysis of Songs Using Ensemble Classifiers
Author(s)
Gomez-Saavedra, Esteban
Contributor(s)
Do, Minh N.
Issue Date
2015-05
Keyword(s)
Music Information Retrieval
Sentiment Analysis
Multimodal Classification
Classification Algorithms
Multimodal Fusion
Abstract
We consider the problem of performing sentiment analysis on songs by combining
audio and lyrics in a large and varied dataset, using the Million Song
Dataset for audio features and the MusicXMatch dataset for lyric information.
The algorithms presented in this thesis use ensemble classifiers to
fuse feature vectors drawn from different feature spaces. We find that
multimodal classification outperforms classification using audio or lyrics
alone. This thesis argues that combining signals from different feature
spaces compensates for inter-class inconsistencies and leverages
class-specific strengths. The experimental results show that multimodal
classification not only improves overall accuracy, but is also more
consistent across classes.
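The abstract describes fusing modalities whose feature vectors live in different spaces, which rules out simple feature concatenation into a single classifier without care. A common way to realize this with ensembles is late (decision-level) fusion: train one classifier per modality and average their class probabilities. The sketch below illustrates that general idea on synthetic data; the feature dimensions, classifier choice, and fusion rule are illustrative assumptions, not the thesis's exact algorithm.

```python
# Illustrative sketch of decision-level (late) fusion for two modalities.
# All data here is synthetic; real audio features would come from the
# Million Song Dataset and lyric features from the MusicXMatch dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 0 = negative, 1 = positive sentiment

# Two modalities with different (hypothetical) feature spaces:
audio = rng.normal(size=(n, 12)) + labels[:, None] * 0.8   # e.g. timbre stats
lyrics = rng.normal(size=(n, 30)) + labels[:, None] * 0.5  # e.g. word counts

# Train one classifier per modality on the first 150 songs.
audio_clf = LogisticRegression().fit(audio[:150], labels[:150])
lyric_clf = LogisticRegression().fit(lyrics[:150], labels[:150])

# Late fusion: average the per-class probabilities from both modalities,
# then predict the class with the highest fused probability.
fused = (audio_clf.predict_proba(audio[150:]) +
         lyric_clf.predict_proba(lyrics[150:])) / 2
pred = fused.argmax(axis=1)
acc = (pred == labels[150:]).mean()
print(f"fused accuracy on held-out songs: {acc:.2f}")
```

Averaging probabilities (rather than hard votes) lets a confident modality outweigh an uncertain one on a per-song basis, which is one way a fused ensemble can be more consistent across classes than either modality alone.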