Multimodal Sentiment Analysis Of Social Media Through Deep Learning Approach

Multimodal data, characterized by its inherent complexity and heterogeneity, presents computational challenges in comprehending social media content. Conventional approaches to sentiment analysis often rely on unimodal pre-trained models for feature extraction from each modality, neglecting the intrinsic connections of semantic information between modalities, as they are typically trained on unimodal data. Additionally, existing multimodal sentiment analysis methods primarily focus on acquiring image representations while disregarding the rich semantic information contained within the images. Furthermore, current methods often overlook the significance of color information, which provides valuable insights and significantly influences sentiment classification. Addressing these gaps, this thesis explores deep learning-based methods for multimodal sentiment analysis, emphasizing the semantic association between multimodal data, information interaction, and color sentiment modelling from the perspectives of the multimodal representation layer, the multimodal interaction layer, and the color information integration layer. To mitigate the overlooked semantic interrelations between modalities, the thesis introduces "Joint Representation Learning for Multimodal Sentiment Analysis" within the representation layer. This method, validated by rigorous experiments, showcases a marked improvement in accuracy, achieving 76.44% on the MVSA-Single and 72.29% on the MVSA-Multiple datasets, surpassing existing methodologies. In the multimodal interaction layer, …
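The abstract describes fusing text and image features into a joint representation before sentiment classification. As a purely illustrative sketch (not the thesis's actual model; all names, dimensions, and weights here are hypothetical), the basic idea of joining per-modality feature vectors and scoring them can be shown as:

```python
# Hypothetical sketch of a joint text+image representation for sentiment
# classification. Dimensions and weights are illustrative only.

def fuse(text_feat, image_feat):
    """Joint representation via simple concatenation of modality features."""
    return text_feat + image_feat  # list concatenation -> joint vector

def classify(joint, weights, bias):
    """Toy linear scorer over the joint representation."""
    score = sum(w * x for w, x in zip(weights, joint)) + bias
    return "positive" if score >= 0 else "negative"

# Illustrative 2-D features per modality (stand-ins for encoder outputs).
text_feat = [0.8, -0.1]
image_feat = [0.3, 0.5]
joint = fuse(text_feat, image_feat)  # 4-D joint vector
label = classify(joint, [1.0, 0.5, 0.5, 1.0], bias=-0.2)
print(label)  # -> positive
```

The thesis's contribution lies in learning such joint representations (and integrating color information) rather than simple concatenation; this sketch only fixes the vocabulary.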

Bibliographic Details
Main Author: An, Jieyu
Format: Thesis
Language:English
Published: 2024
Subjects: QA75.5-76.95 Electronic computers. Computer science
Online Access:http://eprints.usm.my/62631/
http://eprints.usm.my/62631/1/24%20Pages%20from%20AN%20JIEYU.pdf
author An, Jieyu
building USM Institutional Repository
description Multimodal data, characterized by its inherent complexity and heterogeneity, presents computational challenges in comprehending social media content. Conventional approaches to sentiment analysis often rely on unimodal pre-trained models for feature extraction from each modality, neglecting the intrinsic connections of semantic information between modalities, as they are typically trained on unimodal data. Additionally, existing multimodal sentiment analysis methods primarily focus on acquiring image representations while disregarding the rich semantic information contained within the images. Furthermore, current methods often overlook the significance of color information, which provides valuable insights and significantly influences sentiment classification. Addressing these gaps, this thesis explores deep learning-based methods for multimodal sentiment analysis, emphasizing the semantic association between multimodal data, information interaction, and color sentiment modelling from the perspectives of the multimodal representation layer, the multimodal interaction layer, and the color information integration layer. To mitigate the overlooked semantic interrelations between modalities, the thesis introduces "Joint Representation Learning for Multimodal Sentiment Analysis" within the representation layer. This method, validated by rigorous experiments, showcases a marked improvement in accuracy, achieving 76.44% on the MVSA-Single and 72.29% on the MVSA-Multiple datasets, surpassing existing methodologies. In the multimodal interaction layer, …
format Thesis
id usm-62631
institution Universiti Sains Malaysia
institution_category Local University
language English
publishDate 2024
recordtype eprints
repository_type Digital Repository
spelling usm-62631 2025-07-17T07:53:50Z http://eprints.usm.my/62631/ An, Jieyu (2024) Multimodal Sentiment Analysis Of Social Media Through Deep Learning Approach. PhD thesis, Perpustakaan Hamzah Sendut. QA75.5-76.95 Electronic computers. Computer science. 2024-06. Thesis, NonPeerReviewed, application/pdf, en. http://eprints.usm.my/62631/1/24%20Pages%20from%20AN%20JIEYU.pdf
title Multimodal Sentiment Analysis Of Social Media Through Deep Learning Approach
topic QA75.5-76.95 Electronic computers. Computer science