Sentiment analysis using Hierarchical Multimodal Fusion (HMF)
Faculty of Science, Health and Technology, Nepal Open University, Bagmati, Nepal.
Research Article
World Journal of Advanced Research and Reviews, 2022, 14(03), 296–303
Article DOI: 10.30574/wjarr.2022.14.3.0549
Publication history:
Received on 08 May 2022; revised on 12 June 2022; accepted on 14 June 2022
Abstract:
The rapid rise of platforms such as YouTube and Facebook has been driven by the spread of smartphones, tablets, and other electronic devices. Massive volumes of data are collected every second on such platforms, demanding large-scale data processing. Because these data arrive in several modalities, including text, audio, and video, multimodal sentiment classification and affective computing are among the most actively researched fields today. Companies are striving to exploit this information by building automated systems for a variety of purposes, such as collecting customer feedback from user reviews, where the underlying challenge is to mine user sentiment connected to a specific product or service. Solving such a complex problem at this scale requires efficient and effective sentiment analysis tools. This study investigates sentiment analysis of videos, with data available in three modalities: audio, video, and text. Fusing these modalities remains a major open problem. The study introduces a speaker-independent approach that uses deep learning to fuse the modalities in a hierarchical fashion, aiming to improve over simple concatenation-based fusion.
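To make the contrast with concatenation-based fusion concrete, the sketch below illustrates one plausible reading of hierarchical fusion, consistent with the abstract and the "Bimodal" keyword: the three modality pairs are fused first, and the resulting bimodal representations are then fused into a trimodal one. This is a minimal illustrative example, not the paper's exact architecture; the use of PyTorch, the feature dimensions, layer sizes, and activations are all assumptions.

# Minimal sketch of hierarchical (bimodal-then-trimodal) fusion.
# All dimensions and layer choices are hypothetical, for illustration only.
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    """Fuse text, audio, and video utterance features in two stages:
    bimodal pairs first, then a trimodal combination."""

    def __init__(self, d_text=100, d_audio=73, d_video=100, d_fused=64, n_classes=2):
        super().__init__()
        # Stage 1: one small network per modality pair (bimodal fusion).
        self.fuse_ta = nn.Sequential(nn.Linear(d_text + d_audio, d_fused), nn.ReLU())
        self.fuse_tv = nn.Sequential(nn.Linear(d_text + d_video, d_fused), nn.ReLU())
        self.fuse_av = nn.Sequential(nn.Linear(d_audio + d_video, d_fused), nn.ReLU())
        # Stage 2: combine the three bimodal representations (trimodal fusion).
        self.fuse_all = nn.Sequential(nn.Linear(3 * d_fused, d_fused), nn.ReLU())
        self.classifier = nn.Linear(d_fused, n_classes)

    def forward(self, text, audio, video):
        ta = self.fuse_ta(torch.cat([text, audio], dim=-1))
        tv = self.fuse_tv(torch.cat([text, video], dim=-1))
        av = self.fuse_av(torch.cat([audio, video], dim=-1))
        fused = self.fuse_all(torch.cat([ta, tv, av], dim=-1))
        return self.classifier(fused)

# Usage on random feature vectors (a batch of 8 utterances).
model = HierarchicalFusion()
logits = model(torch.randn(8, 100), torch.randn(8, 73), torch.randn(8, 100))
print(logits.shape)  # torch.Size([8, 2])

A simple concatenation baseline would instead feed torch.cat([text, audio, video], dim=-1) directly into a single classifier; the hierarchical variant lets each pair of modalities interact before the final combination.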
Keywords:
Multi-modal; Bimodal; Sentiment analysis; Hierarchical fusion; Emotion
Copyright information:
Copyright © 2022 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0