Published in:

Volume 9, Issue 8, August 2022
eISSN: 2349-5162

Title

Text-Audio Sentiment Analysis Using Cross-Modal BERT

Authors

Abstract

Multimodal sentiment analysis is a relatively new field of research that aims to enable machines to perceive, analyse, and express emotion. Through cross-modal interaction, we can learn more in-depth information about the speaker's emotional state. Bidirectional Encoder Representations from Transformers (BERT) is a powerful pre-trained language representation model; fine-tuning it has produced new state-of-the-art results on eleven natural language processing tasks, including question answering and natural language inference. Although most earlier studies that improved BERT used only text data, it is still worthwhile to investigate how to learn better representations by incorporating multimodal data. In this paper we propose Cross-Modal BERT (CM-BERT), which uses the interaction between the text and audio modalities to fine-tune the pre-trained BERT model. Masked multimodal attention, the core component of CM-BERT, combines the information extracted from the text and audio modalities to dynamically adjust the weight of words. We evaluate our method on the public multimodal sentiment analysis datasets CMU-MOSI and CMU-MOSEI. The experimental results show that it significantly outperforms previous baselines and text-only fine-tuning of BERT on all evaluation metrics. In addition, we analyse the masked multimodal attention and show that, by using information from the audio modality, it can reasonably adjust the weight of words.
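
To make the idea of masked multimodal attention more concrete, the following is a minimal, illustrative PyTorch sketch. It is not the authors' implementation; the class name, feature dimensions, and fusion parameters are assumptions. It only shows how word-aligned text and audio features could each produce an attention matrix, be fused with learnable weights, masked against padding, and then used to re-weight the word-level text representations.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedMultimodalAttention(nn.Module):
    """Illustrative sketch: fuses text and audio attention to re-weight word features."""

    def __init__(self, text_dim=768, audio_dim=33):
        super().__init__()
        # Project both modalities into a shared space before computing attention.
        self.text_proj = nn.Linear(text_dim, text_dim)
        self.audio_proj = nn.Linear(audio_dim, text_dim)
        # Learnable scalars balancing the contribution of each modality (assumed values).
        self.w_text = nn.Parameter(torch.tensor(0.5))
        self.w_audio = nn.Parameter(torch.tensor(0.5))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, text_feats, audio_feats, attention_mask):
        # text_feats:     (batch, seq_len, text_dim)  word-level BERT outputs
        # audio_feats:    (batch, seq_len, audio_dim) word-aligned acoustic features
        # attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
        t = self.text_proj(text_feats)
        a = self.audio_proj(audio_feats)
        scale = t.size(-1) ** 0.5

        # Modality-specific attention matrices of shape (batch, seq_len, seq_len).
        attn_text = torch.matmul(t, t.transpose(1, 2)) / scale
        attn_audio = torch.matmul(a, a.transpose(1, 2)) / scale

        # Weighted fusion of the two attention matrices.
        fused = self.w_text * attn_text + self.w_audio * attn_audio + self.bias

        # Mask padding positions so they receive (near-)zero attention weight.
        mask = attention_mask.unsqueeze(1)           # (batch, 1, seq_len)
        fused = fused.masked_fill(mask == 0, -1e9)
        weights = F.softmax(fused, dim=-1)

        # Dynamically re-weighted word representations, with a residual connection.
        return text_feats + torch.matmul(weights, text_feats)

In a complete model, the re-weighted token representations would typically be pooled (for example, via the first-token embedding) and passed through a small linear head to predict the sentiment score; the exact dimensions and fusion weights above are placeholders, not values from the paper.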

Key Words

Natural Language Processing, BERT, Cross-Modal BERT (CM-BERT)

Cite This Article

"Text-Audio Sentiment Analysis Using Cross-Modal BERT", International Journal of Emerging Technologies and Innovative Research (www.jetir.org), ISSN:2349-5162, Vol.9, Issue 8, page no.b163-b168, August-2022, Available :http://www.jetir.org/papers/JETIR2208124.pdf

Publication Details

Published Paper ID: JETIR2208124
Registration ID: 500913
Published In: Volume 9 | Issue 8 | Year August-2022
DOI (Digital Object Identifier):
Page No: b163-b168
Country: Noida, Uttar Pradesh, India
Area: Engineering
ISSN Number: 2349-5162
Publisher: IJ Publication

