AI-powered sentiment analysis for classifying harmful content on social media: A case study with ChatGPT Integration

OLADAYO O. AMUSAN 1, * and AMARACHI M. UDEFI 2

1 Department of Big Data Science and Technology, University of Bradford, England, United Kingdom.
2 Department of Computer Engineering Technology, Grundtvig Polytechnic Oba, Anambra State, Nigeria.
 
Research Article
World Journal of Advanced Research and Reviews, 2024, 24(03), 924–939
Article DOI: 10.30574/wjarr.2024.24.3.3710
 
Publication history: 
Received on 28 October 2024; revised on 04 December 2024; accepted on 07 December 2024
 
Abstract: 
Social media platforms have become essential for communication but have also created spaces where harmful content, including cyberbullying, racism, and other abusive behaviors, thrives. This study employs AI-driven sentiment analysis to classify social media posts into three categories: Abusive, Neutral, and Harmless. A dataset of Twitter posts sourced from Kaggle was preprocessed with noise removal, tokenization, and normalization to prepare it for analysis. The Sentiment Analysis Model (ChatGPT Integration) was then used for classification, leveraging its contextual language understanding to analyze linguistic patterns. Confusion-matrix analysis validated the model's performance, which achieved 96% accuracy, 90% sensitivity, and 88% precision, demonstrating its reliability in identifying harmful content. These findings highlight the model's potential as a scalable solution for mitigating online abuse. Future work will address class imbalance, integrate multilingual datasets, and implement real-time monitoring to broaden its usability and impact.
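To make the pipeline summarized above concrete, the following is a minimal Python sketch of the preprocessing stage (noise removal, tokenization, normalization) applied to a tweet before classification. The article does not specify its implementation; the regular expressions, the removal of URLs, @mentions, and hashtag markers, and the label set below are illustrative assumptions, not the authors' code.

```python
import re

# The three target categories named in the study
LABELS = ("Abusive", "Neutral", "Harmless")

def preprocess(text: str) -> list[str]:
    """Prepare a raw tweet for sentiment classification.

    Steps mirror those named in the abstract: noise removal,
    normalization, and tokenization. The specific rules here
    (URL/mention stripping, lowercasing) are assumptions.
    """
    # Noise removal: drop URLs, @mentions, and the '#' of hashtags
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"@\w+", " ", text)
    text = text.replace("#", " ")
    # Normalization: lowercase and strip punctuation/digits
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    # Tokenization: split the cleaned string into word tokens
    return text.split()

# Example: a noisy tweet reduced to clean tokens
tokens = preprocess("Check this out http://t.co/x @user #Fun!!")
# tokens → ['check', 'this', 'out', 'fun']
```

The cleaned token stream (or the rejoined string) would then be passed to the ChatGPT-based classifier, which assigns one of the three labels; that call is omitted here since it depends on API access.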
 
Keywords: 
Sentiment Analysis; ChatGPT Integration; Social Media Content; Cyberbullying; Natural Language Processing (NLP); Preprocessing; Feature Extraction; Classification Model; Performance Metrics
 