An Optimized Hyperparameter Tuning for Improved Hate Speech Detection with Multilayer Perceptron
Hate speech classification is a critical task in natural language processing, aiming to mitigate the negative impact of harmful content on digital platforms. This study explores the application of a Multilayer Perceptron (MLP) model for hate speech classification, using Bag of Words (BoW) for feature extraction. The hypothesis posits that hyperparameter tuning through sophisticated optimization techniques will significantly improve model performance. To validate this hypothesis, we employed two distinct hyperparameter tuning approaches: Random Search and Optuna.
Main Authors: | Muhamad Ridwan, Ema Utami |
---|---|
Format: | Article |
Language: | English |
Published: | Ikatan Ahli Informatika Indonesia, 2024-08-01 |
Series: | Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) |
Subjects: | hate speech; multilayer perceptron; bag of words; hyperparameter tuning; random search; optuna |
Online Access: | https://jurnal.iaii.or.id/index.php/RESTI/article/view/5949 |
---|---|
author | Muhamad Ridwan; Ema Utami |
collection | DOAJ |
description | Hate speech classification is a critical task in natural language processing, aiming to mitigate the negative impact of harmful content on digital platforms. This study explores the application of a Multilayer Perceptron (MLP) model for hate speech classification, using Bag of Words (BoW) for feature extraction. The hypothesis posits that hyperparameter tuning through sophisticated optimization techniques will significantly improve model performance. To validate this hypothesis, we employed two distinct hyperparameter tuning approaches: Random Search and Optuna. Random Search provides a straightforward yet effective means of exploring the hyperparameter space, while Optuna offers a more sophisticated, optimization-based approach to hyperparameter selection. The study involved training the MLP model on a labeled dataset of 13,169 tweets crawled from the Twitter platform and annotated as hate speech or non-hate speech, followed by evaluation using standard metrics. Our experimental results demonstrate the comparative effectiveness of these two hyperparameter tuning methods. Notably, the MLP model tuned with Optuna achieved an F1-score of 81.49%, compared to 79.70% with Random Search, indicating Optuna's superior performance in optimizing the hyperparameters. These results were obtained through extensive cross-validation to ensure robustness and generalizability. The findings underscore the importance of optimized hyperparameters in developing robust hate speech classification systems. The superior performance of Optuna highlights its potential for broader application in other machine learning tasks requiring hyperparameter optimization. This improvement enables more reliable and efficient automated moderation, which is crucial for the integrity and security of digital communication platforms such as Twitter. |
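The pipeline the abstract describes (Bag-of-Words features feeding an MLP, with Random Search over hyperparameters and F1 scoring under cross-validation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the example texts, labels, and search ranges are placeholder assumptions, and scikit-learn's `RandomizedSearchCV` stands in for the Random Search step; an Optuna study over the same search space would follow the same pattern with `trial.suggest_*` calls.

```python
# Hedged sketch of a BoW + MLP hate-speech classifier tuned by random search.
# Toy data below is invented for illustration; the study used ~13,169 tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

texts = ["you are awful", "have a nice day", "I hate this group",
         "what a lovely morning", "get out of here", "thanks for sharing"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = hate speech, 0 = non-hate speech

pipeline = Pipeline([
    ("bow", CountVectorizer()),                       # Bag-of-Words features
    ("mlp", MLPClassifier(max_iter=300, random_state=0)),
])

# Hypothetical search space; the actual ranges are not given in the abstract.
param_dist = {
    "mlp__hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "mlp__alpha": [1e-4, 1e-3, 1e-2],
    "mlp__learning_rate_init": [1e-3, 1e-2],
}

# Random Search with cross-validated F1 scoring, as in the paper's evaluation.
search = RandomizedSearchCV(pipeline, param_dist, n_iter=4, cv=2,
                            scoring="f1", random_state=0)
search.fit(texts, labels)
print(search.best_params_)
```

Optuna replaces the fixed parameter grid with an objective function that samples each hyperparameter per trial, which is what allows its sampler to concentrate trials in promising regions rather than drawing uniformly.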
format | Article |
id | doaj-art-bd86a6e0727c4a1d8ff6ef3469c3048b |
institution | Kabale University |
issn | 2580-0760 |
language | English |
publishDate | 2024-08-01 |
publisher | Ikatan Ahli Informatika Indonesia |
record_format | Article |
series | Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) |
doi | 10.29207/resti.v8i4.5949 |
citation | Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi), Vol. 8 No. 4 (2024), pp. 525-534, article 5949 |
affiliation | Universitas Amikom Yogyakarta (both authors) |
title | An Optimized Hyperparameter Tuning for Improved Hate Speech Detection with Multilayer Perceptron |
topic | hate speech multilayer perceptron bag of words hyperparameter tuning random search optuna |
url | https://jurnal.iaii.or.id/index.php/RESTI/article/view/5949 |