Source Code Error Understanding Using BERT for Multi-Label Classification
Programming is an essential skill in computer science and across a wide range of engineering disciplines. However, errors, often referred to as ‘bugs’ in code, can be challenging to identify and rectify for both students learning to program and experienced professionals. Understanding, identifying, and effectively addressing these errors are critical aspects of programming education and software development. To aid in understanding and classifying these errors, we propose a multi-label error classification approach for source code using fine-tuned BERT models (BERT_Uncased and BERT_Cased). The models achieved average classification accuracies of 90.58% and 90.80%, exact match accuracies of 48.28% and 49.13%, and weighted F1 scores of 0.796 and 0.799, respectively. Precision, Recall, Hamming Loss, and ROC-AUC metrics further evaluate the effectiveness of our models. Additionally, we employed several combinations of large language models (CodeT5, CodeBERT) with machine learning classifiers (Decision Tree, Random Forest, Ensemble Learning, ML-KNN), demonstrating the superiority of our proposed approach. These findings highlight the potential of multi-label error classification to advance programming education, software engineering, and related research fields.
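The abstract reports exact match accuracy and Hamming Loss for a multi-label setup, where each code sample can carry several error labels at once and each label is decided by an independent sigmoid. As a minimal sketch of how those quantities are computed (the logits, labels, and 0.5 threshold below are illustrative stand-ins, not the paper's data):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold=0.5):
    """Turn one sample's per-label logits into a binary label vector.
    In multi-label classification each label gets an independent
    sigmoid, so a sample may receive zero, one, or several labels."""
    return [1 if sigmoid(z) >= threshold else 0 for z in logits]

def exact_match_accuracy(y_true, y_pred):
    """Fraction of samples whose entire label set is predicted exactly."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return hits / len(y_true)

def hamming_loss(y_true, y_pred):
    """Fraction of individual label decisions that are wrong."""
    n_labels = len(y_true[0])
    wrong = sum(sum(1 for ti, pi in zip(t, p) if ti != pi)
                for t, p in zip(y_true, y_pred))
    return wrong / (len(y_true) * n_labels)

# Toy example: 3 samples, 4 hypothetical error labels.
y_true = [[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]]
y_pred = [[1, 0, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]]

print(exact_match_accuracy(y_true, y_pred))  # 1 of 3 exact matches
print(hamming_loss(y_true, y_pred))          # 2 wrong of 12 decisions
```

This illustrates why exact match accuracy (≈48-49% in the paper) is much stricter than average per-label accuracy (≈90%): one wrong label out of many spoils the whole sample for exact match but barely moves the per-label rate.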
Main Authors: Md Faizul Ibne Amin, Yutaka Watanobe, Md Mostafizer Rahman, Atsushi Shirafuji
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Multi-label classification; BERT; CodeT5; CodeBERT; decision tree; random forest
Online Access: https://ieeexplore.ieee.org/document/10820190/
author | Md Faizul Ibne Amin; Yutaka Watanobe; Md Mostafizer Rahman; Atsushi Shirafuji |
collection | DOAJ |
description | Programming is an essential skill in computer science and across a wide range of engineering disciplines. However, errors, often referred to as ‘bugs’ in code, can be challenging to identify and rectify for both students learning to program and experienced professionals. Understanding, identifying, and effectively addressing these errors are critical aspects of programming education and software development. To aid in understanding and classifying these errors, we propose a multi-label error classification approach for source code using fine-tuned BERT models (BERT_Uncased and BERT_Cased). The models achieved average classification accuracies of 90.58% and 90.80%, exact match accuracies of 48.28% and 49.13%, and weighted F1 scores of 0.796 and 0.799, respectively. Precision, Recall, Hamming Loss, and ROC-AUC metrics further evaluate the effectiveness of our models. Additionally, we employed several combinations of large language models (CodeT5, CodeBERT) with machine learning classifiers (Decision Tree, Random Forest, Ensemble Learning, ML-KNN), demonstrating the superiority of our proposed approach. These findings highlight the potential of multi-label error classification to advance programming education, software engineering, and related research fields. |
format | Article |
id | doaj-art-54f2c85a379e4cad83e7c1f7031f09eb |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
doi | 10.1109/ACCESS.2024.3525061 |
citation | IEEE Access, vol. 13, pp. 3802-3822, 2025 (article 10820190) |
authors and affiliations | Md Faizul Ibne Amin (https://orcid.org/0009-0001-0722-3536), Graduate Department of Computer and Information Systems, The University of Aizu, Aizu-Wakamatsu, Fukushima, Japan; Yutaka Watanobe (https://orcid.org/0000-0002-0030-3859), Graduate Department of Computer and Information Systems, The University of Aizu, Aizu-Wakamatsu, Fukushima, Japan; Md Mostafizer Rahman (https://orcid.org/0000-0001-9368-7638), Department of Computer Science, Tulane University, New Orleans, LA, USA; Atsushi Shirafuji (https://orcid.org/0000-0001-9890-4806), Graduate Department of Computer and Information Systems, The University of Aizu, Aizu-Wakamatsu, Fukushima, Japan |
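The abstract also mentions pairing code-model embeddings (CodeT5, CodeBERT) with classical multi-label classifiers such as ML-KNN. As a hedged sketch of that second family of approaches, here is a simplified nearest-neighbour multi-label classifier in the spirit of ML-KNN, using a plain majority vote per label rather than ML-KNN's Bayesian posteriors; the 2-D "embeddings" and the two error labels are made-up stand-ins for encoder outputs, not the paper's data:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_multilabel_predict(train_X, train_Y, x, k=3):
    """Decide each label independently by majority vote among the
    k nearest training embeddings (a simplification of ML-KNN,
    which replaces the vote with Bayesian label posteriors)."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: euclidean(train_X[i], x))[:k]
    n_labels = len(train_Y[0])
    pred = []
    for j in range(n_labels):
        votes = sum(train_Y[i][j] for i in nearest)
        pred.append(1 if votes * 2 > k else 0)
    return pred

# Toy 2-D "embeddings" with two hypothetical labels
# (say, syntax-error and type-error).
train_X = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.0, 0.8), (0.5, 0.5)]
train_Y = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]

print(knn_multilabel_predict(train_X, train_Y, (0.05, 0.1)))  # -> [1, 0]
```

The design point this illustrates: once a frozen encoder maps source code to fixed-size vectors, any off-the-shelf multi-label classifier can be layered on top, which is why the paper can compare several such combinations against end-to-end fine-tuned BERT.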
title | Source Code Error Understanding Using BERT for Multi-Label Classification |
topic | Multi-label classification; BERT; CodeT5; CodeBERT; decision tree; random forest |
url | https://ieeexplore.ieee.org/document/10820190/ |