Employing large language models to enhance K-12 students’ programming debugging skills, computational thinking, and self-efficacy


Bibliographic Details
Main Authors: Shu-Jie Chen, Xiaofen Shan, Ze-Min Liu, Chuang-Qi Chen
Format: Article
Language: English
Published: International Forum of Educational Technology & Society 2025-04-01
Series: Educational Technology & Society
Subjects:
Online Access: https://www.j-ets.net/collection/published-issues/28_2#h.njwqi1ffqtu2
Description
Summary: The introduction of programming education in K-12 schools to promote computational thinking has attracted a great deal of attention from scholars and educators. Debugging code is a central skill for students, but it is also a considerable challenge when learning to program. Learners at the K-12 level often lack confidence in debugging because of insufficient learning feedback and weak programming fundamentals (e.g., correct syntax usage). With the development of technology, large language models (LLMs) offer new opportunities for training novice programmers in debugging. We proposed a method for incorporating an LLM into programming debugging training; to test its effectiveness, 80 K-12 students participated in a two-group quasi-experiment. The results showed that, through dialogic interaction with the model, students solved programming problems more effectively and improved their ability to solve problems in real-world applications. Importantly, this dialogic interaction increased students' confidence in their programming abilities, helping them maintain motivation for learning to program.
ISSN:1176-3647
1436-4522