Enhancing Data Science Education Through AI-Driven Feedback
- This thesis investigates the potential of large language models (LLMs) to provide personalized, context-aware feedback in data science education. Traditional automated feedback systems often face challenges related to adaptiveness, scalability, and pedagogical alignment. To address these limitations, an experimental study was conducted using a custom-built AI tutor based on GPT-4o, which guided students through six clustering assignments designed around k-means and DBSCAN concepts. Data were collected from pre- and post-experiment questionnaires and 516 dialogue exchanges recorded across ten individual tutoring sessions. A mixed-methods approach was adopted: quantitative analysis compared pre- and post-survey results to measure normalized learning gain (g = 0.375), effect size (Cohen’s d = 0.321), and statistical significance (t(9) = 0.811, p > 0.05), while qualitative analysis involved manual coding of AI responses for feedback type, adaptiveness, and student engagement. Results showed that students generally perceived the AI tutor positively, emphasizing its clear explanations, step-by-step guidance, and timely feedback. While moderate conceptual improvement was observed, statistical effects remained small, suggesting that perceived learning gains may exceed measured performance improvements. Conversational analysis revealed that adaptive responses and interactive questioning supported engagement, though occasional inconsistencies and reliance on predefined solutions limited deeper adaptiveness. The study contributes to educational technology research by providing empirical insight into both the capabilities and current constraints of LLM-based tutoring. Although student satisfaction was high, the findings highlight the need for more sophisticated scaffolding, enhanced contextual adaptiveness, and hybrid human-AI feedback frameworks. Overall, this research demonstrates the promise of LLMs in delivering scalable, personalized support in data science education, while emphasizing the importance of continued evaluation to ensure pedagogical reliability and meaningful learning outcomes.
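For reference, the statistics quoted above follow standard definitions. The sketch below shows how they are conventionally computed in Python, assuming percentage-scale survey scores and the paired-samples variant of Cohen’s d (the abstract does not state which variant the thesis used); the score arrays are illustrative placeholders, not the study’s data.

```python
# A minimal sketch of the abstract's statistics: Hake's normalized learning
# gain, paired-samples Cohen's d, and a paired t-test. The arrays below are
# illustrative placeholders, NOT the study's data; the study had n = 10
# participants, hence degrees of freedom df = 9, i.e. t(9).
import numpy as np
from scipy import stats

pre = np.array([55, 60, 48, 70, 62, 58, 65, 50, 72, 61], dtype=float)   # illustrative pre-survey scores (0-100)
post = np.array([68, 66, 55, 78, 70, 64, 75, 58, 80, 70], dtype=float)  # illustrative post-survey scores (0-100)
MAX_SCORE = 100.0  # assumes scores are on a 0-100 percentage scale

# Hake's normalized gain: average gain divided by the maximum possible gain.
g = (post.mean() - pre.mean()) / (MAX_SCORE - pre.mean())

# Paired-samples Cohen's d: mean of the differences over their standard deviation.
diff = post - pre
d = diff.mean() / diff.std(ddof=1)

# Paired (dependent-samples) t-test; with 10 participants this yields t(9).
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"g = {g:.3f}, d = {d:.3f}, t(9) = {t_stat:.3f}, p = {p_value:.3f}")
```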

