Unleashing Personalized Education Using Large Language Models in Online Collaborative Settings
(2024)
The Artificial Intelligence community has long pursued personalized education. Over the past decades, efforts have ranged from automated advisors to Intelligent Tutoring Systems, all aimed at tailoring learning experiences to students' individual needs and interests. Unfortunately, many of these endeavors remained largely theoretical or proposed solutions that are challenging to implement in real-world scenarios. However, we are now in the era of Large Language Models (LLMs) such as ChatGPT, Mistral, or Claude, which exhibit promising capabilities with significant potential to impact personalized education. For instance, ChatGPT 4 can assist students in applying the Socratic method to their learning process. Despite the immense possibilities these technologies offer, few significant results showcase the impact of LLMs in educational settings. Therefore, this paper presents tools and strategies based on LLMs to address personalized education within online collaborative learning settings. To do so, we propose Retrieval-Augmented Generation (RAG) agents that could be added to online collaborative learning platforms: a) the Oracle agent, capable of answering questions related to topics and materials uploaded to the platform; b) the Summary agent, which summarizes and presents content based on students' profiles; c) the Socratic agent, which guides students through learning topics in close interaction; d) the Forum agent, which analyzes students' forum posts to identify challenging topics and suggest ways to overcome difficulties or foster peer collaboration; e) the Assessment agent, which presents personalized challenges based on students' needs; and f) the Proactive agent, which analyzes student activity and suggests learning paths as needed. Importantly, each RAG agent can leverage historical student data to personalize the learning experience effectively. To assess the effectiveness of this personalized approach, we plan to compare the use of RAG agents in online collaborative learning platforms against online learning courses conducted in previous years.
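
The abstract names six agents but does not specify an implementation. As an illustration only, the core retrieval-augmented loop shared by such agents (here in the style of the Oracle agent) might look as follows; the embedding model, the in-memory cosine-similarity index, and the placeholder llm_complete function are assumptions, not details from the paper.

    # Illustrative sketch of an "Oracle"-style RAG agent (not the paper's code).
    # Assumptions: sentence-transformers for embeddings, an in-memory index,
    # and a hypothetical llm_complete() standing in for any chat-model API.
    from typing import List, Tuple
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def build_index(documents: List[str]) -> Tuple[List[str], np.ndarray]:
        """Embed all course materials once; returns texts and unit-length vectors."""
        vecs = model.encode(documents, normalize_embeddings=True)
        return documents, np.asarray(vecs)

    def retrieve(query: str, docs: List[str], vecs: np.ndarray, k: int = 3) -> List[str]:
        """Return the k documents closest to the query by cosine similarity."""
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = vecs @ q  # dot product equals cosine, since vectors are normalized
        return [docs[i] for i in np.argsort(-scores)[:k]]

    def llm_complete(prompt: str) -> str:
        # Hypothetical placeholder: swap in ChatGPT, Mistral, Claude, etc.
        return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

    def oracle_agent(question: str, docs: List[str], vecs: np.ndarray) -> str:
        """Answer a student question using only retrieved course material as context."""
        context = "\n---\n".join(retrieve(question, docs, vecs))
        prompt = (
            "Answer the student's question using only the course material below.\n"
            f"Material:\n{context}\n\nQuestion: {question}"
        )
        return llm_complete(prompt)

Under this reading, the Summary, Socratic, Forum, Assessment, and Proactive agents would differ mainly in what is retrieved (materials, forum posts, per-student history) and in the prompt, rather than in this core loop.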
Competency-oriented exams offer a wide range of advantages, especially where the use and mastery of third-party applications and tools play an important role. We therefore developed a competency-oriented setup for both our programming classes and exams, ensuring their constructive alignment. Exams were moved to the computer lab and designed to test both conceptual skills and the use of state-of-the-art programming tools. At the peak of the COVID-19 pandemic, when exams had to be moved from the lab to an online format, we needed to design an online setup for our practical programming exams that preserved the competency-oriented approach and its constructive alignment as well as the validity, reliability, and fairness of the exams. The key was to use the same online tools that had been introduced for running lectures and practical classes, offering almost the same learning experience as before the pandemic. However, to ensure the validity and fairness of the exams, some form of online supervision had to be implemented, as available technical solutions proved either unusable or unreliable in our case. This paper discusses the driving factors, the resulting technical and organizational setup, as well as students' feedback and lessons learned for further improvements. In the end, COVID-19 was not able to ruin our competency-oriented programming exams.
This paper presents various approaches undertaken over more than two decades of teaching undergraduate programming classes at different Higher Education Institutions in order to improve student activation and participation in class and, consequently, teaching and learning effectiveness. While new technologies and the ubiquity of smartphones and internet access have brought new tools to the classroom and opened up new didactic approaches, the lessons learned from this personal long-term study show that neither technology itself nor any single new, often hyped didactic approach ensured a sustained improvement of student activation. Rather, it requires an integrated yet open approach towards a participative learning space that is supported, but not created, by new tools, technology, and innovative teaching methods.
This paper presents a pragmatic approach for the stepwise introduction of peer-assessment elements in undergraduate programming classes and discusses lessons learned so far and directions for further work. Students are invited to challenge their peers with programming exercises of their own, which are submitted through Moodle and evaluated by fellow students according to a predefined rubric, under the supervision of teaching assistants. Preliminary results show increased student activation and motivation, leading to better performance in the final programming exams.
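
The rubric itself is not given in the abstract. As a sketch only, peer scores collected against a predefined rubric could be aggregated as below; the criteria names, weights, and the median-based outlier damping are invented for illustration and are not claims about the paper's actual setup.

    # Illustrative sketch of rubric-based peer-score aggregation.
    # Criteria and weights are invented; the paper's rubric is not specified.
    from statistics import median

    RUBRIC = {  # criterion -> weight (weights sum to 1.0)
        "correctness": 0.5,
        "readability": 0.3,
        "difficulty_of_exercise": 0.2,
    }

    def aggregate_peer_scores(reviews: list) -> float:
        """Combine several peer reviews (criterion -> 0..10 points) into one score.

        The per-criterion median dampens single outlier reviews before the
        weighted sum is taken; teaching assistants can still override the result.
        """
        per_criterion = {c: median(r[c] for r in reviews) for c in RUBRIC}
        return sum(RUBRIC[c] * per_criterion[c] for c in RUBRIC)

    reviews = [
        {"correctness": 8, "readability": 7, "difficulty_of_exercise": 6},
        {"correctness": 9, "readability": 6, "difficulty_of_exercise": 7},
        {"correctness": 4, "readability": 7, "difficulty_of_exercise": 6},  # outlier
    ]
    print(f"aggregated score: {aggregate_peer_scores(reviews):.1f} / 10")  # 7.3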

