AI Chatbots in Higher Education: Personalized Motivation, Reflection and Learning

Discover how AI is transforming university life with intelligent chatbots! These advanced systems collaborate in the background, remembering student interactions, balancing schedules, and integrating new learning materials. Leveraging established theories such as Bloom’s Two Sigma, Flow, and Self-Determination Theory (SDT), they craft personalized learning plans tailored to each student’s needs while staying aligned with official handbooks. Already in use and making an impact, this technology is part of the SMARTA project (Student Motivation and Reflective Training AI-Assistants) and is integrated into the TraiNex campus management system.

Two Sigma Challenge

Dive into the future of higher education with SMARTA (Student Motivation and Reflective Training AI-Assistants). This project introduces AI chatbots as personal study coaches, aiming to bridge the educational gap known as the Two Sigma Problem: Bloom’s finding that students tutored one-on-one performed about two standard deviations better than students taught in conventional classrooms. SMARTA offers a trio of specialized chatbots designed to enhance student motivation, deepen engagement, and personalize the learning journey for over 5,000 students at a German university. From empathetic support to self-directed learning and interactive dialogue, these chatbots mark a significant step towards mimicking the personalized touch of one-on-one tutoring. Discover how SMARTA leverages AI to turn the Two Sigma challenge into an opportunity for students and educators alike, and join us in exploring the intersection of technology and education, where personalization meets excellence.
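To make the size of that gap concrete, here is a quick back-of-the-envelope check. Assuming roughly normally distributed test scores, an effect of two standard deviations means the average tutored student outperforms about 98% of classroom-taught peers:

```python
from scipy.stats import norm

# Bloom's Two Sigma finding: the average one-on-one tutored student
# scored about two standard deviations above the classroom average.
effect_size_sigma = 2.0

# Under an approximately normal score distribution, this is the share
# of classroom students the average tutored student outperforms:
percentile = norm.cdf(effect_size_sigma)
print(f"Average tutored student outperforms ~{percentile:.1%} of classroom students")
# -> Average tutored student outperforms ~97.7% of classroom students
```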

MUSE (Motivation-Understanding-Schema for Effectiveness)

The chatbot ALIX’s job is to encourage students. As part of the SMARTA project, we’re looking to see if students come to the chat feeling motivated or not. We’re also trying to find out if chatting with ALIX changes their motivation levels. For example, a student might start the chat in a bad mood and not feel better after talking to ALIX. We call this a ‘Stagnation Spiral.’ Ideally, we want ALIX to help students who are feeling down become more motivated, which we call a ‘Motivation Reboot.’

Currently, we can measure the mood/sentiment of each chat and how it changes over the course of the conversation. We don’t know the users’ names, but we can analyze and record the chat’s sentiment. This lets us track how often a pattern like a ‘Euphoria Decline’ occurs. Right now, the percentages in our matrix are rough estimates; we’ll publish exact figures in spring 2024.
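As a rough sketch of how such transitions can be bucketed, the Python snippet below maps a chat’s start and end sentiment onto the MUSE cells named above. The sentiment scale, the low/high threshold, and the label for the fourth (high-to-high) cell are our illustrative assumptions here, not the calibration actually used in SMARTA:

```python
def classify_muse_transition(start_sentiment: float, end_sentiment: float,
                             threshold: float = 0.0) -> str:
    """Bucket a chat's sentiment change into a MUSE matrix cell.

    Sentiment scores are assumed to lie in [-1, 1]; the threshold
    separating 'low' from 'high' mood is a placeholder value.
    """
    start_low = start_sentiment < threshold
    end_low = end_sentiment < threshold

    if start_low and end_low:
        return "Stagnation Spiral"      # arrives down, leaves down
    if start_low and not end_low:
        return "Motivation Reboot"      # arrives down, leaves motivated
    if not start_low and end_low:
        return "Euphoria Decline"       # arrives up, leaves down
    return "Sustained Motivation"       # hypothetical label for up -> up

# Example: a chat that opens negative and closes positive
print(classify_muse_transition(-0.6, 0.4))  # -> Motivation Reboot
```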

Chatbots and Legal Compliance

What legal risks do universities face with chatbots? Is there a risk of a chatbot’s wrong answer leading to legal issues? Absolutely! In a 2022 experiment with thousands of students, we observed that roughly 10% of chatbot responses could have posed legal problems for the university. For example, if a student asks, ‘I am ill. Do I need to send a doctor’s certificate?’ and the official chatbot incorrectly responds, ‘No, just stay at home and get well,’ the university could face legal issues if the student is later de-registered for not submitting the certificate.

Are scripted chatbots a solution? Pro: scripted chatbots have a low probability of giving wrong answers. They always follow set rules and typically draw on a limited database, making them generally safe. (This is Position 1 in our matrix.) Con: interacting with scripted chatbots can be unsatisfying because they struggle to grasp the user’s intent or to respond to unforeseen questions, as the sketch below illustrates. In SMARTA, we chose not to pursue improving scripted bots (Development-path A).
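A minimal rule-based bot makes both sides of this trade-off visible: every reply comes from a fixed, pre-approved table, and anything the rules don’t cover falls through to a generic fallback. The patterns and answers below are purely illustrative, not SMARTA’s actual rule set:

```python
import re

# A minimal scripted bot: a fixed rule table, no free text generation.
# Patterns and answers are illustrative placeholders.
RULES = [
    (re.compile(r"\b(ill|sick|doctor)\b", re.I),
     "If you are ill, please submit a doctor's certificate to the "
     "examination office."),
    (re.compile(r"\b(exam|examination) date", re.I),
     "Examination dates are listed in your TraiNex schedule."),
]

def scripted_reply(question: str) -> str:
    for pattern, answer in RULES:
        if pattern.search(question):
            return answer  # safe: only pre-approved wording leaves the bot
    # Brittleness: any unforeseen question falls through to this fallback.
    return "Sorry, I cannot answer that. Please contact the student office."

print(scripted_reply("I am ill. Do I need to send a doctor's certificate?"))
```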

In Position 9 of the matrix is an AI chatbot like ChatGPT. These chatbots understand user questions and respond in fluent sentences, offering a high level of user satisfaction. However, the risk of legal issues is greater because of the data they draw on. For instance, the chatbot might not know a specific university’s examination regulations and might instead reference another university’s regulations from its general training data. We therefore need to both limit (e.g., ‘do not hallucinate faculty staff’) and expand (e.g., ‘use these specific examination regulations’) the data used by the chatbot; this is Development-path B. In SMARTA, we are pursuing this path to control the input, processing, and output of the chatbot.
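As a rough illustration of Development-path B, the sketch below grounds a language model in a supplied regulations excerpt (the “expand” step) while forbidding answers from outside it (the “limit” step). It assumes an OpenAI-style chat API; the model name, the prompt wording, and the quoted regulation are placeholders, not SMARTA’s actual implementation:

```python
from openai import OpenAI  # assumes the openai Python package (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "Expand": inject the university's own examination regulations,
# e.g. retrieved from a document store. Placeholder text here.
regulations_excerpt = (
    "§ 12 (3): Students unable to attend an examination due to illness "
    "must submit a medical certificate without delay."
)

# "Limit": system instructions that forbid answers outside the
# supplied sources. The wording is illustrative.
system_prompt = (
    "You are a university assistant. Answer ONLY from the regulations "
    "provided below. Never invent faculty staff, rules, or deadlines. "
    "If the answer is not covered, refer the student to the exam office.\n\n"
    f"Regulations:\n{regulations_excerpt}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user",
         "content": "I am ill. Do I need to send a doctor's certificate?"},
    ],
)
print(response.choices[0].message.content)
```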

Full German text: "Empirische Studie zu hochschulischen Chatbot-Einsatzmöglichkeiten" ("Empirical Study on Chatbot Use Cases in Higher Education"): https://www.researchgate.net/publication/368328066_Empirische_Studie_zu_hochschulischen_Chatbot-Einsatzmoglichkeiten