Chatbots and Legal Compliance

What legal risks do universities face with chatbots? Is there a risk of a chatbot’s wrong answer leading to legal issues? Absolutely! In a 2022 experiment with thousands of students, we observed that roughly 10% of chatbot responses could have posed legal problems for the university. For example, if a student asks, ‘I am ill. Do I need to send a doctor’s certificate?’ and the official chatbot incorrectly responds, ‘No, just stay at home and get well,’ the university could face legal issues if the student is later de-registered for not submitting the certificate.

Are scripted chatbots a solution? Pro: Scripted chatbots have a low probability of giving wrong answers. They always follow set rules and typically use a limited database, making them generally safe. (This is Position 1 in our matrix.) However, interacting with scripted chatbots can be unsatisfying because they struggle to understand the user’s intent or to respond to unforeseen questions. In SMARTA, we chose not to pursue the improvement of scripted bots (Development-path A).

Position 9 of the matrix is occupied by AI chatbots like ChatGPT. These chatbots understand user questions and respond in fluent sentences, offering a high level of user satisfaction. However, the risk of legal issues is greater due to the data they use. For instance, the chatbot might not know a specific university’s examination regulations but might reference another university’s regulations from its general training data. Therefore, we need to both limit (e.g., ‘do not hallucinate faculty staff’) and expand (e.g., ‘use these specific examination regulations’) the data used by the chatbot, which is Development-path B. At SMARTA, we are pursuing this path to control the input, processing, and output of the chatbot.
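To illustrate the idea behind Development-path B, here is a minimal sketch of how limiting and expanding a chatbot’s input might look in code. All names and the regulation text below are hypothetical illustrations, not the actual SMARTA implementation: restrictive rules limit what the model may say, while an official regulation excerpt expands its knowledge for the specific question.

```python
# Hypothetical sketch of Development-path B: controlling chatbot input by
# combining restrictive rules (limit) with official regulations (expand).

# Expanded data: excerpts from the university's own regulations.
REGULATIONS = {
    "sick_note": (
        "Students who miss an exam due to illness must submit a doctor's "
        "certificate to the examination office within three working days."
    ),
}

# Limiting rules: the model must stay inside the provided excerpts.
SYSTEM_RULES = (
    "Answer ONLY on the basis of the provided regulation excerpts. "
    "If the excerpts do not cover the question, say you do not know and "
    "refer the student to the examination office. "
    "Never invent faculty staff, rules, or deadlines."
)

def build_prompt(question: str, topic: str) -> str:
    """Assemble the controlled prompt passed to the underlying language model."""
    excerpt = REGULATIONS.get(topic, "")
    return (
        f"{SYSTEM_RULES}\n\n"
        f"Regulation excerpt:\n{excerpt}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("I am ill. Do I need to send a doctor's certificate?", "sick_note")
print(prompt)
```

In a real deployment, `build_prompt` would feed a language model API, and the output would additionally be checked before being shown to the student; this sketch only shows the input-control step.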

START: AI in University and Campus Management

Exciting times, aren’t they?

We are happy to launch this small site. “We” is the small AI team of Trainings-Online GmbH. We believe in the power of AI.

KI-Campus.eu wants to report on selected AI activities of the AI team of Trainings-Online GmbH.

We are not an educational portal and not a governmental institution.

Our focus is on selected opportunities and risks of AI for the private higher education sector in Germany.

Our special interest lies in the controllable and legally secure integration of AI through interfaces into campus management systems such as TraiNex.

Singularity – General Artificial Intelligence

Berlin, 2020. Alongside an interview with Howard Rheingold, we explored the main theme ‘The End of Utopia?’. This included a discussion on ‘Singularity: Point-of-no-Return to Utopia or Dystopia?’. Here’s a summary:

Today’s Artificial Intelligence (AI) is a technically limited assistant, often not much smarter than a person with natural stupidity. However, through machine learning and exponentially accelerating progress, these weak assistants will soon become expert geniuses. The idea of singularity is that there will be a moment when machines start improving themselves autonomously and relentlessly, leading from expert geniuses to a universal machine genius. It’s the point where humans recognize this universal AI as having strong intelligence, growing exponentially and quickly surpassing human intelligence by a wide margin. It’s the turning point from which there is no going back to a world without this AI. If this universal AI is given decision-making and action-taking abilities, society will change into either a positively utopian or a negatively dystopian society, depending on the moral stance the AI adopts. […]

The full text (in German) can be found here:
Singularität: Point-of-no-Return zur Utopie oder Dystopie? (researchgate.net)