Official Seal for Research Excellence and Innovation Competence 🏅🤖✨

Proud to share that our AI activities have been recognized with the official Seal of Innovation Competence. The BSFZ seal is awarded on behalf of the Federal Ministry of Research, Technology and Space by the Certification Office for Research Allowance (BSFZ).

This recognition confirms that our AI projects are not only innovative and future-oriented, but also comply with the high standards set by the research funding program. It is an official acknowledgment of our research excellence and our contributions to technological progress in Germany.

A key aspect honored by the seal is our approach of “combining generative AI with higher education context data to create a new functional level of personalized educational assistance.” While some of this remains a vision under development, it is already becoming tangible through our projects. In other words: we connect state-of-the-art AI technologies with the real needs of study and campus management — today and tomorrow.

For us, the seal is both recognition and motivation to continue: bringing AI into higher education in a practical, responsible, and impactful way.

SMARTA Explained: How Chatbots Like Alix, Robyn & Dr. Melly Revolutionize Studying with AI

Discover how the SMARTA project is reshaping higher education through AI-powered study coaching. Based on the academic article “SMARTA – Chatbots as Individual Study Coaches for Tackling the Two Sigma Problem,” this animated explainer introduces three intelligent chatbots — Alix, Robyn, and Dr. Melly — and how they support students in motivation, reflection, and learning. These AI companions bring the benefits of one-on-one tutoring to everyday study life — scalable, personal, and always available. Dive into the future of education and see how the SMARTA approach bridges the gap between psychological insight and AI innovation.


Trustworthy AI Architecture for Academia: Inside the SMARTA Framework

The EU’s “Trustworthy AI” requirements emphasize high-quality data, legal compliance, robust control, and user well-being. Integrating AI chat assistants into academia must meet these principles from the start. This explainer breaks down the architecture proposed in “Architecture for AI-Chat-Assistants in Academia – Trustworthy Through an Algorithmic Framework.” It presents a system where algorithmic and generative intelligence work hand in hand — ensuring both data trustworthiness and conversational quality. At the heart of this architecture lies the operation control component, responsible for orchestrating the system: ensuring high-quality data retrieval (via the quality data retriever and document vector fetcher), enforcing compliance (via guardians), and refining responses (through the response data splitter). Generative AI is only used where truly needed — for creative tasks — while the core logic remains rule-based and transparent. The result: a robust, rule-based Retrieval-Augmented Generation (rRAG) framework that powers the SMARTA project and actively supports students in German higher education today.

Keywords: AI, chatbots, assistants, architecture, trust, AI act, algorithm, academia, algorithmic prompt composer, qualified data retriever, SMARTA, trust-driven AI, higher education.
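To make the orchestration flow above easier to picture, here is a minimal Python sketch of an rRAG-style pipeline. The component names (operation control, quality data retriever, document vector fetcher, guardian, response data splitter) follow the description; the function signatures, in-memory stand-ins, quality threshold, and placeholder rules are illustrative assumptions, not the SMARTA implementation.

```python
# Minimal sketch of an rRAG-style flow: rule-based retrieval, filtering and
# compliance checks, with a generative model used only for the final formulation.
# All data, thresholds, and rules below are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class Document:
    text: str
    quality_score: float  # assumed metadata attached during data curation


def document_vector_fetcher(query: str) -> list[Document]:
    """Stand-in for a vector search over curated university documents."""
    corpus = [
        Document("Exam registration closes two weeks before the exam date.", 0.95),
        Document("Unverified forum post about exam dates.", 0.40),
    ]
    return corpus  # a real fetcher would rank candidates by similarity to the query


def quality_data_retriever(query: str, min_quality: float = 0.8) -> list[Document]:
    """Keep only documents that meet the assumed quality threshold."""
    return [d for d in document_vector_fetcher(query) if d.quality_score >= min_quality]


def guardian(doc: Document) -> bool:
    """Rule-based compliance check, e.g. blocking personal data (placeholder rule)."""
    return "matriculation number" not in doc.text.lower()


def generative_formulation(query: str, documents: list[Document]) -> str:
    """The only step delegated to a generative model; stubbed here as a template."""
    facts = " ".join(d.text for d in documents)
    return f"Answer to '{query}': {facts}"


def response_data_splitter(answer: str) -> list[str]:
    """Refine the raw answer into presentable parts (here: a simple sentence split)."""
    return [part.strip() for part in answer.split(".") if part.strip()]


def operation_control(query: str) -> list[str]:
    """Orchestrates the pipeline; everything before generation stays rule-based."""
    documents = [d for d in quality_data_retriever(query) if guardian(d)]
    return response_data_splitter(generative_formulation(query, documents))


if __name__ == "__main__":
    print(operation_control("When does exam registration close?"))
```

The point of this structure is that only the final formulation step touches a generative model; retrieval, quality filtering, and compliance checks remain deterministic and auditable.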


Architecture for AI-Chat-Assistants in Academia – Compliant and Trustworthy Through an Algorithmic Framework

The EU requirements for “Trustworthy AI” emphasize high-quality data, compliance with legal regulations in data processing and storage, system control, and ensuring AI serves the well-being of users.

An AI system architecture for universities should integrate these principles from the outset.

This can be achieved through an algorithmic framework that embeds trust as a fundamental design element: a ‘trust-driven architecture’.

We propose the following architecture and align our development with it, as it combines user-friendliness on the one hand with control and governance on the other.
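To illustrate what embedding trust as a design element can mean in practice, here is a minimal sketch of a deterministic prompt composer. The term “algorithmic prompt composer” appears in the keywords above; everything else (field names, the license flag, the instruction text) is a hypothetical assumption for illustration, not the published architecture.

```python
# A minimal sketch of trust embedded by construction: the prompt is assembled
# deterministically from approved sources and fixed governance instructions, so
# the generative model never sees unvetted content. Names and fields are assumptions.

from dataclasses import dataclass


@dataclass
class ApprovedSource:
    title: str
    text: str
    license_ok: bool  # assumed flag set during a data governance review


GOVERNANCE_INSTRUCTIONS = (
    "Answer only from the sources below. "
    "If the sources do not contain the answer, say so and refer to the study office."
)


def compose_prompt(question: str, sources: list[ApprovedSource]) -> str:
    """Build the prompt algorithmically; only admitted sources enter the context."""
    admitted = [s for s in sources if s.license_ok]
    context = "\n".join(f"[{s.title}] {s.text}" for s in admitted)
    return f"{GOVERNANCE_INSTRUCTIONS}\n\nSources:\n{context}\n\nQuestion: {question}"


if __name__ == "__main__":
    sources = [
        ApprovedSource("Exam regulations", "Retakes are possible twice per module.", True),
        ApprovedSource("Unvetted wiki page", "Rumoured third retake option.", False),
    ]
    print(compose_prompt("How many retakes are allowed?", sources))
```

Because the composition rules live in transparent code rather than in model behavior, governance requirements can be reviewed, tested, and enforced independently of the generative component.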