Trustworthy AI Architecture for Academia: Inside the SMARTA Framework

The EU’s “Trustworthy AI” requirements emphasize high-quality data, legal compliance, robust control, and user well-being. Any integration of AI chat assistants into academia must satisfy these principles from the start. This explainer breaks down the architecture proposed in “Architecture for AI-Chat-Assistants in Academia – Trustworthy Through an Algorithmic Framework”: a system in which algorithmic and generative intelligence work hand in hand, ensuring both data trustworthiness and conversational quality.

At the heart of this architecture lies the operation control component, which orchestrates the system: it ensures high-quality data retrieval (via the quality data retriever and document vector fetcher), enforces compliance (via guardians), and refines responses (through the response data splitter). Generative AI is invoked only where it is truly needed, namely for creative tasks, while the core logic remains rule-based and transparent. The result is a robust, rule-based Retrieval-Augmented Generation (rRAG) framework that powers the SMARTA project and actively supports students in German higher education today.

Keywords: AI, chatbots, assistants, architecture, trust, AI Act, algorithm, academia, algorithmic prompt composer, qualified data retriever, SMARTA, trust-driven AI, higher education.
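The control flow described above can be sketched as a minimal Python program. All class and method names here (`QualityDataRetriever`, `Guardian`, `OperationControl`) are illustrative assumptions, not the paper’s actual API; the point is only to show rule-based orchestration with compliance checks gating every response, and the generative step confined to one clearly marked spot.

```python
# Hypothetical sketch of the rRAG orchestration idea: rule-based
# retrieval and compliance first, generative AI only at the end.
# Names are illustrative, not taken from the SMARTA codebase.
from dataclasses import dataclass


@dataclass
class QualityDataRetriever:
    """Rule-based lookup over a curated, quality-assured corpus."""
    corpus: dict  # maps topic -> vetted text snippet

    def fetch(self, topic: str):
        return self.corpus.get(topic)  # None if no vetted source exists


@dataclass
class Guardian:
    """Compliance check applied before any response is released."""
    banned_terms: tuple = ("exam solutions",)

    def approve(self, text: str) -> bool:
        return not any(term in text.lower() for term in self.banned_terms)


@dataclass
class OperationControl:
    """Orchestrates retrieval, compliance, and (optionally) generation."""
    retriever: QualityDataRetriever
    guardian: Guardian

    def answer(self, topic: str) -> str:
        doc = self.retriever.fetch(topic)
        if doc is None:
            return "No vetted source available."
        if not self.guardian.approve(doc):
            return "Response withheld by compliance check."
        # A generative model would only be invoked here, to rephrase the
        # already-vetted content; the retrieval logic stays rule-based.
        return doc


control = OperationControl(
    retriever=QualityDataRetriever({"enrolment": "Enrolment closes on 15 March."}),
    guardian=Guardian(),
)
print(control.answer("enrolment"))  # -> Enrolment closes on 15 March.
print(control.answer("grading"))   # -> No vetted source available.
```

Because the guardian and retriever run before any generative step, every answer is traceable to a vetted source, which is what makes the framework auditable under the kind of transparency requirements the EU AI Act envisions.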
