Client Work

Case Studies

Real engagements. Tangible outcomes. AI strategy that translates into results.

Diotima — AI-Powered Formative Assessment Platform for Irish Secondary Education

Helping Diotima build a compliance-grade, curriculum-aligned AI formative assessment platform purpose-built for the Leaving Certificate — where generic AI tools fail on Irish context, and where EU AI Act obligations shaped every architectural decision from day one.

Background

Diotima set out to build an AI-powered formative assessment platform tailored to the Irish secondary school curriculum, specifically targeting Leaving Certificate alignment. The challenge was significant on two fronts.

The first was pedagogical: Ireland's curriculum is highly specific, and generic AI tools consistently fail on Irish historical context, Irish language content, and the particular assessment rubrics used by the State Examinations Commission. Off-the-shelf solutions produce plausible-sounding content that does not survive contact with an Irish classroom.

The second was regulatory. Under the EU AI Act, any AI system that analyses student responses and structures performance interpretation is classified as high-risk under Annex III, point 3(b). That classification was not incidental to the product: it shaped every architectural and governance decision. From day one, Diotima was designed to meet those obligations, not to retrofit compliance onto an existing product.

What We Did

  • Compliance-Grade Architecture from First Principles

    The most consequential early decision was to treat the EU AI Act's Annex III classification as a design brief rather than a documentation task. This meant scoping the high-risk components narrowly — the Question, Answer and Rubric Generation Engine and the AI Inference and Feedback Engine — and separating them cleanly from supporting infrastructure. Every architectural choice was oriented by that classification, which simplified governance, reduced risk propagation, and made the compliance case tractable for the team building it.

  • Curriculum Alignment and Grounded Content Generation

    Advised on how to structure content ingestion and retrieval so the platform could accurately map questions and learning objectives to specific Leaving Cert syllabus strands. The approach required custom solutions rather than off-the-shelf RAG: all AI-generated questions, answers, and rubrics are explicitly linked to curriculum topics, learning outcomes, Bloom's Taxonomy cognitive levels, and approved, licensed source-of-truth materials. This grounding simultaneously prevents hallucination, ensures curriculum fidelity, and enables clear traceability — giving teachers the visibility they need to make informed approval decisions and giving regulators a clear basis for understanding why any given item exists.

  • Human Oversight as a Structural Feature, Not a Policy Statement

    A central challenge in the product design was operationalising Article 14 (Human Oversight) in a way that was real rather than nominal. The solution was to enforce oversight through the workflow itself: teachers must approve every generated assessment item before any student sees it; AI rubric placements are always provisional; teachers can accept, edit, override, or replace any output at any stage; and only teacher-confirmed results are stored as authoritative. Students never receive AI-generated feedback that has not been reviewed by a teacher. That is a technical constraint embedded in the system, not an aspiration in a policy document.

  • Model Selection and Responsible Evaluation

    Worked through the model selection and prompting strategy for generating exam-style questions and marking rubrics — including the particular challenges of Irish history and Irish language content that generic models handle poorly. Model choice was treated as a governance decision: candidates were evaluated against structured benchmarks covering knowledge and reasoning, factuality and hallucination, instruction following, bias and fairness, and toxicity and safety. No student data is used for model training at any stage. Model versions are logged per inference to support audit and post-market monitoring.

  • Teacher Pilot Programme and Post-Market Monitoring

    Supported the design of an early pilot with Irish teachers, helping define what "good" looked like for AI-generated content in an Irish classroom context, and how to gather structured feedback to improve the system. The teacher approval and rejection workflow was designed as a continuous monitoring mechanism — every rejection reason is a signal about model performance in real educational contexts that feeds back into the risk register and model governance process. Compliance, in this design, is something the system generates evidence for as a byproduct of normal operation.

  • Content Licensing Strategy and Investor Narrative

    As the platform matured, content licensing with Irish publishers emerged as a key risk. Identified this early and advised on structuring publisher agreements that explicitly cover storage, processing, and derivative use of licensed materials within the scope of Diotima's purpose. This became part of the investor narrative: the platform's differentiation — Irish-specific, curriculum-aligned, teacher-validated, and compliance-grade — was articulated not just as a product thesis but as a regulatory moat. In regulated education markets, compliance-by-design gives a vendor access to institutions that non-compliant products simply cannot legally serve.
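To make the grounding described under "Curriculum Alignment and Grounded Content Generation" concrete, here is a minimal sketch of the kind of item metadata involved. Every name here (GeneratedItem, SourceRef, the strand and outcome codes) is illustrative, not Diotima's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

# Bloom's Taxonomy cognitive levels used to tag each generated item.
BLOOM_LEVELS = ("remember", "understand", "apply", "analyse", "evaluate", "create")

@dataclass(frozen=True)
class SourceRef:
    """A passage in a licensed source-of-truth document."""
    document_id: str
    section: str

@dataclass
class GeneratedItem:
    """An AI-generated assessment item, explicitly linked to the curriculum."""
    question: str
    syllabus_strand: str      # e.g. a Leaving Cert syllabus strand
    learning_outcome: str     # syllabus learning-outcome code
    bloom_level: str          # one of BLOOM_LEVELS
    sources: List[SourceRef] = field(default_factory=list)

    def is_grounded(self) -> bool:
        # An item is eligible for teacher review only if it names a valid
        # cognitive level and cites at least one licensed source passage.
        return self.bloom_level in BLOOM_LEVELS and bool(self.sources)
```

The point of such a structure is that an ungrounded item is unrepresentable as publishable: traceability back to the syllabus and to licensed material is a property of the data, not a convention.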
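The human-oversight guarantee described under "Human Oversight as a Structural Feature" can be enforced as a small state machine, so that no item is ever visible to a student without an explicit teacher decision. This is an illustrative sketch under assumed state names, not Diotima's implementation:

```python
from enum import Enum, auto

class ItemState(Enum):
    DRAFT = auto()           # AI-generated, provisional
    TEACHER_REVIEW = auto()  # awaiting the teacher's decision
    APPROVED = auto()        # teacher-confirmed; may be shown to students
    REJECTED = auto()        # rejection reason feeds monitoring

# The only legal transitions; there is no path to APPROVED
# that bypasses TEACHER_REVIEW.
ALLOWED = {
    ItemState.DRAFT: {ItemState.TEACHER_REVIEW},
    ItemState.TEACHER_REVIEW: {ItemState.APPROVED, ItemState.REJECTED},
    ItemState.APPROVED: set(),
    ItemState.REJECTED: set(),
}

class AssessmentItem:
    def __init__(self):
        self.state = ItemState.DRAFT

    def transition(self, new_state: ItemState) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

    def visible_to_students(self) -> bool:
        # Structural guarantee: only teacher-approved items are released.
        return self.state is ItemState.APPROVED
```

Oversight enforced this way is a property the system cannot violate, which is what distinguishes it from a policy statement.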
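The per-inference model-version logging mentioned under "Model Selection and Responsible Evaluation" might look like this minimal sketch; the function and field names are assumptions for illustration:

```python
import datetime

def log_inference(audit_log: list, model_id: str, model_version: str,
                  item_id: str) -> dict:
    """Append an audit record tying one inference to the exact model
    version that produced it, supporting audit and post-market monitoring."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "item_id": item_id,
    }
    audit_log.append(record)
    return record
```

Because every generated item carries the version that produced it, a regression in a new model version can be traced to the affected items after the fact.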
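The rejection-reason feedback loop described under "Teacher Pilot Programme and Post-Market Monitoring" could be aggregated along these lines. This is a sketch; the reason codes and the threshold are hypothetical:

```python
from collections import Counter

def flag_rejection_reasons(rejections, threshold=0.1):
    """Turn teacher rejections into monitoring signals.

    rejections: iterable of (item_id, reason_code) pairs.
    Returns {reason_code: share} for reasons whose share of all
    rejections meets `threshold` -- candidates for the risk register.
    """
    counts = Counter(reason for _, reason in rejections)
    total = sum(counts.values())
    return {r: c / total for r, c in counts.items() if c / total >= threshold}
```

A reason that suddenly dominates, say a spike in factual-error rejections after a model update, surfaces automatically rather than waiting for someone to notice.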

Outcome

The platform moved from concept to working pilot with Irish secondary school teachers. The team developed a clear product thesis distinguishing Diotima from generic AI tutoring tools — grounded in the specific regulatory, pedagogical, and curricular context of the Irish education system.

The compliance-by-design approach proved to be the most durable element of that differentiation. Most educational AI products treat regulation as friction. Diotima treats it as infrastructure. By embedding Annex III requirements into the architecture from the outset, the platform achieved something that post-hoc compliance cannot: genuine trustworthiness that institutions, teachers, and regulators can verify rather than merely assert. The curriculum alignment approach and the teacher-in-the-loop workflow together constitute a defensible technical and regulatory moat — one that becomes more valuable, not less, as the EU AI Act comes fully into force.

Ready to Navigate AI with Confidence?

Whether you're preparing for EU AI Act compliance or building your AI strategy from scratch, we're here to guide you.

Book a Consultation

View Our Services