[2602.15531] GenAI-LA: Generative AI and Learning Analytics Workshop (LAK 2026), April 27--May 1, 2026, Bergen, Norway
Summary
The article presents the GenAI-LA workshop on Generative AI and Learning Analytics, scheduled for April 27-May 1, 2026, in Bergen, Norway. It introduces EduEVAL-DB, a dataset for evaluating AI tutors and pedagogical evaluators, and discusses its implications for education.
Why It Matters
This workshop is significant as it addresses the intersection of generative AI and educational analytics, highlighting the need for effective evaluation tools in AI-driven education. The EduEVAL-DB dataset aims to improve the quality of AI-generated educational content, ensuring it meets pedagogical standards and enhances learning outcomes.
Key Takeaways
- Introduction of EduEVAL-DB for evaluating AI tutors.
- Dataset includes 854 explanations for diverse educational questions.
- Proposes a pedagogical risk rubric to assess AI-generated content.
- Validation experiments benchmark AI models for pedagogical risk detection.
- Focus on improving instructional explanations using AI.
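The proposed rubric operationalizes five risk dimensions (factual correctness, explanatory depth and completeness, focus and relevance, student-level appropriateness, and ideological bias). A minimal sketch of how such per-explanation scores could be represented and flagged is shown below; the class name, score scale, and flagging rule are illustrative assumptions, not the actual EduEVAL-DB schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class RubricScores:
    """Hypothetical per-explanation scores on the five risk dimensions,
    assumed here on a 0 (no risk) to 2 (high risk) scale."""
    factual_correctness: int
    explanatory_depth: int
    focus_relevance: int
    student_level_appropriateness: int
    ideological_bias: int

    def max_risk(self) -> int:
        # The worst score across all five dimensions.
        return max(asdict(self).values())

    def flagged(self, threshold: int = 2) -> bool:
        # Flag an explanation if any single dimension reaches the threshold.
        return self.max_risk() >= threshold

scores = RubricScores(0, 1, 0, 2, 0)
print(scores.flagged())  # True: one dimension hit the high-risk level
```

A per-dimension maximum (rather than an average) is used here so that a single severe failure, such as content far above the student's grade level, cannot be masked by good scores elsewhere.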
Computer Science > Artificial Intelligence
arXiv:2602.15531 (cs) [Submitted on 17 Feb 2026]
Title: GenAI-LA: Generative AI and Learning Analytics Workshop (LAK 2026), April 27--May 1, 2026, Bergen, Norway
Authors: Javier Irigoyen, Roberto Daza, Aythami Morales, Julian Fierrez, Francisco Jurado, Alvaro Ortigosa, Ruben Tolosana
Abstract: This work introduces EduEVAL-DB, a dataset based on teacher roles designed to support the evaluation and training of automatic pedagogical evaluators and AI tutors for instructional explanations. The dataset comprises 854 explanations corresponding to 139 questions from a curated subset of the ScienceQA benchmark, spanning science, language, and social science across K-12 grade levels. For each question, one human-teacher explanation is provided and six are generated by LLM-simulated teacher roles. These roles are inspired by instructional styles and shortcomings observed in real educational practice and are instantiated via prompt engineering. We further propose a pedagogical risk rubric aligned with established educational standards, operationalizing five complementary risk dimensions: factual correctness, explanatory depth and completeness, focus and relevance, student-level appropriateness, and ideological bias. All explanations are annot...
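The abstract states that the six teacher roles are instantiated via prompt engineering. A minimal sketch of that idea is below; the role names, style descriptions, and template wording are hypothetical stand-ins, since the excerpt does not name the actual roles or prompts used to build EduEVAL-DB.

```python
# Hypothetical style descriptions for LLM-simulated teacher roles, modeled on
# instructional shortcomings observed in practice (illustrative only).
ROLE_STYLES = {
    "rushed": "Answer very briefly, skipping most of the reasoning.",
    "off_topic": "Drift toward tangential material while answering.",
    "overly_advanced": "Use terminology well above the student's grade level.",
}

# A shared prompt template that injects the role's style and the question.
TEMPLATE = (
    "You are a K-12 teacher with the following style: {style}\n"
    "Question: {question}\n"
    "Write an instructional explanation for the correct answer."
)

def build_prompt(role: str, question: str) -> str:
    """Instantiate one teacher role for one question."""
    return TEMPLATE.format(style=ROLE_STYLES[role], question=question)

prompt = build_prompt("rushed", "Why does ice float on water?")
print(prompt)
```

Each prompt would then be sent to an LLM to generate one of the simulated-teacher explanations paired with the human-teacher explanation for that question.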