
Using Learning Progressions to Guide AI Feedback for Science Learning

Xin Xia Nejla Yuruk Yun Wang Xiaoming Zhai

Abstract

Generative artificial intelligence (AI) offers scalable support for formative feedback, yet AI-generated feedback still relies largely on task-specific rubrics authored by domain experts. While effective, authoring such rubrics is time-intensive and limits scalability across diverse instructional contexts. Learning progressions (LPs), theory-grounded representations of how students' understanding develops, may offer an alternative. This study examines whether an LP-driven rubric-generation pipeline can produce AI feedback of comparable quality to feedback guided by expert-authored, task-specific rubrics.

We analyzed AI-generated feedback on written scientific explanations from 207 middle school students on a chemistry task, comparing two pipelines: (a) feedback guided by a task-specific rubric developed by a human expert, and (b) feedback guided by a task rubric automatically derived from a learning progression prior to scoring and feedback generation. Two independent human coders rated feedback quality using a multi-dimensional rubric covering Clarity, Accuracy, Relevance, Engagement and Motivation, and Reflectiveness, with ten sub-dimensions in total. Inter-rater reliability was high: percent agreement ranged from 89% to 100%, and Cohen's kappa values for the estimable dimensions ranged from κ = 0.66 to 0.88.

Paired t-tests found no statistically significant differences between the two pipelines for Clarity (t1 = 0.00, p1 = 1.000; t2 = 0.84, p2 = 0.399), Relevance (t1 = 0.28, p1 = 0.782; t2 = -0.58, p2 = 0.565), Engagement and Motivation (t1 = 0.50, p1 = 0.618; t2 = -0.58, p2 = 0.565), or Reflectiveness (t = -0.45, p = 0.656). These findings suggest that the LP-driven rubric-generation pipeline is a viable alternative.

One-sentence Summary

Researchers from the University of Georgia and Gazi University propose an LP-driven rubric pipeline that generates AI feedback for middle school chemistry explanations as effectively as expert-authored rubrics, enabling scalable, theory-grounded formative assessment without task-specific human rubric design.

Key Contributions

  • The study addresses the scalability bottleneck in AI-generated feedback by replacing labor-intensive expert-authored rubrics with rubrics automatically derived from learning progressions, which map students’ conceptual development in science.
  • It introduces an LP-driven pipeline that generates feedback for middle school chemistry explanations and compares its quality against expert-rubric-guided feedback across five dimensions using human coder evaluations of 207 student responses.
  • No statistically significant differences were found between the two feedback pipelines across Clarity, Relevance, Engagement and Motivation, or Reflectiveness, supporting LP-derived rubrics as a viable, scalable alternative to expert-designed ones.

Introduction

The authors leverage learning progressions (LPs) — empirically grounded models of how students’ understanding develops — to automatically generate task-specific rubrics for AI feedback in science education. This addresses a key bottleneck in current AI feedback systems, which rely on time-intensive, expert-authored rubrics that limit scalability across diverse classroom tasks. While prior work shows AI can generate useful feedback when guided by detailed rubrics, building those rubrics for every new task is impractical. The authors demonstrate that LP-derived rubrics produce AI feedback statistically indistinguishable in quality from expert-authored ones across dimensions like clarity, relevance, and reflectiveness — suggesting LPs can serve as a reusable pedagogical backbone to automate rubric creation and scale feedback without sacrificing quality.
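The paper's prompts and code are not shown in this summary, so the snippet below is only a minimal sketch of how an LP could be specialized into a task rubric with an LLM. The LP levels, prompt wording, function name, and model string are all assumptions for illustration.

```python
# Minimal sketch of LP-driven rubric generation, assuming an OpenAI-style
# chat API. The LP levels, prompt wording, and model string below are
# illustrative placeholders, not the authors' actual materials.
from openai import OpenAI

client = OpenAI()

LP_LEVELS = {  # invented levels for illustration; not the study's LP
    1: "Restates observations without citing evidence.",
    2: "Cites isolated evidence but does not link it to a claim.",
    3: "Links multiple pieces of evidence to a claim using scientific terms.",
}

def derive_task_rubric(task_prompt: str) -> str:
    """Ask the model to specialize generic LP levels into task-specific
    scoring criteria, before any student work is graded."""
    levels = "\n".join(f"Level {k}: {v}" for k, v in LP_LEVELS.items())
    response = client.chat.completions.create(
        model="gpt-5.1",  # model named in this summary
        messages=[{"role": "user", "content": (
            f"Learning progression levels:\n{levels}\n\n"
            f"Assessment task:\n{task_prompt}\n\n"
            "Rewrite each level as a scoring criterion specific to this task."
        )}],
    )
    return response.choices[0].message.content
```

Because the progression is reusable, only the task prompt changes from assessment to assessment, which is the scalability argument the authors make.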

Dataset

  • The authors use 207 anonymized middle school student responses drawn randomly from a larger pool of 1,200 responses collected via an NGSS-aligned online assessment system. No demographic data is available due to anonymization, but the sample reflects a broad U.S. geographic distribution.

  • All responses stem from a single open-ended chemistry task focused on gas properties, sourced from the Next Generation Science Assessment task set. Students analyzed data on flammability, volume, and density across four gas samples and explained which gases could be the same, justifying their reasoning with evidence.

  • The task is designed to assess scientific explanation skills—specifically, connecting evidence to claims using appropriate terminology—and serves as the sole context for evaluating AI-generated formative feedback.

  • Feedback evaluation focuses on five dimensions: Clarity, Accuracy, Relevance, Engagement and Motivation, and Reflectiveness. The dataset is used exclusively to test how different AI feedback pipelines respond to student explanations and to compare feedback quality across these dimensions.
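As a concrete sketch of this setup, the snippet below samples a 207-response subset from a larger pool and attaches the five evaluation dimensions. The file name, field names, and seed are assumptions, not artifacts released with the paper.

```python
# Illustrative sketch of the dataset setup; file name, record fields, and
# the sampling seed are assumptions for this example.
import json
import random

DIMENSIONS = ["Clarity", "Accuracy", "Relevance",
              "Engagement and Motivation", "Reflectiveness"]

with open("responses.json") as f:   # hypothetical pool of 1,200 responses
    pool = json.load(f)             # e.g. [{"id": ..., "text": ...}, ...]

random.seed(42)                     # arbitrary seed for reproducibility
sample = random.sample(pool, 207)   # the study's analysis sample size

# Each sampled response later receives two feedback texts (one per
# pipeline), each rated on all five dimensions by two human coders.
records = [{"id": r["id"], "text": r["text"],
            "ratings": {dim: None for dim in DIMENSIONS}} for r in sample]
```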

Method

The authors leverage a unified large language model—GPT-5.1—to generate feedback across both evaluation pipelines, ensuring methodological consistency. For each student response, the model is prompted to perform two core tasks: first, to evaluate the response against a specified rubric, and second, to produce formative feedback that directly aligns with the evaluation outcome. The feedback is intentionally crafted to be developmentally appropriate, supportive in tone, and pedagogically focused on guiding students toward improved scientific explanation skills.

To isolate the impact of rubric origin on feedback quality, both pipelines employ identical prompting strategies and output constraints. The only variable introduced is the source of the rubric—either human-authored or derived from a learning progression framework. This controlled design enables a direct comparison of how rubric provenance influences the quality and utility of the generated feedback.
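To make the controlled design concrete, here is a minimal sketch under the same assumptions as the earlier snippet: one prompt template, one model, and the rubric text as the only varying input. The template wording and variable names are hypothetical.

```python
# Sketch of the controlled comparison: prompt template and output constraints
# are held fixed; only the rubric text differs between conditions.
from openai import OpenAI

client = OpenAI()

FEEDBACK_TEMPLATE = (  # hypothetical wording; the actual prompt is not shown
    "Rubric:\n{rubric}\n\n"
    "Student response:\n{response}\n\n"
    "1. Evaluate the response against each rubric criterion.\n"
    "2. Write short, supportive, developmentally appropriate feedback that "
    "follows directly from your evaluation."
)

def generate_feedback(rubric: str, student_response: str) -> str:
    result = client.chat.completions.create(
        model="gpt-5.1",  # single model shared by both pipelines
        messages=[{"role": "user", "content": FEEDBACK_TEMPLATE.format(
            rubric=rubric, response=student_response)}],
    )
    return result.choices[0].message.content

expert_rubric = "..."  # authored by a domain expert for this task
lp_rubric = "..."      # derived from the LP, e.g. via derive_task_rubric()
student_text = "..."   # one of the 207 explanations

# Identical call in both conditions; rubric provenance is the only variable.
feedback_expert = generate_feedback(expert_rubric, student_text)
feedback_lp = generate_feedback(lp_rubric, student_text)
```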


Experiment

  • The assessment scenario centered on a gas-filled balloon experiment in which gas properties were measured under controlled conditions, focusing on flammability, volume, mass, and density.
  • Two AI feedback pipelines (Expert-Rubric and Learning-Progression) were compared using a within-subjects design; both produced high-quality feedback across all dimensions.
  • Feedback quality was assessed via a 5-dimension rubric (Clarity, Accuracy, Relevance, Engagement, Reflectiveness); both pipelines scored near ceiling, with perfect accuracy in scientific content.
  • No statistically significant differences were found between the two pipelines across any feedback dimension, indicating equivalent effectiveness.
  • Reflectiveness prompting showed slightly lower and more variable scores, suggesting room for improvement in encouraging student reflection.
  • Results confirm that structured, task-aligned AI feedback can reliably deliver scientifically accurate, clear, and motivating guidance at scale.

The authors use a multi-dimensional rubric to evaluate AI-generated feedback across five quality dimensions, with human coders achieving high percent agreement and moderate to strong inter-rater reliability for most dimensions. Both the expert-rubric and learning-progression pipelines produced consistently high-quality feedback, with no statistically significant differences between them on any evaluable sub-dimension: feedback was uniformly accurate, clear, relevant, and engaging, though reflectiveness prompting showed greater variability in quality. These results suggest that structuring AI feedback with either expert-authored or progression-derived criteria yields similarly effective student support.
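The reported statistics correspond to standard analyses, sketched below with placeholder data. The use of scipy and scikit-learn is an assumption about tooling, not the authors' scripts.

```python
# Sketch of the reported analyses with standard tooling; the ratings and
# scores below are placeholders, not the study's data.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Inter-rater reliability on one sub-dimension: percent agreement and kappa.
coder1 = np.array([1, 1, 0, 1, 1, 0, 1, 1])  # placeholder binary ratings
coder2 = np.array([1, 1, 0, 1, 0, 0, 1, 1])
percent_agreement = np.mean(coder1 == coder2)
kappa = cohen_kappa_score(coder1, coder2)

# Pipeline comparison: paired t-test, since both pipelines produced feedback
# for the same 207 student responses.
rng = np.random.default_rng(0)
expert_scores = rng.normal(4.5, 0.3, 207)            # placeholder scores
lp_scores = expert_scores + rng.normal(0, 0.2, 207)  # placeholder scores
t_stat, p_value = stats.ttest_rel(expert_scores, lp_scores)

print(f"agreement = {percent_agreement:.0%}, kappa = {kappa:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```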

The assessment task presents data from a controlled comparison of gas properties across four samples, measuring flammability, density, and volume under identical conditions. The data show that flammability does not correlate with density or volume: both flammable and non-flammable gases appear across the full range of measured values, so these properties must be evaluated independently to characterize each gas accurately.
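The reasoning students are asked to perform, checking each property independently before deciding which samples could match, can be sketched as follows. The four samples and their values are invented for illustration.

```python
# Invented data for four gas samples; values are illustrative only.
from itertools import combinations

gases = {
    "Gas 1": {"flammable": True,  "density": 0.09, "volume": 50.0},
    "Gas 2": {"flammable": False, "density": 1.98, "volume": 50.0},
    "Gas 3": {"flammable": True,  "density": 0.09, "volume": 75.0},
    "Gas 4": {"flammable": False, "density": 0.09, "volume": 75.0},
}

def could_be_same(a: dict, b: dict, tol: float = 0.01) -> bool:
    """Two samples can only be the same gas if every intensive property
    matches; volume depends on the container, so it is not decisive."""
    return (a["flammable"] == b["flammable"]
            and abs(a["density"] - b["density"]) <= tol)

for (n1, p1), (n2, p2) in combinations(gases.items(), 2):
    if could_be_same(p1, p2):
        print(f"{n1} and {n2} could be the same gas")
# Gas 1 and Gas 4 share a density but differ in flammability, illustrating
# why each property must be checked independently.
```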

