Using Learning Progressions to Guide Intelligent Feedback for Science Learning
Xin Xia Nejla Yuruk Yun Wang Xiaoming Zhai
Abstract
Generative AI offers scalable support for formative feedback; however, most AI-generated feedback relies on task-specific rubrics authored by domain experts. Although effective, such rubrics are time-consuming to develop and limit scalability across diverse instructional contexts. Learning progressions (LPs) provide a theoretically grounded representation of students' developing understanding and may offer an alternative. This paper examines whether an LP-driven pipeline can produce AI-generated feedback comparable in quality to feedback guided by expert-authored, task-specific rubrics. We analyzed AI-generated feedback on written scientific explanations produced by 207 middle school students on a chemistry task. Two pipelines were compared: (a) feedback guided by a task-specific rubric authored by a human expert, and (b) feedback guided by task-specific rubrics automatically derived from the learning progression prior to scoring and feedback generation. Two human raters assessed feedback quality using a multi-dimensional rubric measuring Clarity, Accuracy, Relevance, Engagement and Motivation, and Reflectiveness (ten sub-dimensions). Inter-rater reliability was high, with percent agreement ranging from 89% to 100% and Cohen's kappa for the estimable dimensions between 0.66 and 0.88. Paired t-tests showed no statistically significant differences between the two pipelines in Clarity (t₁ = 0.00, p₁ = 1.000; t₂ = 0.84, p₂ = 0.399), Relevance (t₁ = 0.28, p₁ = 0.782; t₂ = -0.58, p₂ = 0.565), Engagement and Motivation (t₁ = 0.50, p₁ = 0.618; t₂ = -0.58, p₂ = 0.565), or Reflectiveness (t = -0.45, p = 0.656). These findings suggest that the LP-driven pipeline is a viable alternative.
One-sentence Summary
Researchers from the University of Georgia and Gazi University propose an LP-driven rubric pipeline that generates AI feedback for middle school chemistry explanations as effectively as expert-authored rubrics, enabling scalable, theory-grounded formative assessment without task-specific human rubric design.
Key Contributions
- The study addresses the scalability bottleneck in AI-generated feedback by replacing labor-intensive expert-authored rubrics with rubrics automatically derived from learning progressions, which map students’ conceptual development in science.
- It introduces an LP-driven pipeline that generates feedback for middle school chemistry explanations and compares its quality against expert-rubric-guided feedback across five dimensions using human coder evaluations of 207 student responses.
- No statistically significant differences were found between the two feedback pipelines across Clarity, Relevance, Engagement and Motivation, or Reflectiveness, supporting LP-derived rubrics as a viable, scalable alternative to expert-designed ones.
Introduction
The authors leverage learning progressions (LPs) — empirically grounded models of how students’ understanding develops — to automatically generate task-specific rubrics for AI feedback in science education. This addresses a key bottleneck in current AI feedback systems, which rely on time-intensive, expert-authored rubrics that limit scalability across diverse classroom tasks. While prior work shows AI can generate useful feedback when guided by detailed rubrics, building those rubrics for every new task is impractical. The authors demonstrate that LP-derived rubrics produce AI feedback statistically indistinguishable in quality from expert-authored ones across dimensions like clarity, relevance, and reflectiveness — suggesting LPs can serve as a reusable pedagogical backbone to automate rubric creation and scale feedback without sacrificing quality.

Dataset

- The authors use 207 anonymized middle school student responses drawn randomly from a larger pool of 1,200 responses collected via an NGSS-aligned online assessment system. No demographic data is available due to anonymization, but the sample reflects a broad U.S. geographic distribution.
- All responses stem from a single open-ended chemistry task focused on gas properties, sourced from the Next Generation Science Assessment task set. Students analyzed data on flammability, volume, and density across four gas samples and explained which gases could be the same, justifying their reasoning with evidence.
- The task is designed to assess scientific explanation skills—specifically, connecting evidence to claims using appropriate terminology—and serves as the sole context for evaluating AI-generated formative feedback.
- Feedback evaluation focuses on five dimensions: Clarity, Accuracy, Relevance, Engagement and Motivation, and Reflectiveness. The dataset is used exclusively to test how different AI feedback pipelines respond to student explanations and to compare feedback quality across these dimensions.
Method
The authors leverage a unified large language model—GPT-5.1—to generate feedback across both evaluation pipelines, ensuring methodological consistency. For each student response, the model is prompted to perform two core tasks: first, to evaluate the response against a specified rubric, and second, to produce formative feedback that directly aligns with the evaluation outcome. The feedback is intentionally crafted to be developmentally appropriate, supportive in tone, and pedagogically focused on guiding students toward improved scientific explanation skills.
To isolate the impact of rubric origin on feedback quality, both pipelines employ identical prompting strategies and output constraints. The only variable introduced is the source of the rubric—either human-authored or derived from a learning progression framework. This controlled design enables a direct comparison of how rubric provenance influences the quality and utility of the generated feedback.
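The controlled design described above can be sketched as a shared prompt template in which only the rubric text varies between pipelines. This is an illustrative sketch, not the authors' implementation: the prompt wording, rubric excerpts, and function names are assumptions.

```python
# Hypothetical sketch of the two-pipeline setup: identical prompting,
# differing only in rubric provenance. All strings below are invented.

def build_feedback_prompt(student_response: str, rubric: str) -> str:
    """One prompt template shared by both pipelines; only `rubric` varies."""
    return (
        "You are a supportive middle school science tutor.\n"
        f"Rubric:\n{rubric}\n\n"
        f"Student explanation:\n{student_response}\n\n"
        "Task 1: Evaluate the explanation against the rubric.\n"
        "Task 2: Write short, developmentally appropriate formative "
        "feedback aligned with that evaluation."
    )

EXPERT_RUBRIC = "Level 3: links claim to flammability/density evidence ..."  # human-authored
LP_RUBRIC = "Level 3: coordinates multiple gas properties as evidence ..."   # LP-derived

response = "Gas A and C might be the same because they burn the same way."
prompts = {
    "expert": build_feedback_prompt(response, EXPERT_RUBRIC),
    "lp": build_feedback_prompt(response, LP_RUBRIC),
}
# Every instruction is shared; the two prompts differ only in the rubric text.
```

Holding the template fixed in this way is what lets any downstream quality difference be attributed to rubric provenance alone.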
(Figure in the source paper: overview of the two feedback-generation pipelines.)
Experiment
- The assessment task centered on a gas-filled balloon investigation in which gas properties (flammability, volume, mass, and density) were measured under controlled conditions.
- Two AI feedback pipelines (Expert-Rubric and Learning-Progression) were compared using a within-subjects design; both produced high-quality feedback across all dimensions.
- Feedback quality was assessed via a 5-dimension rubric (Clarity, Accuracy, Relevance, Engagement, Reflectiveness); both pipelines scored near ceiling, with perfect accuracy in scientific content.
- No statistically significant differences were found between the two pipelines across any feedback dimension, indicating equivalent effectiveness.
- Reflectiveness prompting showed slightly lower and more variable scores, suggesting room for improvement in encouraging student reflection.
- Results confirm that structured, task-aligned AI feedback can reliably deliver scientifically accurate, clear, and motivating guidance at scale.
The authors use a multi-dimensional rubric to evaluate AI-generated feedback across five quality dimensions, with human coders achieving high percent agreement and moderate to strong inter-rater reliability for most dimensions. Results show that both expert-rubric and learning-progression pipelines produce consistently high-quality feedback, with no statistically significant differences between them across any evaluable sub-dimension. Feedback was uniformly accurate, clear, relevant, and engaging, though reflectiveness prompting showed greater variability in quality.
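The two reliability figures reported here, percent agreement and Cohen's kappa, can be computed as below. The ratings are invented for illustration; kappa corrects raw agreement for the agreement expected by chance from each rater's marginal label frequencies.

```python
# Percent agreement and Cohen's kappa for two raters over nominal codes.
from collections import Counter

def percent_agreement(r1, r2):
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    po = percent_agreement(r1, r2)                       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: sum over labels of the product of marginal proportions.
    pe = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

rater1 = [1, 1, 1, 0]   # hypothetical codes from rater 1
rater2 = [1, 1, 0, 0]   # hypothetical codes from rater 2
po = percent_agreement(rater1, rater2)   # 0.75
kappa = cohens_kappa(rater1, rater2)     # 0.5
```

On this scale, the reported kappas of 0.66 to 0.88 indicate substantial to near-perfect agreement beyond chance.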

The authors compared two AI feedback pipelines—one using expert-designed rubrics and the other using learning progression-derived criteria—and found no statistically significant differences in feedback quality across any evaluated dimension. Both approaches consistently produced high-quality, scientifically accurate, and pedagogically sound feedback under controlled conditions. Results suggest that structuring AI feedback with either expert or progression-based criteria can yield similarly effective outcomes for student support.

The task data come from a controlled comparison of gas properties across four samples, with flammability, density, and volume measured under identical conditions. In these data, flammability does not track density or volume: both flammable and non-flammable gases appear across the full range of measured values, so each physical property must be evaluated independently to characterize a gas accurately.
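The reasoning the task asks of students can be sketched as grouping samples by their full property profile: two samples "could be the same gas" only if every measured property matches. The sample names and values below are hypothetical, not the assessment's actual data.

```python
# Group hypothetical gas samples by (flammable?, density) profile; any group
# with more than one member contains samples that could be the same gas.
from collections import defaultdict

samples = {  # invented (flammable?, density in g/L) values
    "A": (True, 0.09),
    "B": (False, 1.98),
    "C": (True, 0.09),
    "D": (False, 1.25),
}

def possibly_same(samples):
    groups = defaultdict(list)
    for name, props in samples.items():
        groups[props].append(name)
    return [sorted(g) for g in groups.values() if len(g) > 1]

matches = possibly_same(samples)   # only A and C share a full profile
```

A single matching property (for example, flammability alone) is insufficient, which mirrors the finding that the properties vary independently.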
