
Generative AI Turns College Exams into a Wicked Problem, Researchers Warn


Generative AI has transformed university exams into what researchers are calling a "wicked problem" — a complex, multifaceted challenge with no simple or permanent solution. A new study published in the journal Assessment & Evaluation in Higher Education warns that AI tools like ChatGPT have disrupted traditional assessment methods, leaving professors overwhelmed and uncertain about how to design fair, meaningful, and AI-resistant exams. The research, led by Thomas Corbin, David Boud, Margaret Bearman, and Phillip Dawson of Deakin University, drew on in-depth interviews with 20 unit chairs at a large Australian university during the second half of 2024. The findings reveal widespread confusion, increased workloads, and a lack of consensus on how to respond. While some educators view AI as a necessary tool that students must learn to use responsibly, others see it as a threat to academic integrity and authentic learning.

Many professors described being caught in impossible trade-offs. One attempted to offer both AI-permitted and AI-free assignments, only to find that the dual system doubled their workload. Another worried that overly strict measures would end up testing compliance rather than creativity. Oral exams, though seen as more resistant to AI, were deemed impractical for large classes because of logistical constraints.

The authors explain that "wicked problems" — a term originally used in urban planning and climate policy — are characterized by interconnected challenges, shifting priorities, and no clear right answer. Every solution introduces new complications: banning AI might preserve academic rigor but ignore real-world skill needs, while relying on AI could streamline grading but risk undermining learning. Instead of searching for a perfect fix, the researchers urge universities to embrace flexibility.
They recommend giving educators "permission to compromise, diverge, and iterate," recognizing that no single assessment method will work across all subjects or contexts. Constant adaptation, not perfection, should be the goal.

In practice, professors are experimenting with a mix of strategies. Some use handwritten or in-person tasks to verify a student's authentic voice. Others incorporate reflective writing, live presentations, or personalized prompts that are harder to outsource. A growing number are using AI themselves — not to cheat, but to draft lesson plans, quizzes, and feedback templates — freeing up time for more meaningful student engagement.

Beyond the classroom, experts argue the crisis reflects a deeper issue: much of modern education relies on standardized, easily graded assignments that AI can now replicate. Economist Tyler Cowen suggests this exposes the limitations of current teaching models. LinkedIn co-founder Reid Hoffman agrees, predicting that future assessments will need to be harder to game — possibly involving oral defenses or even AI-powered examiners integrated into the testing process.

The study concludes that universities must stop chasing a mythical "perfect" solution to AI in exams. The real challenge isn't just preventing cheating — it's rethinking what assessment should mean in an age where knowledge is freely accessible and skills are more valuable than memorization.