Outgrowing Traditional Exams: Preparing Students for Real-World Challenges
Within higher education, rapid technological and workplace shifts—including the explosive rise of generative AI—are prompting a major rethink of how universities assess student learning. Long considered the cornerstone of academic assessment, traditional exams are now revealing shortcomings in both credibility and inclusivity, especially when delivered online. As AI capabilities advance and remote learning becomes more common, institutions are exploring alternative approaches that better capture the skills graduates need in today’s fast-changing world.
Beyond Exams: Why They Are Not the Answer
Online proctored exams, which surged during the COVID-19 pandemic, have exposed cracks in reliability and validity, while exam-based models—digital or not—generally overemphasise memorisation at the expense of critical thinking, creativity, and adaptability. Time-pressured exams in online or hybrid contexts can worsen academic misconduct, heighten student anxiety, and disadvantage those lacking reliable technology (Woldeab and Brothen, 2019; Hartnett et al., 2023). For fully online learners around the globe juggling multiple commitments, rigid testing schedules pose yet another barrier. Meanwhile, AI tools are becoming ever more accessible, making the quest for fair, inclusive, and meaningful assessment more urgent than ever and spurring the push for alternatives to traditional exams.
From Continuous to Authentic Assessment: Meaningful Assessment Pathways
Continuous assessment breaks learning into a series of smaller tasks spread across a module or programme, and educationalists have advocated it as an effective alternative for decades. Instead of one final exam or essay submission, students engage in frequent formative activities—quizzes, discussion posts, or short reflections—alongside summative assessments such as projects or presentations. This approach allows instructors to catch misconceptions early and provide regular feedback. Students likewise experience lower pressure, better work–life balance, and more opportunities to demonstrate their mastery over time (Fynn and Mashile, 2022). The key is designing enough checkpoints for meaningful feedback without overburdening staff or learners.
Authentic assessment is also an important part of our learning design toolkit. This approach requires students to tackle tasks that mirror real-world demands. Rather than testing recall of course content, authentic assessments ask learners to apply their knowledge in practical ways—developing marketing campaigns, conducting real-time data analysis, or drafting policy briefs. These methods demand higher-order thinking, collaborative problem-solving, and creativity, more accurately reflecting the workplace challenges graduates will face (Vlachopoulos and Makri, 2024). By bridging theory and practice, authentic assessment also reduces incentives for cheating, because the task itself is both engaging and personalised.
The AI Factor: Challenges and Opportunities
With generative AI tools (e.g., ChatGPT, Copilot, and Claude) now at students’ fingertips, educators must rethink assessment design to incorporate this technology constructively. Embedding AI use into assessment needs to move beyond merely testing whether students can spot ‘hallucinations’ or factual errors in AI outputs: as generative AI evolves, such exercises risk becoming outdated, fostering superficial AI literacy rather than deep disciplinary thinking. Recent research from the Higher Education Policy Institute (HEPI) and Kortext found that in 2025, 88% of students surveyed used AI tools to develop their assessments, an increase of 35 percentage points on the previous year (Freeman, 2025). Merely banning AI is rarely practical; instead, we should clarify ethical guidelines around AI usage (Perkins et al., 2024; Furze, 2024). Students can be asked to show how AI informed their thinking or assisted data analysis, while still demonstrating original, critical judgment. If they do use AI to streamline tasks, they must cite it properly and explain where human insight added value.
Building the Bigger Picture: Programme-Level Assessment
Finally, programme-level assessment ties multiple courses together in a cohesive, project-based framework. Rather than isolating learning outcomes in each individual class, this approach focuses on a student’s cumulative development, culminating in capstones or e-portfolios that showcase authentic skills. Programme-level assessment not only illustrates deeper progress and integration of knowledge but can also help reduce the overall assessment load for both students and faculty.
Conclusion
In an era where technology and professional expectations evolve at breakneck speed, a shift toward continuous, authentic, and AI-ready assessment strategies is no longer optional—it’s essential. By trading in traditional, surveillance-heavy exams for tasks that engage real-world thinking, we promote genuine mastery and integrity. This reimagined approach ultimately prepares students for success beyond graduation, equipping them with the critical faculties, ethical mindsets, and adaptability they need to thrive in the modern world.
References
Freeman, J. (2025) Student Generative AI Survey 2025. HEPI Policy Note 61 (February 2025). Available at: https://www.hepi.ac.uk/2025/02/26/hepi-kortext-ai-survey-shows-explosive-increase-in-the-use-of-generative-ai-tools-by-students/ (Accessed: 23 March 2025).
Furze, L. (2024) ‘Updating the AI Assessment Scale’ [Blog]. Leon Furze, 28 August. Available at: https://leonfurze.com/2024/08/28/updating-the-ai-assessment-scale/ (Accessed: 23 March 2025).
Fynn, T. & Mashile, J. (2022) ‘Continuous online assessment at a South African ODeL institution’, Frontiersin Education, 7. Available at: https://doi.org/10.3389/feduc.2022.791271 (Accessed: 23 March 2025).
Hartnett, M., Butler, P. and Rawlins, P. (2023) ‘Online proctored exams and digital inequalities during the pandemic’, Journal of Computer Assisted Learning, pp. 1–13. Available at: https://doi.org/10.1111/jcal.12813 (Accessed: 23 March 2025).
Perkins, M., Furze, L., Roe, J. and MacVaugh, J. (2024) ‘The Artificial Intelligence Assessment Scale (AIAS): a framework for ethical integration of generative AI in educational assessment’, Journal of University Teaching and Learning Practice, 21(6). Available at: https://doi.org/10.53761/q3azde36 (Accessed: 23 March 2025).
Vlachopoulos, P. and Makri, A. (2024) ‘A systematic literature review on authentic assessment in higher education: best practices for the development of 21st century skills, and policy considerations’, Studies in Educational Evaluation, 83, 101425. Available at: https://doi.org/10.1016/j.stueduc.2024.101425 (Accessed: 23 March 2025).
Woldeab, D. and Brothen, T. (2019) ‘21st century assessment: online proctoring, test anxiety, and student performance’, International Journal of E-Learning & Distance Education, 34(1). Available at: https://www.ijede.ca/index.php/jde/article/view/1106 (Accessed: 23 March 2025).