The Educator Magazine U.K. May-August 2026 issue. - Magazine - Page 54
The end of ‘AI-written or not’:
What comes next for assessment integrity
As coursework deadlines approach in late
April and early May, followed by the UK
summer exam season, schools are
reassessing the impact of AI on assessment.
This reassessment shifts the focus from authorship to
understanding, enabling educators to
assess how students approached tasks,
how their thinking evolved and whether
they engaged with the subject matter.
This comes amid Ofqual’s recent
consultation on onscreen assessments
in England. Although digital GCSE and
A-level assessments are currently limited,
regulatory approval for broader implementation is anticipated by 2030. This
transition, while governed by rigorous
phased safeguards, signifies a strategic
pivot toward modernised assessment
environments.
Transparency in the writing process gives
students clearer guardrails for responsible
AI use. When expectations are tied to the
visibility of their creative process, students
better understand how AI can support
learning without replacing independent
thinking.
Recent research shows that eight in
ten young people use AI tools in
assignments. Given that coursework is
typically completed outside controlled
conditions, it is one of the first areas where
schools are seeing AI’s impact on
assessment.
While AI-generated writing is less of a
concern in exam halls, attention is turning
to AI student agents in online
exam settings. These agents can operate
within digital environments to complete
tasks on a student’s behalf, raising
challenges for exam security and
supervision.
At the same time, schools increasingly
recognise that banning AI is neither
realistic nor beneficial. Students will
continue to engage with AI in education
and employment, making responsible use
and AI literacy essential.
Schools must now strengthen their
AI-enabled education strategies and
establish clear parameters to protect the
learning and assessment process.
This raises a critical question: if ‘AI-written
or not’ is no longer a reliable benchmark,
what should assessment integrity look like
instead, particularly in online and digital
exam environments?
From simple detection to contextual
assessment integrity
As educators navigate AI in assessment,
questions arise about the value of
traditional detection methods.
Embedding AI into the drafting and
preparation process complicates the
distinction between acceptable support,
such as clarifying concepts, structuring
plans or practising exam questions, and
academic misconduct.
In digital exams, similar principles are
emerging, with systems able to surface
behavioural and interaction data to
support contextual interpretation rather
than relying solely on outputs.
Traditional detection approaches frame
integrity as a final judgement rather than
an ongoing process. False positives
can undermine trust, while unclear
thresholds can leave educators unsure
how to respond and students uncertain
about boundaries.
Exam season is already high-pressure,
and uncertainty around AI use adds
further strain. Building trust requires
moving away from surveillance-led
approaches towards greater transparency
and a shared understanding of acceptable
use. Clearly defined and openly discussed
expectations better equip students to use
AI responsibly, reducing ambiguity at the
point of assessment.
A more effective approach focuses
on context rather than conclusions.
Instead of asking whether AI was used,
contextual integrity examines how work
was produced, why decisions were made,
and the extent of student engagement.
The emphasis shifts from policing
outcomes to understanding learning.
In online exam environments, this is
critical, as detection alone cannot reliably
account for how students interact with
digital tools in live or remotely supervised
assessments.
Making the learning process visible,
not just the final submission
AI-enabled assessment is shifting focus
from outputs to the learning process.
Writing environments with drafting
histories and revision patterns can reveal
a student’s evolving thought process.
Safeguarding exam integrity from
agentic AI
Much of the early debate around AI in
assessments focused on the influence
of generative AI tools on written
assignments outside the exam hall.
Today, however, the greater risk lies in
digitally delivered exams, where agentic
AI systems can complete tasks and
submit answers with minimal student
involvement. These tools often
operate outside traditional browser-based
protections, making existing safeguards
inadequate.
Schools must therefore reconsider
how exams are delivered and secured.
Protecting exam integrity requires
securing device environments, encrypting
exam content end-to-end and, where
appropriate, moving summative
assessments offline. Standardising digital
audit trails minimises AI-related risks while
maximising the advantages of digital
exams.
From detection to judgement
The challenge for schools has shifted
from resisting AI to developing evaluative
models that reflect how students actually
learn. Because “AI-written or not” is no
longer a reliable benchmark, schools must
move beyond simple detection toward
approaches that prioritise context and
visible evidence of learning.
By anchoring integrity in the traceable
evolution of a student’s work, educators
can safely embrace AI as a modern study
tool while fortifying the security of the
final qualification. This transition ensures
that digital environments don’t just host
exams, but actively validate the student’s
genuine intellectual journey.