AI in the classroom: safeguarding education from cyber risk
By Nik Levantis, Senior Cybersecurity Consultant, Obrela
The UK education sector is entering a transformative phase with
artificial intelligence (AI) tools like ChatGPT becoming increasingly
integrated into classrooms and lecture halls. AI offers the potential for
more personalised learning, streamlined administration and greater
accessibility. However, alongside the benefits, there are a whole host
of cybersecurity, ethical and operational risks that many educational
institutions are still trying to understand, let alone mitigate.
The open-access culture that defines much of the education sector
makes it particularly vulnerable to cyber threats. Universities and
schools operate on networks designed for collaboration, not for
containment. With thousands of users accessing shared systems and
with the addition of AI tools interacting with sensitive data, the
potential for breaches has grown significantly.
Data privacy and institutional trust
One of the most pressing concerns is data privacy. Educational
institutions hold vast repositories of personal and academic
information, from student records and health data to employment
contracts and research outputs. In recent years, breaches have
exposed just how fragile these systems can be. In one UK university
incident, AI-generated queries inadvertently surfaced personally
identifiable information, including special educational needs data
and passport scans. The fallout extended beyond regulatory fines.
It disrupted operations and eroded trust among students, staff and
other stakeholders.
Elsewhere, a breach affecting 14 UK schools resulted in the leak of
sensitive student and staff data, including identity details and
personal records. These exposures not only increase the risk of fraud
and unauthorised access, but they also reduce public confidence in
the education sector’s ability to safeguard personal data.
To counter the risks, institutions need to implement layered security
strategies. Encryption, structured data classification and advanced
detection tools are essential components of digital hygiene, but
technology alone is not enough. Institutions must also invest in
governance frameworks that define accountability, ensure
compliance and encourage a culture of vigilance.
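By way of illustration, the short Python sketch below shows what field-level classification and encryption might look like in practice, assuming the open-source cryptography package is available; the record fields and classification labels are invented for the example rather than drawn from any institution's real schema.

    # Minimal sketch: classify record fields, encrypt anything sensitive at rest.
    # Field names and labels are illustrative assumptions, not a prescribed schema.
    from cryptography.fernet import Fernet

    # Classification labels drive handling: PUBLIC stays plain, RESTRICTED is encrypted.
    CLASSIFICATION = {
        "student_name": "RESTRICTED",
        "sen_status": "RESTRICTED",      # special educational needs data
        "passport_scan": "RESTRICTED",
        "course_code": "PUBLIC",
    }

    key = Fernet.generate_key()          # in production, keys belong in a managed vault
    cipher = Fernet(key)

    def protect(record: dict) -> dict:
        """Return a copy of the record with RESTRICTED fields encrypted."""
        out = {}
        for field, value in record.items():
            if CLASSIFICATION.get(field, "RESTRICTED") == "RESTRICTED":
                out[field] = cipher.encrypt(str(value).encode())
            else:
                out[field] = value
        return out

    record = {"student_name": "A. Student", "course_code": "CS101",
              "sen_status": "yes", "passport_scan": "<binary omitted>"}
    print(protect(record))

Note the deliberate fail-closed choice: any field nobody has classified is treated as sensitive until someone decides otherwise.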
Academic integrity in the age of generative AI
Beyond data protection, AI is changing the very nature of academic
assessment. The rise of generative models has led to a surge in
AI-assisted misconduct, with some universities reporting large
year-on-year increases in cases. Detection tools are struggling to
keep pace: models like GPT-4 produce text that mimics human writing so
closely that conventional plagiarism systems struggle to flag it. A
2023 study published in the International Journal for Educational
Integrity found that detection tools often produce inconsistent results
and false positives. Without clear policy guidance or robust review
mechanisms, institutions risk unfair accusations as well as undetected
misconduct.
Human judgment therefore has to work alongside technological detection
tools. Clear boundaries around acceptable AI use must also be
established and communicated to staff and students alike, with policies
that define misconduct in specific, actionable terms.
Phishing, deepfakes and the expanding attack surface
Cyber criminals are also using AI to create more sophisticated phishing
campaigns. These include emails, texts and voice recordings that
convincingly impersonate educational staff or external partners. The
University of Nevada, for example, recently reported a sharp increase
in scams involving deepfakes and other social engineering techniques. Research suggests phishing attacks rose by 58% in 2023 alone,
with educational institutions particularly exposed due to their open
networks and high turnover of users.
Responding to these threats requires more than software.
Educational institutions must build resilience through real-time
monitoring, incident response capabilities and regulatory alignment.
Tools such as Extended Detection and Response (XDR) and Security
Orchestration, Automation and Response (SOAR) can help streamline
defences, but their effectiveness also depends on how well they are
integrated into broader risk management strategies.
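To make that concrete, the sketch below shows the kind of triage logic a SOAR playbook encodes, written as plain Python rather than against any vendor's API; the alert fields, scoring thresholds and response actions are illustrative assumptions.

    # Minimal SOAR-style playbook sketch: score a reported phishing message
    # and pick an ordered response. Fields, thresholds and actions are
    # illustrative assumptions, not any platform's real interface.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        sender_domain: str        # e.g. "example-university.ac.uk"
        has_credential_form: bool # message links to a credential-harvesting page
        reported_by_users: int    # how many staff or students flagged it

    def triage(alert: Alert) -> list[str]:
        """Return ordered response actions for one phishing alert."""
        score = 0
        if alert.has_credential_form:
            score += 50
        if alert.reported_by_users >= 3:
            score += 30
        if not alert.sender_domain.endswith(".ac.uk"):
            score += 20

        actions = ["log_alert"]
        if score >= 50:
            actions += ["quarantine_message", "block_sender_domain"]
        if score >= 80:
            actions += ["force_password_reset", "notify_security_team"]
        return actions

    print(triage(Alert("obrela-support.com", True, 5)))
    # -> ['log_alert', 'quarantine_message', 'block_sender_domain',
    #     'force_password_reset', 'notify_security_team']

The value of this kind of automation is speed and consistency of response; the escalation thresholds themselves still need human tuning and review.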
Bias, ethics and the question of accountability
There are also pressing ethical concerns. AI models can only reflect
the data they are trained on, so if that data is skewed, the outputs
will be too.
too. This can reinforce harmful stereotypes, marginalise non-Western
perspectives and raise questions about intellectual property. When
content is AI-generated, its source is often obscured, making
transparency and accountability much more difficult to enforce.
To address these issues, educational institutions must establish
formal governance. Governance, Risk and Compliance (GRC) frameworks
can help manage the ethical deployment of AI.
Institutions that align their policies with GDPR and other regulatory
standards are better positioned to reduce legal exposure and maintain
trust.
European universities that have implemented AI ethics guidelines
are already seeing a reduction in privacy-related incidents, offering a
model for others to follow.
Building a secure and ethical foundation for AI in education
AI adoption in education needs to be accompanied by secure
infrastructure, staff training, ethical oversight and clear policy
frameworks. This is not simply a matter of risk avoidance. If it is done
properly, it will create a resilient foundation for an AI-powered future
that enhances learning, protects privacy and upholds academic
integrity.
As institutions navigate this technology, the challenge will be to
balance the potential for innovation with responsibility. The decisions
that they make today will shape not only how AI is used in the
classroom but also how education itself evolves in the digital age.
To download Obrela’s full white paper on ChatGPT in education, visit
ChatGPT in Education: Challenges and Cyber Risks in Open Institutions