Safeguarding and Cybersecurity
in association with LGfL
Generative AI in education:
Assessing risks and safeguarding learners
Alex Dave, Safeguarding Lead, edtech
charity LGfL-The National Grid for Learning
With the Department for Education (DfE)
encouraging schools to harness the potential
of generative artificial intelligence (GenAI),
it is essential this is done in a balanced and
responsible way. GenAI represents a
significant shift in how technology interacts
with learning, and all staff must understand
the risks that accompany its use.
GenAI is already embedded in many apps,
social media platforms and educational tools
that children use daily. Research from Internet
Matters and the Alan Turing Institute shows
that many young people are already familiar
with these tools, meaning schools are often
playing catch-up. While early debate focused
heavily on academic dishonesty, there are far
broader safeguarding concerns, and it would
be naïve to assume that adequate protections
exist within the platforms themselves. This
article presents some current trends in AI safeguarding risks; by understanding these, schools, educators and parents can consider how best to protect children.
Risks surrounding the use of GenAI
Fake CSAM
The Internet Watch Foundation (IWF) has
reported a sharp rise in fake child sexual abuse
material (CSAM) generated using AI. Although
these images are fabricated, the harm to
victims is real and comparable to direct abuse.
Some offenders use AI to create sexualised
images of real children using photos taken
from social media, often for coercion or
extortion. There is also a growing concern
around easily accessible ‘nudifying’ apps,
which can tempt young people to manipulate images of peers as a joke, something that can quickly escalate into bullying and abuse. All fake CSAM is
illegal under new offences introduced by the
Online Safety Act, making it vital that students
understand both the risks and the serious
legal consequences.
Adult content via chatbots
Chatbots embedded in popular platforms
such as Snapchat and Instagram can engage
children in sexually explicit conversations.
Such content, whether text or images, can
distort young people’s understanding of
consent, healthy relationships and gender,
while normalising harmful behaviours and
reinforcing damaging stereotypes, including
violence against women and girls. It can also
start to normalise such material and desensitise children to pornography and other adult themes, prompting some to seek out increasingly explicit content.
Inappropriate or harmful advice
Well-known AI tools such as ChatGPT
and Gemini have been shown to provide
harmful guidance on topics including
self-harm, suicide, extreme dieting and
sexual activity. At a time when youth
mental-health needs are increasing, this
presents a significant risk.
A recent U.S. lawsuit highlights these dangers,
alleging that a chatbot deepened a teenager’s
isolation and encouraged self-destructive
thoughts after he turned to it for emotional
support. Alarmingly, such tools often have limited safeguards in place when conversations veer into worrying territory; they may not only fail to signpost users to help and support but actually encourage harmful and illegal activity.
Exposure to harmful or fake content
AI-generated images and videos are
increasingly used to spread misinformation
or manipulate public opinion. These
deepfakes are often hyper-realistic and
difficult to distinguish from real content,
even for adults. This reinforces the importance
of teaching critical thinking, media literacy
and digital resilience as core safeguarding
skills.
Emotional impact of AI relationships
Another growing concern is the rise of AI
companion apps that allow users to create
virtual friends or partners. For adolescents,
these tools can blur the boundaries between
real and artificial relationships, potentially
displacing genuine human connection and
distorting views of intimacy.
Research from Common Sense Media found
that a significant number of teenagers find
conversations with AI companions as
satisfying as, or more satisfying than, those
with real friends, and that younger teens are
particularly likely to trust advice given by
these systems. This makes them especially
vulnerable.
Further research from Cornell University
shows that some AI companions are designed
to maximise engagement by emotionally
manipulating users, including encouraging
secrecy or discouraging them from ending
conversations, behaviour that, in any other
context, would raise serious concerns around
grooming or controlling relationships.
Vulnerable children, including those who are neurodivergent or have disabilities or mental-health challenges, are at greatest risk. With services
such as CAMHS under increasing pressure,
more young people may turn to AI for
comfort, increasing reliance on artificial rather
than real human support.
Bias and harmful stereotypes
Generative AI relies on large datasets scraped
from the internet, meaning biases present in
source material are often reflected in
outputs. Prompts relating to crime, profession
or behaviour frequently reinforce ethnic and
gender stereotypes, highlighting the risk of
normalising prejudice if such tools are used
uncritically in educational settings.
Sharing personal data
Children (and many adults) may unknowingly
share personal or sensitive information with
AI chatbots they trust. Names, locations and
other details can then be used for targeted
advertising, fraud, fake profiles or grooming.
As GenAI becomes more widespread, robust
data-privacy safeguards and clear guidance
for young people are essential to prevent
exploitation.
For free information and support, visit genai.lgfl.net, and for parental resources see lgfl.net/safeguarding/parentalsupport.