AI presents many challenges and opportunities for schools. Governors and trustees should ensure, and monitor, that it is being used in ways that enhance school life and opportunities for pupils, as well as helping to reduce the workload of school leaders and staff. Rebecca Jones, from our school's HR team, discusses just such an opportunity and considers the associated risks.
AI is transforming recruitment across sectors, and education is no exception. From automated screening to predictive analytics and sample interview questions, AI promises faster, smarter hiring decisions. But in schools - where recruitment isn't just about qualifications but also safeguarding and cultural fit - its use raises serious questions. When you strip away the buzzwords and automation, one question remains: can a synthetic decision-maker keep children safe?
As someone working in education recruitment, I've found myself increasingly curious, and concerned, about the growing role of AI in hiring. The promises are bold: faster shortlisting, reduced bias, streamlined processes. But beneath the surface, I've started digging into the pitfalls, especially around safer recruitment, which is far more than a compliance exercise - it's a safeguarding imperative. The more I explore, the clearer it becomes: while AI might offer efficiency, it struggles with the very things that matter most in schools - human nuance, ethical judgment, and the instinct to protect.
Let’s start with the facts:
- AI use in recruitment has tripled in the UK in the past year.
- 30% of UK employers now use AI to recruit; 43% of large companies use AI to interview candidates.
- 70% of recruiters say AI improves hiring decisions, and it cuts hiring costs by up to 71%.
There’s no denying that recruitment can be time-consuming and expensive. Some roles attract large volumes of applications, and shortlisting can be a logistical challenge. AI offers solutions:
- Speed and efficiency: Algorithms can scan hundreds of applications in minutes, flagging those that meet key criteria.
- Consistency: AI applies the same rules to every candidate, reducing variability and helping ensure fairness.
- Data-driven decisions: Predictive analytics can assess which candidates are most likely to succeed based on historical data.
- Bias reduction: AI can be programmed to ignore demographic data, potentially reducing unconscious bias.
For governors, trustees and school leaders under pressure to make good hiring decisions quickly, these benefits are appealing. AI is alluring, but here’s the problem: schools aren’t hiring for productivity - they’re hiring for trust. And trust isn’t something you can measure with an algorithm.
AI is no silver bullet, and in education its limitations aren't just inconvenient - they're dangerous. Recruitment in this sector is a safeguarding process, not a productivity exercise. Every appointment carries risk, and safer recruitment isn't a checklist - it's a mindset rooted in human judgment.
Yes, AI can flag gaps in employment and track DBS checks. But it cannot detect charm masking manipulation, interpret vague or evasive references, or ask the uncomfortable follow-up question. It can’t sense when something feels off - and in safeguarding, that instinct can be the difference between protection and catastrophe.
Worse still, AI is only as unbiased as the data it's trained on. If past hiring decisions reflect systemic inequalities, AI doesn't correct them - it amplifies them. And when we start relying on algorithms to make decisions that demand human scrutiny, we risk sidelining professional judgment and missing the very red flags we're supposed to catch.
In a sector where the stakes are children's safety, outsourcing vigilance to an algorithm isn't just flawed - it's reckless.
In serious case reviews where children have come to harm in school settings, time and time again it is the same failures that come to light: incomplete checks and blind trust in processes. Between 2019 and 2021, 59 schools were judged by Ofsted to have "not effective" safeguarding. The most common failures? Poor record keeping, weak leadership, and failure to follow up concerns. In one study of 41 professionals who sexually offended against children, 92.5% were aware of their interest before the age of 21, and 15% chose their career specifically to abuse.
The sad truth is that some people choose education roles specifically to access children. And we think AI can screen them out?
AI has a place in education recruitment, but only if we’re brutally honest about its limits.
It can help with admin. Elements of AI have been incorporated into the Teach in Herts website, for example, to flag gaps in employment. But it cannot replace the human responsibility to protect children. It is crucial that school leaders retain full control over their recruitment process.
Let’s not sleepwalk into risk. AI might be the future of recruitment, but in education, we cannot afford to follow blindly. Safeguarding isn’t optional. It’s not a feature. It’s the foundation.
If you're considering AI in your recruitment process, ask yourself: Is it helping you make safer decisions, or just faster ones? Are you confident your recruitment process puts safeguarding first? Or are you trusting a system that doesn’t know what danger looks like?
Let’s keep the conversation going. Let’s challenge the hype. And most importantly, let’s keep children safe.
Further reading
Recruitment service | HFL Education
Safer Recruitment Training
AI for School Leaders and Business Managers
AI in Recruitment statistics UK 2025 | Latest reports & data
Safeguarding: What Can we Learn from Schools where it was Judged 'Not Effective' | Judicium Education
Safer recruitment and the importance of getting it right