19 February 2026
Making mentoring work
Building evidence for What Works
Over the past few years, we’ve seen a rapid growth in mentoring programmes for children and young people across the UK. But while mentoring feels like a good idea, the evidence around what actually works is still emerging. This four-part series shares insights from our work evaluating mentoring programmes across the country, exploring what works, how to evaluate it, and what we need to build a stronger evidence base.
Part 1: Why mentoring matters – but evidence still lags behind
Mentoring makes intuitive sense as a form of support
A trusted adult, providing guidance and support, helping navigate life’s challenges – it feels like a “no-brainer.”
Mentoring can also respond to many areas of need. These can include, but aren’t limited to: preventing or reducing serious youth violence, improving education and employment outcomes, and responding to the needs of groups who may be more vulnerable, such as children in care.
Mentoring is everywhere
We’ve seen numerous mentoring programmes across the UK in recent years.
At Cordis Bright we’ve worked on a large number of mentoring evaluations recently, including sports-based mentoring in West Yorkshire; a youth diversion programme in Avon & Somerset; culturally-responsive, trauma-informed approaches funded through MOPAC’s MyEnds programme; and various mentoring programmes funded through the Youth Endowment Fund (YEF).
But as with commissioning any support, robust evidence is needed to be sure of what works.
We have some promising evidence that mentoring is effective
International studies show mentoring can reduce offending and reoffending by up to 20%.
The YEF Toolkit states that mentoring is effective in reducing both crime and the behaviours associated with crime and violence. The research suggests that, on average, mentoring reduces violence by 21%, all offending by 14%, and reoffending by 19%.
It’s also likely to have a desirable impact on other outcomes like substance misuse, behavioural difficulties, and self-esteem.
The EEF found some evidence that mentoring had a small positive impact on academic outcomes and on non-academic outcomes such as attitudes to school, attendance and behaviour (e.g. here).
But we need more evidence
However, in the UK, high-quality causal evidence is still limited – it’s promising but variable, and we need more evidence on what actually works.
And of course, mentoring isn’t easy to evaluate: it’s relational, complex, and takes time to show results. This affects funding decisions and long-term planning.
This is where Cordis Bright can help.
We’re building the evidence base here in the UK
Alongside others like the Youth Endowment Fund, Education Endowment Foundation, and Foundations, we’re building evidence for mentoring through our growing portfolio of evaluations – from theory-driven randomised controlled trials and realist evaluation approaches to in-depth implementation and process studies and economic assessments.
In this series, we’ll explore what we’re learning, how to evaluate mentoring, and what needs to happen next.
Part 2: Learning from real-life mentoring – what’s actually happening in mentoring programmes today?
Mentoring programmes look different in every context…
In recent years, Cordis Bright has been privileged to work on several evaluations of mentoring programmes.
These include SHIFT, a music mentoring programme delivered by AudioActive in East and West Sussex; MAC Cerridwen, a mentoring programme delivered by Media Academy Cymru in South Wales; Future Men Boys Development Programme, a programme for boys in South London aimed at reducing school exclusion; and Salford STEER, a mentoring programme delivered by the Salford Foundation aimed at reducing offending.
…but some features are consistently important.
Each programme is distinct, but together, they tell us a lot about what effective mentoring can look like.
Our own experience and the existing evidence base (e.g. here) have identified common threads that work.
For example, research into mentoring’s impact on education outcomes found that programmes are associated with more successful outcomes when they have a clear structure and expectations, and provide training and support for mentors (here).
Effective mentoring needs to be at an appropriate intensity and length, and maintain fidelity to the model.
Common features of effective programmes include:
- Clarity and specificity of aims (e.g. violence reduction, school engagement, wellbeing)
- Investment in recruitment, training, and supervision of mentors
- A diverse, committed group of mentors with varied skills who can adapt to the needs of the mentee
- Mentoring relationships built on trust and respect
- Trauma-informed, culturally competent practice
- Personalised support (e.g., CBT or arts focus), tailored to individual needs
- Support from parents or carers to attend sessions
- A carefully managed end to the mentoring relationship, involving signposting to other resources and celebrating progress made
- A programme rooted in evidence, and a clear theory of how it will work
It can be challenging for programmes to do this
Mentoring programmes face common and persistent challenges. These include securing funding at an appropriate level, maintaining engagement from the people they're supporting, ensuring fidelity across delivery areas, and balancing flexibility with structure (e.g. here).
But the potential is clear, so it’s vital to keep building the evidence through evaluation.
In the next part, we’ll dive into how to evaluate mentoring and some common challenges.
Part 3: Measuring mentoring – How to evaluate what works
Evaluating mentoring is vital to ensure value for money, and it’s not rocket science
Mentoring may feel daunting to evaluate, given the inherently human nature of the mentor-mentee relationship.
But especially in a tight funding environment, evaluation is crucial. It’s needed to make sure we’re offering programmes that make as big a difference as possible to as many people as possible.
So how do we go about evaluating mentoring?
Five principles for evaluating mentoring:
1. Start with the theory.
First, we need to understand the programme’s rationale, i.e. the problem it is trying to solve and why.
Develop the Theory of Change. This should be clear about the mechanism by which mentoring leads to reduced offending or improved wellbeing. Being precise here helps shape both delivery and measurement.
At the same time, agree the purpose of the evaluation – how it will be used, and what it is for.
2. Ask the right questions.
Develop a robust evaluation framework and agree research questions.
This means measuring mechanisms of change as well as outcomes – not just whether mentoring improves outcomes, but how it changes the knowledge, attitudes, behaviours and experiences that drive them.
3. Be inclusive
Embed equality, diversity and inclusion at all stages of the evaluation.
Make sure reporting is accessible and inclusive.
4. Use the right methods
Blend quantitative and qualitative data to understand whether something works, and why and how. Together, they provide a fuller picture.
Approaches should also be trauma-informed and sensitive.
It’s also important to build in support to mobilise the evidence you’ve found during the evaluation, i.e. what happens next?
5. Collaborate!
Lastly and importantly, evaluations should take a collaborative approach. For example, across our projects, including SHIFT, MAC Cerridwen and others, we’ve worked with providers to build realistic, robust evaluation frameworks.
This collaboration is crucial: it ensures evaluations reflect the real-world constraints and strengths of delivery teams, and that findings are useful, not just academic.
In the final part, we look to the future and consider what’s needed to build a stronger, joined-up evidence base.
Part 4: Mobilising what we know to drive change
We already know what works
The sector is busy, with Violence Reduction Units (VRUs), Youth Endowment Fund (YEF) trials, grassroots and national delivery. There is a lot of evidence out there about what is needed for mentoring programmes to be successful.
For example, robust evaluations using randomised controlled trials (RCTs) are ongoing, with new evidence emerging all the time. Much of the existing evidence has already been collated in meta-analyses, systematic reviews and toolkits.
So why aren’t people using this?
The big question to grapple with is why people aren’t engaging with it.
There’s not enough focus on engaging with existing evidence and building strategies and commissioning decisions around it.
This is likely because, in today’s restrictive funding environment, people are too busy to engage with the evidence in the depth needed to make a difference.
This is where we’re calling for change.
Shifting our focus to mobilising evidence
Mobilising evidence isn’t just about producing more reports. It’s about making sure that what we already know is actively used to inform practice, policy, and funding decisions.
We know a lot about what works in mentoring. For mentoring to achieve its full potential as a tool for improving lives, we need to focus on supporting the mobilisation of evidence, to keep building a shared, collective understanding of what works.
We also need to support continuous evaluation, which is a vital tool for continuous improvement.
What does this look like in practice?
Here are some practical ways to mobilise the evidence we have:
1. Embedding evidence in commissioning decisions
2. Creating practitioner-facing tools from evaluations, e.g. turning findings into a simple, shareable implementation checklist or “top tips” for delivery teams.
3. Using findings to adapt delivery in real time
4. Bringing together commissioners, providers and evaluators, e.g. by meeting frequently to review interim findings, reflect on what’s working, and adjust approaches.
5. Integrating evaluation insights into staff training
6. Using common indicators and data systems, e.g. adopting a shared outcomes framework across programmes to allow for comparison and build a cumulative picture of what works across contexts.
7. Aligning local strategies with national evidence, e.g. local authorities developing youth violence strategies by drawing on YEF evidence and aligning priority outcomes with mechanisms tested in mentoring.
Let’s build this together.
We’d love to work with others to make that happen.
If you’d like to collaborate, share learning, or join an upcoming roundtable/event/webinar, please get in touch with Matt Irani or Eleanor Southern-Wilkins.