2.2.22

Planning for impact: maximising the effectiveness of Pupil Premium usage

This blog is adapted from an article originally published in SecEd magazine. Do check them out!

It’s common these days to see education as an engine for social mobility and a way of creating opportunity for those who have the least. Policies like the Pupil Premium are explicitly intended to support this and to address the very real challenges of education in areas of high deprivation.

Quite rightly, we therefore invest huge amounts of time, money and energy in schools in activities to make a difference, especially for the least advantaged. But this comes with a flipside. As Becky Allen points out in her blog series ‘The Pupil Premium is not working’, it can drive ‘short-term, interventionist behaviours’. Given accountability pressures, there’s an understandable tendency to just do more and more – after-school clubs, one-to-one tutoring, curriculum boosters – in order to make a difference.

But the effectiveness of this approach is being called into question. One Sutton Trust report found that even in academies set up explicitly to serve disadvantaged communities, most disadvantaged pupils were finding it increasingly difficult to improve levels of attainment. And ImpactEd’s own small-scale research suggested that only three percent of school leaders were confident in their ability to evaluate the impact of the work they were doing – what we call the ‘evaluation deficit’.

For sustained improvement, we need to get much sharper at knowing whether what we are doing is effective, and at prioritising the strategies that make the greatest impact – both closing the gap and raising the bar overall. This article provides some practical methods for starting to do just that.

Start by making good bets

For schools looking to make evidence-based change, there has been an explosion in research evidence appearing to show what works to improve outcomes for the most disadvantaged. So it makes sense to start there.

For example, at a whole-school level, NFER research found that those schools which had most successfully closed the gap prioritised seven key building blocks: attainment for all, behaviour and attendance, high-quality teaching, meeting individual needs, effectively using staff, supportive leadership, and data-driven insights to support intervention programmes.

For specific strategies, the Education Endowment Foundation’s (EEF) Teaching and Learning Toolkit has made ideas such as developing metacognition and self-regulation, providing effective feedback and mastery learning common parlance in schools – with perhaps the most useful component being the research reports that sit behind its summaries. The IEE’s Evidence 4 Impact database provides a similar wealth of resources. Researchers such as Barak Rosenshine provide clear syntheses of evidence that are immediately applicable to the classroom, usefully summarised by Tom Sherrington.

What does this look like here?

Having identified an area where you think your school might be able to improve practice, you now need to think very carefully about what this looks like here, in your school. Most projects sink or swim on implementation. Avoid initiative overload in favour of executing a few key things well.

Then you need to think through what success will look like for this initiative – defining not just what you want to do, but what you are trying to achieve with it, and how you will know whether you have achieved it. These conversations will often benefit from a focus on the outcomes you are aiming for, rather than prescription about what the work needs to look like.

Planning for impact

With outcomes identified and some kind of implementation strategy, you’re ready to think about how you’ll evaluate your change to see if it’s achieving the outcomes you are hoping for.

First, decide what type of evidence you need for the questions you are trying to answer. For some initiatives that are relatively easy to implement, data like informal feedback from teachers may be entirely sufficient evidence for what you are trying to achieve. For more involved projects that aim to make a sustained difference to pupil outcomes, you may want to look at more robust measures, potentially against a control group of pupils who aren’t taking part in the programme.

You should also be selective about your outcome measures, both intermediate and longer-term. The range of indicators you could look at is huge and could include:

  • Academic attainment. Consider carefully the validity and reliability of your data – national, moderated exam results and standardised assessments will give you different sorts of data from classroom assessments.
  • Pastoral and school engagement measures – for example, measures of behaviour, exclusions and attendance. This data will often be readily available and may be high quality.
  • Broader skills. Many initiatives will be looking to develop outcomes such as pupils’ levels of motivation, self-efficacy or metacognition. In many cases there will be pre-existing questionnaires that can be used to measure these outcomes.

Once you have decided on your measures, you will typically want both baseline and outcome data – where were young people when you started an initiative, and where did they end up? Can you compare this against other groups that weren’t taking part, or even against previous year groups, to put your observed impact in context?

Sense-check all of this against workload and your existing school processes. You don’t want to create a need to collect lots of new data, or to overhaul all your systems. Evaluation should reduce work by helping you focus, not create more.

What to do with your data?

This is the key question that you should plan for at the beginning. You should be aiming to produce some sort of summary that can be useful for non-expert stakeholders. But most crucially, you need to carve out some time to talk through the implications of your results. If your evaluation finds that what you were doing wasn’t effective, what are you going to do with that? Will you drop the initiative entirely, redesign some components of it, or do some further research? Evaluation won’t necessarily give you the answers – what it will do is give you evidence that you can reflect on and use alongside your professional judgement.

What are some of the pay-offs if this is done well? A good implementation and evaluation process can maximise the chances of teachers trying something that works, and minimise the risk that it won’t. Perhaps more importantly, though, it can help pave the way for wider cultural change. When we want to narrow the gap, or indeed improve any sort of outcome, our focus shifts from simply doing more to carefully identifying the problem we’re trying to solve, and rigorously assessing whether we are solving it. That offers the potential not just for improved outcomes, but also for a more sustainable and healthier approach to making a difference.

Takeaways
  • Start with the best bets: what does the research evidence suggest?
  • Think ‘what does this look like here?’ – plan for quality implementation
  • Plan for impact, with a well-considered and appropriate evaluation plan
  • Know what you are going to do with your data before you collect it

To get support with planning and measuring the impact of your Pupil Premium strategy, get in touch!
