At Thursday’s SSC meeting we discussed Year 11 and 13 interventions: identifying the right pupils, designing effective assessments and implementing strategic interventions. Here is a brief overview of the meeting content. At the bottom is a list of questions which should act as a checklist to help HODs ensure they are working strategically.
Accurate Assessment and Tracking
Without accurate judgments we could be directing our intervention at the wrong groups of pupils. It is essential that there is a shared understanding of what pupils are likely to achieve, and equally important that those judgments are accurate and comparable between teachers. Predictions should be based on a combination of variables, including the following:
- Summative assessments
- Low stakes tests
- Perceived effort
Each department should be operating a process of centralised tracking in which summative assessments are recorded with a topic/question or assessment objective breakdown.
Effective Assessment Design
Before we assess pupils, it is important to ask the purpose of the assessment. If we want to use the assessment to draw conclusions about a pupil’s likely achievement in several months’ time then we need to design an assessment which can tell us this.
This means we need an assessment process which is reliable and enables us to make valid inferences. Here are some key questions we should ask:
Reliability (of assessment)
- If a pupil took different versions of the same test, would they get approximately the same mark?
- If a pupil’s answer paper were submitted to three different markers, would they each return the same mark?
Validity (of inferences)
- Can we make valid inferences from the output of the assessment?
- Does it tell us what we want to know? E.g. what would that child get on a similar test in 6 months’ time?
In order to make sure our assessments are reliable, we should take the following steps:
- Make sure all teachers set the same test;
- Consider blind marking of assessments;
- Ensure that judgments are standardised;
- Compare and discuss pieces of work (between teachers) at key grade borderlines.
The most important part of ensuring valid inferences is making sure that we are measuring learning (long term) rather than short-term gains in performance. Infrequent summative assessments therefore need to sample work from across the entire course and stick as closely as possible to the format of the assessment pupils will take in the summer. Subsequent assessments must also be as close as possible in difficulty so that any comparisons between marks are meaningful. This also means replicating the conditions of the test: were pupils told in advance? Were they allowed to refer to notes? Was the test completed in exam conditions?
As important as detailed diagnostic data can be, what we ultimately need from the above processes is a list of pupils who are behind and a plan to close each gap. This is the starting point for successful interventions. So once a target group is identified, it’s a case of running through each pupil and deciding what type of intervention they will benefit from and how much intervention they need.
Interventions don’t have to involve special sessions after school. If a child’s barrier to progress is their attitude to learning, then a meeting with parents and child after school will likely have more impact than repeating the lesson at 15:30. Equally, additional support in class, differentiated resources or redirecting TA support are all valid interventions.
There will, however, be a group of pupils who require additional sessions outside of lessons. Any sessions run in addition to the school day need to be tightly focused and run for a fixed period of time (e.g. you might run a block of intervention sessions in the lead-up to mocks and then take stock at that point). The best gains come from lots of modelling and feedback in small groups (maximum five). The process should feel intensive, and it is important that all teachers involved in intervention have well-planned resources and materials.
Finally, our testing mechanisms should be giving us feedback on whether these interventions are working and we should adapt our lists accordingly.
Here is the list of questions every department should be able to answer if they have planned effective interventions:
Accurate Assessment and Tracking
- How am I tracking the progress of the whole cohort?
- How do I know my data is accurate?
- Do I have a plan for every child who is underachieving?
Effective Assessment Design
- Can I draw valid inferences from my assessments?
- Are my assessment methods reliable?
- How can I ensure I am measuring learning rather than performance?
- How will I ensure accurate moderation?
- Am I confident in the way I’m assigning grades to marks?
Strategic Interventions
- Is my intervention specific to the identified needs of each child?
- Are my intervention groups small enough (no more than 5 pupils)?
- Have I considered the level and duration of intervention each child needs?
- How have I ensured the quality of resources is consistently high across those teachers delivering intervention?
- How will I know if my intervention is working and what will I do if it is/isn’t?