Why did achievement in the PSC improve by 23% over four years? The NFER research found several possible reasons. Firstly, after the 2012 PSC more students were 'disapplied' and did not undergo the check, so we cannot really compare achievement in 2016 with that of 2012 because the sample groups were different. If we instead compare achievement in 2016 with that of 2013, we find a 12-point difference, which is not quite as impressive as the 23-point difference previously mentioned. So how do we account for a 12% improvement?
The NFER research suggests some plausible answers. Schools must report to parents on whether their child has passed or failed the Check, which implicitly gives the PSC the status of an examination. In addition, Ofsted, the British government's school inspection body, reviews the results when inspecting individual schools. Both these factors make the Check a 'high-stakes' test for which teachers and schools are accountable. The pressure on them to do well is reflected in the NFER findings that:
- more lesson time is spent on reading nonsense words;
- more tests are conducted focusing on phonetic spellings rather than high frequency words;
- revision time is spent on preparation for the 'Check';
- increased time is spent on teaching phonics.
In summary, then, schools now devote more curriculum time to coaching children to pass the check. Valuable time that could be devoted to a comprehensive approach to the teaching of reading, including phonics, is wasted. So a significant proportion, if not all, of the 12% improvement in achievement on the Phonics Screening Check can be accounted for by the additional coaching that schools do before the 'test'.
Finally, the claim that the PSC has had a positive impact on students' reading attainment is contradicted by the NFER research, which concluded that there was no evidence that any improvements in literacy performance or progress could be clearly attributed to the PSC. The only national benchmark available is the Key Stage One Standard Assessment Tests (SATs) (similar to NAPLAN), which students take a year after the PSC. A 1% increase in SATs results is not statistically significant and is hardly a figure on which to be triumphant.
It is clear from the NFER research that the PSC is not suitable for all children. So why advocate it as a universal test? Contrary to the claims made by Birmingham, Buckingham and Gibb, there is a lack of hard evidence to suggest the PSC is a good predictor of a child's later reading ability. How can it be? It is based on both a flawed view of language and a narrow view of reading.
Whilst there is a need to continually review the teaching of reading, we can be sure that a 'one size fits all' approach is not the way forward, and England is not a good model to emulate.