The argument that phonics is the best way to teach early reading, and that Australia must therefore follow England's path by implementing a Phonics Screening Check (PSC) for six-year-olds, is both powerful and fallacious.
It is powerful because the Minister for Education, Simon Birmingham, supported by the Centre for Independent Studies (CIS), strongly advocates its implementation. The determination behind this advocacy is signalled by the fact that the UK Schools Minister, Nick Gibb, visited Australia in April of this year to speak on the subject. He was introduced by Jennifer Buckingham of the CIS. The triumvirate of the two Ministers and the researcher from the think-tank makes an impressive alliance.
Unfortunately, the argument offered by this triumvirate in support of a Phonics Screening Check for Australian school children obfuscates rather than informs debate. Jennifer Buckingham's recent polemic in the West Australian (Opinion, 8 August 2017) is indicative of the alt-truth behind the triumvirate's rhetoric.
Although the type of phonics being advocated is not specified, we can deduce that, as England is the aspiring model, synthetic phonics is being recommended. In England, schools are legally bound to teach reading exclusively through synthetic phonics. This is in spite of decades of research overwhelmingly supporting the finding that a balanced approach to reading, one that includes phonics, is required if students are to both decode fluently and comprehend effectively. The English Government has ignored this research, and we might ask why anyone would want Australia to repeat that mistake.
This is like telling children and their teachers they will be going on a voyage across the ocean on a raft when they could go on an ocean liner. Why would anyone want to make the teaching of reading that difficult?
Buckingham repeats Gibb's message that '...there is strong evidence that the Phonics Screening Check had a positive impact on reading in England.' However, the ability to decode individual words using synthetic phonics is not synonymous with reading, and the claim of a causal relationship between improved decoding and raised standards of reading is not borne out by the evidence.
It is true that in the first year of the PSC (2012) only 58% of students achieved the required 32 out of 40 correct answers. In the second year it was 69%, and last year the success rate had risen to 81%. At first glance this seems impressive, but reality can be obscured by statistics. For example, students who 'failed' the test in 2012 had to re-take it in 2013 and would have been included in the figures for the higher pass rate.
We need to explore the reality further by knowing something about the Check itself. It consists of 40 individual words, half of which are nonsense words. Students have to identify the letter–sound correspondences in each word and then blend the sounds to read the word. We might ask what value there is in testing the reading of nonsense words. Research suggests the children themselves asked this question too: a significant number of them would not read the pseudo-words, and because the first 12 words were nonsense words, many teachers would have halted the test for these children.
Following the 2012 Phonics Screening Check, the United Kingdom Literacy Association (UKLA) surveyed almost 500 schools. Teachers reported a tendency for better readers to 'fail' the Check. These readers were able to make the letter–sound connections but read a word that was meaningful to them. For example, 'strom' might be read by the child as 'storm'.
Between 2012 and 2015, the National Foundation for Educational Research (NFER), commissioned by Nick Gibb's own department, investigated the efficacy of the Phonics Screening Check. The findings of its report, published in 2015, have been conveniently brushed under the carpet, so they are worth revisiting here.
Of 573 literacy coordinators interviewed, more than 70% said the Check failed to provide them with valuable information about students' reading ability beyond what they already knew. This finding was corroborated in separate research directed by Maggie Snowling of Oxford University.
They also reported that the Check was not suitable for students with learning difficulties. Some teachers made the same comment in relation to students for whom English is an additional language or dialect, and 62% said the PSC is not applicable to students who are already fluent readers, presumably for the reason stated above. In addition, younger children were found to be disadvantaged: amongst those students not achieving the required score, approximately two-thirds were the youngest children in the class. So, what real purpose does the PSC serve?
Why was it that achievement in the PSC improved by 23 percentage points over four years? The NFER research found several possible reasons. Firstly, after the 2012 PSC more students were 'disapplied' and did not undergo the Check. So we cannot really compare achievement in 2016 with that of 2012, because the sample groups were different. If, instead, we compare achievement in 2016 with that of 2013, we find a 12-point difference (81% minus 69%), which is not quite as impressive as the 23-point difference previously mentioned. So, how do we account for a 12-point improvement?
The NFER research suggests some plausible answers. Schools must report to parents on whether their child has passed or failed the Check, which implicitly gives the PSC the status of an examination. In addition, Ofsted, the British government's school inspection body, reviews the results when inspecting individual schools. Both these factors make the Check a 'high-stakes' test for which teachers and schools are accountable. The imperative on them to do well is implicated in the NFER findings that:
- more lesson time is spent on reading nonsense words;
- more tests are conducted focusing on phonetic spellings rather than high frequency words;
- revision time is spent on preparation for the 'Check';
- increased time is spent on teaching phonics.
In summary, then, schools now devote more curriculum time to coaching children to pass the check. Valuable time that could be devoted to a comprehensive approach to the teaching of reading, including phonics, is wasted. So a significant proportion, if not all, of the 12-point improvement in achievement on the Phonics Screening Check can be accounted for by the additional coaching that schools do prior to the 'test'.
Finally, the claim that the PSC had a positive impact on students' reading attainment is refuted by the NFER research, which concluded that there was no evidence that any improvements in literacy performance or progress could be clearly attributed to the PSC. The only national benchmark available is the Key Stage One Standard Assessment Tests (SATs) (similar to NAPLAN), which students take a year after the PSC. A 1% increase in SATs results is statistically insignificant and hardly a figure on which to be triumphant.
It is clear from the NFER research that the PSC is not suitable for all children. So why advocate it as a universal test? Contrary to the claims made by Birmingham, Buckingham and Gibb, there is a lack of hard evidence to suggest the PSC is a good predictor of a child's later reading ability. How can it be? It is based on both a flawed view of language and a narrow view of reading.
Whilst there is a need to continually review the teaching of reading, we can be sure that a 'one size fits all' approach is not the way forward, and England is not a good model to emulate.