Review of Systematic Phonics

Bowers (in press). Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction. Educational Psychology Review.

It has taken about 2 years, but my review of systematic phonics has been accepted in the prestigious journal Educational Psychology Review. I show that the “science of reading” does not support the widespread claim that systematic phonics is more effective than alternative forms of instruction. If you are interested in the topic, I hope you have a look at the paper itself: Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction, but for summaries you can either
(1) have a look at the slides of a talk that outline the same points, or
(2) read my earlier blogpost: https://jeffbowers.blogs.bristol.ac.uk/blog/phonics/

To avoid any confusion, my criticism of systematic phonics does not provide any support for whole language or balanced literacy. Rather, I am claiming that the decades-long “reading wars” are best characterized as a draw. The implication is that researchers should consider alternative approaches to reading instruction. In my view, one promising alternative is structured word inquiry (Bowers & Bowers, 2017, 2018), but the main point is that more research is needed before making evidence-based claims regarding the efficacy of phonics, balanced literacy, structured word inquiry, or any other method.

Unfortunately, research into reading instruction has been one of the most contentious areas in all of psychology and education. Proponents of phonics who simply ignore this critique are engaging in advocacy, not science. Whether you agree or disagree, it would be great to get feedback here.

25 thoughts on “Review of Systematic Phonics”

  1. I had accepted your challenge on Twitter and you have not come back with a response.
    I don’t care about SSP as they are only trying to sell their products.
    I disagree with your “the main point is that more research is needed before making evidence-based claims regarding the efficacy of phonics…”
    You and Jennifer Buckingham have given phonics a bad name with neither one of you understanding that it is not phonics that is the problem but the way the pronunciation of phonemes of consonants is taught in schools.
    Phonics is good only when sound-symbol skills and blending are taught correctly.
    When the sound-symbol correspondences are taught incorrectly, how do you expect phonics to work the way it should?

    1. Hi Luqman, if you can provide references to research papers in which phonics was taught in the way you prefer and which report significant benefits compared to the alternative approaches common in schools, then I would be very happy to read the studies.

  2. Response Part 5 to Buckingham (final one):

    My previous responses (1 through 4) identified multiple errors in Buckingham’s characterizations of the meta-analyses. This final response is concerned with the reading results in England post 2007, when systematic phonics was mandated in all state schools. I claimed that there is little or no evidence that reading outcomes have improved, and again, Buckingham disagrees. I review her claims, and then finish with a more general point regarding how politics is compromising the science of reading instruction.

    Buckingham first considers a paper by Machin, McNally and Viarengo (2018) that assessed the impact of the early roll out of synthetic systematic phonics (SSP) on standard attainment test (SAT) reading results in England. They tested two cohorts of children (what they called the ERDp and CLLD cohorts) and failed to obtain overall long-term effects of SSP in either cohort (consistent with the meta-analysis of Suggate (2016), which failed to obtain long-term effects of phonics). However, when they broke down both samples into multiple subgroups, they found that the non-native speakers (.068) and economically disadvantaged children (.062) in the CLLD cohort did show significant gains on the key stage 2 test (age 11), whereas these same subgroups in the ERDp cohort did not show significant benefits. Indeed, for the ERDp sample, there was a tendency for more economically advantaged native English children to read more poorly in the key stage 2 test (−0.061), p < 0.1. Nevertheless, the authors concluded in the abstract: “While strong initial effects tend to fade out on average, they persist for those children with a higher initial propensity to struggle with reading. As a result, this program helped narrow the gap between disadvantaged pupils and other groups”.

    Given that the significant long-term effects depend on an unmotivated breakdown of subgroups (2 out of 8 conditions showing significant positive effects at p < .05, and another group showing a strong tendency to do worse, p < .1), you might expect Buckingham to criticize this conclusion as post-hoc. After all, she characterized my paper as such: “However, going to this level of detail in my response is arguably unnecessary since even Bowers’ selective, post hoc analysis of the findings leads to the conclusion that there is stronger evidence in favour of using systematic phonics in reading instruction than not using it”. But instead, she takes this finding as providing additional support for the long-term benefits of SSP, both in her comment and in earlier work where she wrote: “Machin, McNally and Viarengo (2018) analysed student performance in the first five years after the English government mandated synthetic phonics and found that there was a significant improvement in reading among 5- and 7-year-old children across the board, with significant improvement for children from disadvantaged non-English speaking backgrounds at age 11” (Buckingham et al., 2019).
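
    To make concrete how fragile this subgroup pattern is, here is a back-of-the-envelope calculation (my own illustration, not a calculation from Machin et al.): if the eight subgroup tests are treated as independent and there were no true effects anywhere, the chance of at least two of them reaching p < .05 is not far from the conventional significance threshold itself.

    ```python
    # Sketch: probability that 2+ of 8 independent null tests hit p < .05.
    # Assumes independence across subgroups, which is a simplification.
    from math import comb

    alpha, k = 0.05, 8
    p_two_or_more = 1 - sum(
        comb(k, i) * alpha**i * (1 - alpha)**(k - i) for i in range(2)
    )
    print(f"P(>=2 of {k} subgroup tests significant by chance) = {p_two_or_more:.3f}")
    # prints ~0.057
    ```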

    And as is common, the exaggerated conclusions of the authors themselves are amplified by subsequent authors, such as Marinelli, Berlinski, and Busso (2019), who wrote:

    “… a third fundamental insight from the recent literature shows that certain teaching practices can have a profound, beneficial impact on how students acquire basic skills. Three characteristics seem to explain and underlie effective teaching: i) the use of structured materials for teaching reading (Machin and McNally [2008]), ii) the use of phonics-based methods for teaching reading (Machin et al. [2018] Hirata and e Oliveira [2019]), and iii) the use of content targeted at the right level of difficulty for the student when teaching reading and math (Muralidharan et al. [2019], Banerjee et al. [2017]).”

    See how this works?

    With regard to the lack of improvement in the PISA 2018 outcomes (released in December 2019), Buckingham writes: “There is little point discussing the PISA results in a great deal of detail. The cohort of 15 year old English students who participated in the latest PISA tests in 2018 were in Year 1 during the phased implementation of synthetic phonics policies a decade ago. They may or may not have had teachers who were part of the Phase 1 training”.

    So, we seem to agree that the PISA data provide no support for SSP. But it is worth noting that Buckingham characterized the relevance of the study somewhat differently on Twitter (Dec 1st, 2019), two days before the results were announced: “Hope England's results have improved! Hard to know how much early reading policies might have contributed. This PISA cohort were in Reception and Yr 1 towards the beginning of SSP implementation and were pre-Phonics Screening Check. Lots of other good things going on too, though!”

    Buckingham does agree that the PIRLS 2016 results are more relevant, but disagrees with my analysis of them. In response to the fact that Northern Ireland did better than England without legally requiring systematic phonics, Buckingham argues that Northern Ireland does implement systematic phonics, citing the current literacy strategy published by the Northern Ireland Department for Education (2011). Apart from dismissing the document I mention (easily found if you google it), she omitted the following passage from the document that she does cite (a passage immediately above the quote she provided): “A range of other strategies for developing early literacy should also be deployed as appropriate and pupils who have successfully developed their phonological awareness should not be required to undertake phonics work if the teacher does not think it necessary or beneficial”. It is also worth noting that Northern Ireland does not have a Phonics Screening Check (PSC), and according to Buckingham, the PSC played an important role in improving SSP provision in England (see below). Nevertheless, Buckingham wants to credit the success of Northern Ireland to the excellent provision of systematic phonics?

    In addition, in response to the fact that the PIRLS results in England were similar in 2001 (pre SSP) and 2016 (post SSP), she cites some papers suggesting that the 2001 results were artificially high. Specifically, she quotes Hilton (2006), who found “the sampling and the test itself to have been advantageously organised”, and McGrane et al. (2017), who wrote that there was a “relatively large error for the average score in 2001” (p. 33), so comparisons of 2001 with 2016 should be made cautiously. These are interesting points that should be considered, but I would note that there is no evidence I know of that the procedure for selecting children was different in subsequent years for PIRLS, and after looking through the McGrane et al. report I see nowhere where they identify the “large error” (I’ve emailed McGrane and will update if he responds). At the same time, Buckingham dismisses a report by a well-respected researcher (Jonathan Solity), presented at one of the top conferences in reading research (Society for the Scientific Study of Reading), showing that there was no increase in PIRLS results in state schools (the relevant population) from 2011 to 2016 (Solity 2018). This claim is easily verified or refuted by Buckingham if she cares to look at the PIRLS report. And when she writes “For some reason, Bowers also makes an entirely unfounded statement about phonics instruction being ‘less engaging’”, she must have missed the following passage in the paper: “The PIRLS 2016 also ranked English children’s enjoyment of reading at 34th, the lowest of any English-speaking country (Solity 2018).”

    Now perhaps Hilton (2006) is right and the published 2001 PIRLS results are inflated due to a flawed procedure in 2001 and not subsequent years, and perhaps Solity is wrong and the state school results are in fact higher in 2016 compared to 2011. And perhaps Buckingham is right, and the excellent Northern Ireland PIRLS results did indeed depend on their implementing a strong version of systematic phonics in the absence of any legal requirement and in the absence of the PSC. But these are all explanations for why the English PIRLS results do not look better, not evidence that SSP has worked. Nevertheless, people have claimed that the PIRLS results support the efficacy of systematic phonics, such as Sir Jim Rose, author of the Rose (2006) report, who cited “the spectacular success of England shown in the latest PIRLS data” as further evidence in support of systematic synthetic phonics (Rose 2017).

    What about my claim that the SAT results in England provide no support for systematic phonics? Buckingham writes: “The graphs of KS1 and KS2 scores from 2006 to 2018 in Bowers (2020) clearly show an upward trend in reading and writing from 2011 to 2015 that is greater than the upward trend for math and science.” Given that Buckingham has expressed concerns about post-hoc reasoning in my article (without giving any examples), it is perhaps worth considering whether there is some post-hoc and motivated reasoning going on here. It will help if you go to my article and look at the graphs in question (see Figures 1 and 2 at: https://link.springer.com/content/pdf/10.1007/s10648-019-09515-y.pdf). The first thing to note is that there is no improvement in reading or writing from 2006-2011 despite the introduction of SSP in 2007. When reading and writing results do go up from 2011-2012 (as large a gain in a single year as any), it is before the PSC is rolled out. Buckingham suggests that this increase might reflect the pilot PSC that was introduced a year before the PSC was officially implemented. When considering this hypothesis, it is worth noting that the pilot took place in 300 schools and, as far as I can tell from the report, included about 9,000 pupils. How this led to a nationwide improvement in reading and writing (as well as maths and science) for hundreds of thousands of children in the 2011-2012 SATs is unclear.

    From 2012-2015 (after the PSC) the outcomes for reading, writing, maths and science all steadily increased, and Buckingham suggests that the increased performance in science and maths might be attributed to improved SSP due to the PSC, writing: “It is not surprising that maths and science would also improve slightly as reading improves because maths and science tests require children to be able to read the questions proficiently.” But earlier in her commentary, Buckingham attributes the failure to obtain benefits in reading comprehension in all meta-analyses (apart from the short-term effect reported in the NRP) to the fact that many studies did not include instruction that develops language comprehension. Nevertheless, according to Buckingham, the improved delivery of SSP (due to the PSC) is potentially responsible for an increase in comprehension, which in turn improved the results of the math and science SATs. That seems a big post-hoc stretch.

    The bottom line is that the PSC scores improved dramatically from 2012, whereas the improvements in reading compared to nonreading SAT scores are TINY (look at the graphs), with the improvements in SATs starting before the PSC was adopted. It is hard to conclude anything other than that the introduction of the PSC led to improvements on the PSC, with little or no evidence for improvements in PISA, PIRLS, or SAT results. And similarly, there is no evidence that the introduction of SSP in 2007 (long before the PSC) improved reading outcomes on these standardized tests. Indeed, the null results in the standardized tests are in line with the lack of evidence for systematic phonics from the meta-analyses.

    Before concluding I should also briefly address Buckingham’s criticism of SWI. What is so striking is, again, how she mischaracterizes basic points. Consider the following passage:

    “The problem with seeing Structured Word Inquiry (SWI) as a superior alternative to systematic phonics is firstly that there is insufficient information to assess whether it is in fact based on the best available evidence. Bowers’ dismissal of what the vast majority of reading scientists accept about the essential role of learning GPCs in early reading acquisition suggests that it is not… If they do teach GPCs it would appear to be in a way that is more closely analogous to analytic than synthetic phonics, but in which the sub-word level analysis is based on morphemic units rather than sound units.”

    How can Buckingham write “If they do teach GPCs” when we repeatedly emphasize the importance of GPCs in SWI instruction? For example, Bowers and Bowers (2017) wrote: “SWI emphasizes that English spellings are organized around the interrelation of morphology, etymology, and phonology and that it is not possible to accurately characterize grapheme–phoneme correspondences in isolation of these other sublexical constraints” (p. 124), as well as writing: “We have no doubt that learning grapheme–phoneme correspondences is essential” (p. 133).

    Bowers and Bowers (2018) wrote: “To avoid any confusion, it is important to emphasize that the explicit instruction of orthographic phonology — how grapheme-phoneme correspondences work—is a core feature of SWI. However, unlike phonics, SWI considers grapheme-phonemes within the context of morphology and etymology.”

    I have even directly tweeted Buckingham and emphasized the importance of GPCs in SWI, such as the following tweet: “SWI teaches GPCs from the start. In the context of morphemes. It teaches GPCs, morphemes, vocabulary together. More data needed to support hypothesis, but the evidence for teaching GPC by themselves is not strong”.
    We do agree that more evidence is required for SWI, and we have made this point repeatedly.

    To conclude, Buckingham is consistently wrong in her criticisms, often mischaracterizing the literature and my own work. I hope she responds, and of course, I welcome responses from others as well.

    The science of reading is broken when applied to literacy instruction.

    If Buckingham had been a reviewer on this paper, it is clear she would have rejected it. And indeed, my paper was rejected by 4 journals and over 10 reviewers, almost all of whom recommended rejection based on a long list of alleged mistakes and mischaracterizations. In a culture in which action editors are unwilling even to engage with authors in the face of straightforward factual errors from reviewers (until the action editor at Educational Psychology Review finally did), it is easy to block criticisms of systematic phonics in high-profile journals. Let me illustrate the problem by quoting the first lines of various letters I wrote to the editors of journals that (at least initially) rejected my paper.

    For “Review of Educational Research” I wrote: "Dear **, I am writing with regards to the manuscript entitled "Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction" (MS-223) that was recently rejected in Review of Educational Research. Of course, it is always disappointing when a paper is rejected, but in this case, the reviewers have completely mischaracterized my work – every major criticism is straightforwardly incorrect, and barring one or two trivial points, so are all the minor criticisms as well."

    For “Educational Psychologist” I wrote: “It is disappointing to be rejected yet again based on a set of fundamental misunderstandings, mischaracterizations, and straightforward mistakes. Of all the criticisms, only one has any merit. I do appreciate that the repeated rejections (previously at Psychological Bulletin and Review of Educational Research) might suggest that the misunderstandings/mistakes are mine. But the same set of mistaken claims are repeated over and over, and my responses to these errors are never addressed by the editors (who are not willing to reconsider their decision) or reviewers (who never get a chance to see my responses). There is an extraordinary resistance to a simple point I’m making – the strong claims regarding the benefits of systematic phonics are not supported by the data."

    For "Psychological Bulletin" I wrote: “Dear Professor **, I expected that my review of systematic phonics would be controversial, but I did not anticipate that the main objection would be that this work is of “limited interest to readers of Psychological Bulletin”. With Reviewer 1 claiming that “the main objective of the author(s) is to denigrate the National Reading Report (NICHD, 2000) and the Rose report (2006)”.

    And for “Educational Psychology Review” (where the paper was initially rejected) I wrote: “Dear Professor **, I am of course disappointed that my article ‘Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction’ was rejected at Educational Psychology Review, but it is also frustrating given that all the major criticisms are mistaken. In this letter I briefly highlight the major mistakes of the most negative reviewer (Reviewer 3). These are not matters of opinion, they are black-and-white errors that are easy to document. The other reviewers have not identified any important errors either, but I thought I would only comment on the most obvious mistakes of the most negative review to keep this letter reasonably short”. Here is a short excerpt from Reviewer 3: “Reading through the paper it becomes clear that the authors seem to have a strong bent towards non-systematic phonics and Whole Language”. You could not make it up.

    Peter Bowers and I had similar experiences trying to publish our paper on SWI that was eventually published in Educational Psychologist (2017). Again, it was rejected by multiple journals, and again, the same black-and-white mistakes were repeated over and over by reviewers, such as the claim that we ignore the importance of teaching grapheme-phoneme correspondences (as claimed again by Buckingham). I hope this long response to Buckingham will convince some that the evidence for systematic phonics has indeed been greatly exaggerated, and make more people aware of how the politics of reading research is biasing what gets published.

  3. In my Educational Psychology Review article I wrote that “findings are often mischaracterized by the authors of the reports, and these mischaracterizations are passed on and exaggerated by many others citing the work”. So far, I’ve pointed out multiple places where Jennifer Buckingham has mischaracterized or exaggerated findings from the NRP, Camilli et al., and Torgerson et al., as well as mischaracterized my own work. In this fourth response to Buckingham I cover the remainder of the meta-analyses. Let’s see if the trend continues.

    The next meta-analysis was carried out by McArthur et al. (2012). Buckingham does make one valid point, namely, that I missed the updated meta-analysis from 2018, but let’s consider whether I have indeed mischaracterized the results from the 2012 paper as claimed. I noted that McArthur et al. reported significant effects for word reading accuracy and nonword reading accuracy, whereas no significant effects were obtained for word reading fluency, reading comprehension, spelling, and nonword reading fluency. But I also argued that the significant word reading accuracy results depended on two studies that should be excluded, and when the studies are removed, the word reading accuracy results are no longer significant (leaving only nonword reading accuracy significant). Buckingham commented:

    “Bowers argues that the high results for word reading accuracy were due to two studies — Levy and Lysynchuck (1997) and Levy, Bourassa and Horn (1999) — and that these studies should be excluded because they were one-to-one interventions. However, all of the studies in this meta-analysis were small group or one-to-one interventions, so there is no good reason to exclude these two particular studies just because the interventions were found to be particularly effective. Nonetheless, in Bowers’ summary he unilaterally decides to remove the Levy et al. studies and comes to the spurious and erroneous conclusion that the McArthur et al. (2012) meta-analysis found “no evidence” that systematic phonics instruction was effective.”

    This summary is a complete misrepresentation of my motivation for removing the studies. This is what I wrote: “One notable feature of the word reading accuracy results is that they were largely driven by two studies (Levy and Lysynchuk 1997; Levy et al. 1999) with effect sizes of d = 1.12 and d = 1.80, respectively. The remaining eight studies that assessed reading word accuracy reported a mean effect size of 0.16 (see Appendix 1.1, page 63). This is problematic given that the children in the Levy studies were trained on one set of words, and then, reading accuracy was assessed on another set of words that shared either onsets or rhymes with the trained items (e.g., a child might have been trained on the word beak and later be tested on the word peak; the stimuli were not presented in either paper). Accordingly, the large benefits observed in the phonics conditions compared with a nontrained control group only shows that training generalized to highly similar words rather than word reading accuracy more generally (the claim of the meta-analysis). In addition, both Levy et al. studies taught systematic phonics using one-on-one tutoring. Although McArthur et al. reported that group size did not have an overall impact on performance, one-on-one training studies with a tutor showed an average effect size of d = 0.93 (over three studies). Accordingly, the large effect size for word reading accuracy may be more the product of one-on-one training with a tutor rather than any benefits of phonics per se, consistent with the findings of Camilli et al. (2003).”

    That is, my main motivation for removing the studies is that word reading accuracy was assessed on words that were very similar to the study words. I quite like a tweet by Stephanie Ruston who wrote “I almost fell out of my chair when I read that mischaracterization of Bowers’ discussion of the Levy et al. studies. I mean, people can easily search through his paper to read that section for themselves. It’s unfortunate that many will not do so before forming a conclusion.”

    It is perhaps worth detailing a bit more what was done in these studies to see if my reasoning is justified (I would not want to be called post-hoc). The children in the Levy and Lysynchuk (1997) study were taught 32 words composed of 8 onsets (fi, ca, bi, pa, wi, ma, pi, ra) and 8 rimes (it, ig, in, ill, up, an, ash, at). These 32 words were taught one-on-one with a tutor once a day for 15 days or until the child could pronounce all 32 words correctly on 2 successive days. At test, children read 48 new words and 48 nonwords composed of these onsets and rimes. For example, children would be taught can, pan, and ban, which all share the rime an, and then be tested on fan. The authors found that these children were more accurate at reading these words and nonwords than another group of children who were given no instruction on these specific onsets and rimes (indeed, they were not given any extra reading instruction at all). It was fine for Levy et al. to compare the efficacy of different ways of studying these 32 words (they found an “onset” phonics method was better than a “phoneme” method), but to use these findings in a meta-analysis that claims that systematic phonics improves word reading accuracy in general is inappropriate. Yes, if an instructor knows the test words and then trains children for 15 days one-on-one on a set of words as similar as possible to the test words, then these children do better than children who get no instruction at all. Levy et al. (1999) carried out a similar study, and for this same reason, the large effects on word reading accuracy should be excluded here as well. Indeed, the nonword reading accuracy results from both studies should be excluded too.

    Let’s move on to the next meta-analysis, by Galuschka et al. (2014), who found that the overall effect size observed for phonics (g′ = 0.32) was similar to the outcomes with phonemic awareness instruction (g′ = 0.28), reading fluency training (g′ = 0.30), auditory training (g′ = 0.39), and color overlays (g′ = 0.32), with only reading comprehension training (g′ = 0.18) and medical treatment (g′ = 0.12) producing numerically reduced effects. Despite this, the authors wrote:
    “This finding is consistent with those reported in previous meta-analyses… At the current state of knowledge, it is adequate to conclude that the systematic instruction of letter-sound correspondences and decoding strategies, and the application of these skills in reading and writing activities, is the *most* effective method for improving literacy skills of children and adolescents with reading disabilities” [I’ve added the *].

    As a basic matter of statistics, the claim that phonics is the *most* effective method requires an interaction, with phonics significantly more effective than the alternative methods. They did not test for an interaction, and a look at the effect sizes shows there is no interaction. Nevertheless, Buckingham writes: “Galuschka, Ise, Krick and Schulte-Korne (2014) have not overstated the case for systematic phonics interventions. Based on statistical significance, they explicitly say that “At the current state of knowledge”, their conclusion about the relative effectiveness of systematic phonics instruction is sound. This simply means that, at this point in time, we can have more confidence in this finding than in the effect sizes found for the other treatment conditions.”

    I’m not sure how the “current state of knowledge” helps justify Galuschka et al.’s conclusion, nor am I sure how the rewording of the conclusion justifies what the authors actually wrote. Here is a fair characterization of what was found: similar effect sizes were obtained across multiple different forms of instruction, with only the phonics effect reaching significance. However, there was no evidence that phonics was more effective than the other methods.
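
    To see what such a test would look like, here is a minimal sketch of a between-treatment heterogeneity (Q-between) test of the kind needed to support a “most effective” claim. The g′ values are those quoted above; the standard errors are hypothetical placeholders (in practice the confidence intervals reported by Galuschka et al. would be used):

    ```python
    # Sketch: Q-between test across treatment categories in a meta-analysis.
    # The g values are from Galuschka et al. as quoted above; the SEs are
    # assumed, for illustration only.
    from scipy.stats import chi2

    g = [0.32, 0.28, 0.30, 0.39, 0.32, 0.18, 0.12]  # phonics, PA, fluency,
    se = [0.10] * len(g)                            # auditory, overlays,
                                                    # comprehension, medical

    w = [1 / s**2 for s in se]                      # inverse-variance weights
    g_bar = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
    q_between = sum(wi * (gi - g_bar) ** 2 for wi, gi in zip(w, g))
    df = len(g) - 1
    print(f"Q = {q_between:.2f}, df = {df}, p = {chi2.sf(q_between, df):.2f}")
    # With these numbers: Q = 5.09, df = 6, p = 0.53 -- no evidence that the
    # methods differ, i.e., no sign of the interaction the "most effective"
    # claim would require.
    ```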

    Buckingham is also critical of my claim that there was evidence for publication bias that inflated the phonics results, writing:

    “Bowers disputes this conclusion, once again raising the spectre of publication bias with little real reason to do so. The speculation about publication bias inflating the results is very unpersuasive. There are many people who would be delighted to publish studies showing a null result of phonics instruction so the idea that there are a lot of undiscovered, unpublished studies out there showing null results is difficult to believe”.

    I have to confess I had a wry smile when Buckingham wrote it is “difficult to believe” in publication bias in the domain of reading instruction (I cannot exaggerate how difficult it was to publish this paper or my previous papers on SWI – a story for another time), but in any case, the statement is misleading, as it was Galuschka et al. *themselves* who raised this concern and found evidence for publication bias, writing:

    “A funnel plot was used to explore the presence of publication bias. The shape of the funnel plot displayed asymmetry with a gap on the left of the graph. Using Duval and Tweedie’s trim and fill [44] the extent of publication bias was assessed and an unbiased effect size was estimated. This procedure trimmed 10 studies into the plot and led to an estimated unbiased effect size of g’ = 0.198 (95% CI [0.039, 0.357]) (see Figures 4 and 5, Table 4).”
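
    For readers unfamiliar with trim and fill, here is a toy illustration (hypothetical numbers, not Galuschka et al.’s data) of why imputing the “missing” left-side studies pulls the pooled estimate down:

    ```python
    # Toy sketch of the trim-and-fill logic: mirror the asymmetric studies
    # about an assumed unbiased center and recompute the pooled effect.
    # All numbers are hypothetical, for illustration only.
    observed = [(0.45, 0.15), (0.40, 0.12), (0.35, 0.10), (0.30, 0.08)]  # (g, SE)
    center = 0.20  # assumed unbiased center of the funnel

    def pooled(studies):
        """Fixed-effect (inverse-variance) pooled estimate."""
        weights = [1 / se**2 for _, se in studies]
        return sum(w * g for w, (g, _) in zip(weights, studies)) / sum(weights)

    imputed = [(2 * center - g, se) for g, se in observed]  # mirror-image studies
    print(f"pooled, observed only:  {pooled(observed):.2f}")           # 0.35
    print(f"pooled, after filling:  {pooled(observed + imputed):.2f}")  # 0.20
    ```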

    And once again, this meta-analysis did not even test the hypothesis that systematic phonics is more effective than standard alternative methods that include some degree of phonics. So again, Buckingham has made a number of mischaracterizations of the original study and my work.

    With regard to the Suggate (2010) meta-analysis, I wrote: “The critical novel finding, however, was that there was a significant interaction between method of instruction and age of child, such that phonics was most useful in kindergarten for reading measures, but alternative interventions were more effective for older children”. I challenged this conclusion, noting that the advantage of early phonics was very small (d estimated at .1) and that the study showing the largest benefit of early phonics was carried out in Hebrew (a shallow orthography in which GPCs are highly regular). This obviously weakens the claim that early systematic phonics is particularly important in learning to read English, with its less regular GPCs.

    Buckingham does not mention any of this, and simply concludes: “There is no challenge to the importance of phonics, or the impact of systematic phonics instruction, in these findings.” I would be interested to know whether Buckingham thinks it is fine to use studies from non-English languages to draw conclusions regarding English, especially given the effect of this Hebrew study, as well as the fact that overall the effects were reduced in English compared to other languages, as summarized in my paper.

    The Suggate (2016) study was the first to systematically assess the long-term impacts of various forms of reading interventions. This is from the abstract: “Much is known about short-term—but very little about the long-term—effects of reading interventions. To rectify this, a detailed analysis of follow-up effects as a function of intervention, sample, and methodological variables was conducted. A total of 71 intervention-control groups were selected (N = 8,161 at posttest) from studies reporting posttest and follow-up data (M = 11.17 months) for previously established reading interventions. The posttest effect sizes indicated effects (dw = 0.37) that decreased to follow-up (dw = 0.22). Overall, comprehension and phonemic awareness interventions showed good maintenance of effect that transferred to nontargeted skills, whereas phonics and fluency interventions, and those for preschool and kindergarten children, tended not to.”

    Buckingham simply dismisses the finding that systematic phonics had the smallest long-term effects of all forms of instruction writing: “The lower long-term effects of phonics interventions can be explained by the constrained nature of phonics. Once children have mastered decoding, other aspects of reading instruction become stronger variables in their reading ability”.

    I don’t understand how Buckingham can just brush aside a recent meta-analysis that considers all studies that have assessed the long-term effects of systematic phonics. It seems we are getting to the stage where the benefits of systematic phonics are unfalsifiable, with null effects explained away and the conclusion that the science of reading supports systematic phonics unaffected. We will see much more of this when Buckingham considers the reading results in England post 2007 (in the next post). Let me quote from a letter from Suggate (the author of the last two meta-analyses), who wrote to me after reading my review article (he is happy for me to post this):

    “I just came across your paper, hot off the press, on the effectiveness of systematic phonics instruction, which I have read with great interest…

    If I may be so upfront, I think that the strongest argument from my meta-analyses in support of your position is to be found in the long-term (still only 11 months) data, namely in the poor performance of the phonics interventions at follow-up (see Table 3 of my 2016 meta-analysis) – these had basically zero effect, except for spelling. So why make such a big deal of phonics when it doesn’t produce lasting effects and doesn’t transfer to other reading skills?”

    Finally, Buckingham dismisses my reviews of the other meta-analyses (and a review) and writes:

    “Part of the problem with this premise is that unsystematic phonics is nebulous and undefined. It involves matters of degree – an absolute example of no phonics would be difficult to find. According to Bowers himself, whole language methods can contain unsystematic phonics. This is arguably more accurately described as balanced literacy; the boundaries are blurry. Given this difficulty of defining what is unsystematic phonics and what is whole language (with or without systematic phonics) and what is balanced literacy, it seems reasonable and practical to do what almost all studies and meta-analyses have done – compare systematic phonics instruction with the absence of systematic phonics instruction.”

    The problem with this is that the NRP design specifically compared systematic phonics to a control condition that combined nonsystematic phonics or no phonics. Here is a quote from the NRP: “The research literature was searched to identify experiments that compared the reading performance of children who had received systematic phonics instruction to the performance of children given nonsystematic phonics or no phonics instruction”. And here is the conclusion from the NRP: “Findings provided solid support for the conclusion that systematic phonics instruction makes a bigger contribution to children’s growth in reading than alternative programs providing unsystematic or no phonics instruction.” It is also the case that when Camilli et al. reviewed the literature, they found interventions that had systematic phonics, nonsystematic phonics, and no phonics. Most subsequent meta-analyses adopted the same criterion as the NRP, comparing systematic phonics to a control condition that had nonsystematic phonics or no phonics. One exception was McArthur et al. (2012), who compared systematic phonics to no instruction. Buckingham’s characterization of the research and the meta-analyses is again incorrect.

    I do agree with Buckingham that there are few examples of instruction in the classroom that include no phonics. And that is exactly why Camilli et al. made the important point that the design of the NRP (and most subsequent meta-analyses) is inappropriate for assessing the importance of systematic phonics compared to standard classroom practice. The fact that there is little or no evidence for systematic phonics even in the face of a control condition that includes no phonics only serves to undermine the evidence for systematic phonics further.

    As I wrote in my summary of the meta-analyses: “There can be few areas in psychology in which the research community so consistently reaches a conclusion that is so at odds with available evidence.” I see no reason to change this conclusion based on Buckingham’s analyses of the meta-analyses. In my next (perhaps final) response I’ll consider Buckingham’s response to the reading results in England since the introduction of systematic synthetic phonics in 2007.

  4. As detailed in my first two responses to Buckingham, based on the NRP and Camilli et al. meta-analyses there is little or no evidence that systematic phonics is better than standard forms of alternative instruction that include some (non-systematic) phonics, and even less evidence that SSP is better than whole language, given that only 4 studies were carried out, with mixed results. The Torgerson et al. (2006) meta-analysis further undermines the conclusions that can be drawn from the NRP. What the authors note is that few of the studies included in the NRP were actually RCTs, and the quality of the studies is problematic. Here is a passage from this meta-analysis, which specifically focused on the 14 RCT studies that exist (including one unpublished study):

    “None of the 14 included trials reported method of random allocation or sample size justification, and only two reported blinded assessment of outcome. Nine of the 14 trials used intention to teach (ITT) analysis. These are all limitations on the quality of the evidence. The main meta-analysis included only 12 relatively small individually randomised controlled trials, with the largest trial having 121 participants and the smallest only 12 (across intervention and control groups in both cases). Although all these trials used random allocation to create comparison groups and therefore the most appropriate design for investigating the question of relative effectiveness of different methods for delivering reading support or instruction, there were rather few trials, all relatively small, and of varying methodological quality. This means that the quality of evidence in the main analysis was judged to be ‘moderate’ for reading accuracy outcomes. For comprehension and spelling outcomes the quality of evidence was judged to be ‘weak’.”

    So not a great basis for making strong conclusions.

    But what about the results? Jennifer Buckingham writes: “After limiting the included studies to RCTs, Torgerson, Brooks and Hall (2006) found moderate to high effect sizes for systematic phonics on word reading (0.27 to 0.38) and comprehension (0.24 to 0.35), depending on whether fixed or random effects models were used. The word reading effect was statistically significant. After removing one study with a particularly high effect size, the overall result was reduced for word reading accuracy but still of moderate size and still significant.”

    Again, the claim that the effects were moderate to high is at odds with the effect sizes, and this summary does not capture the fact that the comprehension effect was not significant, nor was spelling (d = 0.09). And as noted by Torgerson et al. *themselves*, after removing a flawed study with an absurd effect size of 2.69, the word reading effect just reached significance in one analysis (p = 0.03) and was nonsignificant in another (p = 0.09). For Buckingham to summarize this as a significant and moderate effect is misleading (never mind the replication crisis that has highlighted how insecure a p < .05 effect is). Torgerson et al. also reported evidence of publication bias in support of systematic phonics. This bias will have inflated the results from previous meta-analyses, and the fact that the authors identified one additional unpublished study does not eliminate this worry. More importantly, the design of the study did not even compare systematic phonics to the partial phonics that is common in school settings. If this comparison were made, the effect sizes would be reduced even further. This means that the following claim by Torgerson et al. regarding whole language is unjustified: “Systematic phonics instruction within a broad literacy curriculum appears to have a greater effect on children’s progress in reading than whole language or whole word approaches”.

    In sum, Torgerson et al. assessed the efficacy of systematic phonics on spelling, comprehension, and word reading accuracy. The first two effects were not significant, and the word reading accuracy result, according to their own analysis, was borderline (significant on one test, not on another). Torgerson et al. did not even test whether systematic phonics was more effective than standard alternative methods that include some phonics. On top of this, as noted by Torgerson et al., the quality of the studies in their meta-analysis was mixed. It is hard to reconcile these facts with Buckingham's characterization of the findings. As we will see, all subsequent meta-analyses are even more problematic for the claim that the science of reading supports systematic phonics compared to common alternatives.

  5. Here is one of the conclusions of a largely favourable piece of 2019 research conducted by the University of Melbourne into MiniLit, a program associated with Jennifer Buckingham:

    “While we did not see better outcomes on reading whole passages of text (this covers accuracy, reading rate and comprehension), this finding should be treated with caution as reading can be difficult to measure in younger students who are struggling with the basic reading skills.” https://pursuit.unimelb.edu.au/articles/finding-the-fundamentals-of-reading

    For me, this strikes chords with the following passage in Buckingham’s piece: “The study found phonics interventions to have relatively small long term effects. The lower long-term effects of phonics interventions can be explained by the constrained nature of phonics. Once children have mastered decoding, other aspects of reading instruction become stronger variables in their reading ability.”

    Thus, it doesn’t matter if the long-term effects of phonics are small, since, for a few, sweet years only, students were reading better than they would otherwise have been. Were I a principal of a primary school, I would really wonder if this was worth all the fuss. Any marginal gain from a reform is of course great, but it has to be weighed against the costs of implementation.

    To be fair, some advocates of phonics have a far stronger account of what they can achieve. Here is John Walker, the director of Sounds Write, the 4-year phonics program my children are treated to, writing about what he has achieved in England: “There you can see whole classes of children – children who don’t speak any English at home, children whose parents don’t own any books never mind read them, children who come from some of the poorest backgrounds – yet, who, by the age of seven are already literate enough to be able to read a broadsheet newspaper or anything in a classical novel.” (https://theliteracyblog.com/2018/07/29/exploding-myths-imploding-arguments/). Really? Henry James? George Eliot? – anything??

    I think that’s what is called ‘the science of reading’.

  6. Is the confused and interchanging use of the terms systematic phonics and synthetic phonics deliberate by Buckingham et al.? The terms are not synonymous. Systematic phonics can include synthetic, analytic and phonics-in-context approaches. The evidence at hand is for ‘systematic phonics’ teaching – but not, as is implied, synthetic phonics. By interchanging the terms, the debate is confusing teachers/schools that rely on academic knowledge and endorsements for their choice of pedagogy. The ‘synthetic phonics’ approach starting with ‘letter sounds’ or the initial code is false phonics teaching that leads to disaster for those who don’t overthrow the foundational learning. SSP is flawed and restrictive; it is what has caused many to shun the word phonics and has enabled the so-called reading war to continue. Teaching letter names, the alphabetic principle and ‘real phonics’ allows for the teaching of phonology, morphemes and etymology, all relevant skills in teaching vocabulary and written words.

  7. This morning I started writing a response to Jennifer Buckingham’s post entitled “The grass is not greener on Jeffrey Bowers’ side of the fence: Systematic phonics belongs in evidence-based reading programs”, which was published on the “five from five” website. I was hoping they would allow comments, but it appears not, so I thought I would post my initial response here.

    I am pleased that Buckingham has provided a detailed response to my article, but almost every point she makes regarding my review of the evidence is either factually incorrect or a mischaracterization. There is nothing in her post that challenges the conclusions I’ve drawn. I’ll need to spend a few hours responding to all the points and don’t have the time today. But for now I thought I would start with her first response to my review of the evidence, namely, the findings from the NRP. Her summary of my critique of the NRP is as follows:

    “In his summary of the National Reading Panel (NRP) analysis, Bowers argues that the effect sizes are not large and do not justify the NRP’s conclusions that systematic phonics should be taught in schools. However, the effect sizes quoted by Bowers are moderate to high, especially for synthetic phonics in particular, and are certainly stronger than the evidence found for any other method, including whole language”.

    This is wrong in many ways. First, it is a gross mischaracterization of my summary, which I provide in the paper itself:

    “In sum, rather than the strong conclusions emphasized in the executive summary of the NRP (2000) and the abstract of Ehri et al. (2001), the appropriate conclusion from this meta-analysis should be something like this:

    Systematic phonics provides a small short-term benefit to spelling, reading text, and comprehension, with no evidence that these effects persist following a delay of 4–12 months (the effects were not reported nor assessed). It is unclear whether there is an advantage of introducing phonics early, and there are no short- or long-term benefits for the majority of struggling readers above grade 1 (children with below average intelligence). Systematic phonics did provide a moderate short-term benefit to regular word and pseudoword naming, with overall benefits significant but reduced by a third following 4–12 months.”

    But on a point of fact, Buckingham’s claim that “the effect sizes quoted by Bowers are moderate to high, especially for synthetic phonics in particular” is straightforwardly contradicted by the following point from my paper:

    “And although the NRP is often taken to support the efficacy of synthetic systematic phonics (the version of phonics legally mandated in the UK), the NRP meta-analysis only included four studies relevant for this comparison (of 12 studies that compared systematic phonics with whole language, only four assessed synthetic phonics). The effect sizes in order of magnitude were d = 0.91 and d = 0.12 in two studies that assessed grade 1 and 2 students, respectively (Foorman et al. 1998); d = 0.07 in a study that assessed grade 1 students (Traweek & Berninger, 1997); and d = −0.47 in a study carried out on grade 2 students (Wilson & Norman, 1998).”

    It is worth highlighting this point. The NRP has no doubt been cited hundreds of times in support of the claim that synthetic phonics is more effective than whole language. But if you look carefully, this is the basis for that claim.
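
    To put a single number on it (my own back-of-the-envelope figure, not one reported by the NRP): averaging the four effect sizes without weighting (the sample sizes are not given here) yields a small effect at best; a proper pooled estimate would use inverse-variance weights.

    ```python
    # Sketch: equal-weight average of the four synthetic phonics vs whole
    # language effect sizes quoted above. Illustration only; a real pooled
    # estimate would weight each study by its precision.
    effects = [0.91, 0.12, 0.07, -0.47]
    print(f"unweighted mean d = {sum(effects) / len(effects):.2f}")  # 0.16
    ```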

    Also, according to the NRP itself, there is no statistical evidence that synthetic phonics is more effective than other forms of systematic phonics:

    “Are some types of phonics instruction more effective than others? Are some specific phonics programs more effective than others? Three types of phonics programs were compared in the analysis: (1) synthetic phonics programs that emphasized teaching students to convert letters (graphemes) into sounds (phonemes) and then to blend the sounds to form recognizable words; (2) larger-unit phonics programs that emphasized the analysis and blending of larger subparts of words (i.e., onsets, rimes, phonograms, spelling patterns) as well as phonemes; and (3) miscellaneous phonics programs that taught phonics systematically but did this in other ways not covered by the synthetic or larger-unit categories or were unclear about the nature of the approach. The analysis showed that effect sizes for the three categories of programs were all significantly greater than zero and did not differ statistically from each other. The effect size for synthetic programs was d = 0.45; for larger-unit programs, d = 0.34; and for miscellaneous programs, d = 0.27”

    Also, note these effect sizes. Cohen suggested that d = 0.2 be considered a ‘small’ effect size, 0.5 a ‘medium’ effect size, and 0.8 a ‘large’ effect size. So it is unclear why Buckingham writes: “the effect sizes quoted by Bowers are moderate to high, especially for synthetic phonics in particular, and are certainly stronger than the evidence found for any other method, including whole language”. Indeed, as noted above, the NRP only had 4 studies that compared synthetic phonics to whole language.
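
    For reference, Cohen’s d is simply the standardized difference between two group means:

    ```latex
    d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
    \qquad
    s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
    ```

    On this scale, the NRP’s synthetic phonics effect of d = 0.45 sits between Cohen’s ‘small’ and ‘medium’ benchmarks.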

    These misrepresentations and misstatements of fact continue, but I’ll need a few days before I have some time. I do allow responses on my website, so I am happy to get comments. Do give me a few days, I’m in the middle of marking exams….

    1. I look forward to the remainder of your response. I’m glad folks are interested in your paper but disappointed so far in the quality of the challenges to your conclusions.

    2. I think I’ll be breaking up my responses rather than writing one long response. Here I focus on the Camilli et al. studies.

      As I’ve shown above, Buckingham makes a number of factual errors and misleading comments about the NRP. She next criticizes my review of Camilli et al. and argues that these analyses provide further support for systematic phonics. Here are the key passages from her response:

      “Bowers presents the findings of two re-analyses of the studies included in the NRP by Camilli, Vargas and Yurecko (2003) and Camilli, Wolfe and Smith (2006) that are alleged to dispute the NRP’s conclusions. Yet after some substantial re-engineering of the data, Camilli, Vargas and Yurecko (2003) still found that the effect of systematic over non-systematic phonics instruction was significant.”

      And…

      “Camilli, Wolfe and Smith (2006) manoeuvered the data even more, creating a multi-level model that included language-based activities as a moderating variable. It reinforced the finding that systematic phonics was superior to no phonics but reduced the simple effect of systematic phonics over non-systematic phonics, which Bowers incorrectly interprets to mean that “Camilli et al (2006) failed to show an advantage of systematic over unsystematic phonics” (p. 9).”

      Buckingham concludes: “Overall, far from presenting a challenge to systematic phonics, the findings of Camilli et al. can be described as supporting the conclusion that some phonics instruction is better than no phonics instruction, and the more systematic the phonics instruction is, the better. The best case scenario is systematic phonics instruction paired with high quality language activities.”

      The first thing to note is that Buckingham seems to think that Camilli et al. are up to no good, writing: “Yet after some substantial re-engineering of the data” and “Camilli, Wolfe and Smith (2006) manoeuvered the data even more”…. This seems to relate to Greg Ashman’s criticism of Camilli et al. in his earlier critique of my work, which is nicely summarized in a recent tweet by him:

      “The fundamental flaw is it’s all post hoc. Bowers is slicing and dicing the meta-analyses to suit his hypothesis. He relies heavily on Camilli who also slice and dice the results. You can prove pretty much anything that way.”

      But there is nothing arbitrary or post-hoc about Camilli et al.’s analyses. As noted by the authors of the NRP, almost all forms of instruction in the USA included some degree of phonics, with systematic phonics less common. So, when claiming the science of reading supports a change to systematic phonics, the relevant question is whether systematic phonics is better than some phonics. Because the NRP did not test this hypothesis, Camilli et al. carried out new meta-analyses to address this question. There is absolutely nothing post-hoc about asking this reasonable question. The authors also wanted to distinguish the impact of systematic phonics from one-on-one tutoring and language-based activities more generally. Again, there is nothing post-hoc about asking this, let alone any “substantial re-engineering of the data”. They carried out new meta-analyses to ask a new set of reasonable questions.

      But what about the results? Do they provide further support for systematic phonics? In the first analysis, Camilli et al. found the benefit of systematic phonics was reduced to .24 (from .41), but the effect was nevertheless still significant. Great news for systematic phonics! Well, no. What one should note is that this is an overall effect that combines the results from decoding of nonwords, regular words, etc. Based on these results there is no reason to think that systematic phonics improved any reading outcome other than decoding in the short term. Here again is my summary of the original findings from the NRP (where the overall effect was .41); the most appropriate conclusion from the Camilli et al. (2003) meta-analysis is that the impact of systematic phonics compared to some phonics (i.e., standard forms of instruction in schools) is about half this strength:

      “Systematic phonics provides a small short-term benefit to spelling, reading text, and comprehension, with no evidence that these effects persist following a delay of 4–12 months (the effects were not reported nor assessed). It is unclear whether there is an advantage of introducing phonics early, and there are no short- or long-term benefits for the majority of struggling readers above grade 1 (children with below average intelligence). Systematic phonics did provide a moderate short-term benefit to regular word and pseudoword naming, with overall benefits significant but reduced by a third following 4–12 months.”

      And furthermore, in a subsequent analysis of this same dataset (with a more careful assessment of additional variables and improved statistics), Camilli et al. (2006) found that there was no longer any significant main effect of systematic phonics compared to unsystematic phonics. Instead, according to Camilli et al. (2003, 2006), the strongest effect in the studies included in the NRP came from one-on-one tutoring.

      For some reason Buckingham claims I am incorrect in writing: “Camilli et al (2006) failed to show an advantage of systematic over unsystematic phonics”. But that is exactly what they found. And even if the nonsignificant d = .12 is taken to support systematic phonics over non-systematic phonics, the effects would mostly be driven by short-term measures of decoding, with even weaker effect sizes for all other measures of reading, and weaker effects still for children with reading difficulties (remember, Camilli et al. were working with largely the same dataset as the NRP). Indeed, as we will see later, subsequent meta-analyses highlight how systematic phonics does not have long-lasting effects on reading outcomes, and that children with reading difficulties do indeed benefit the least.

      To summarize thus far: the NRP provides weak evidence at best that systematic phonics improves reading outcomes, it provides little or no evidence that synthetic phonics is better than whole language, and its main design did not even test the hypothesis that systematic phonics is better than common alternatives in schools. There is no basis whatsoever for Buckingham’s claim that the effect sizes in the NRP “are moderate to high, especially for synthetic phonics in particular”. When meta-analyses were designed to test whether systematic phonics is better than common alternatives that include some (nonsystematic) phonics, the effects were dramatically reduced, and indeed, in one analysis, the overall effect was not even significant. There are no good scientific grounds to argue that the NRP or the Camilli et al. meta-analyses provide good support for systematic phonics. In my next post, I show that the Torgerson et al. (2006) meta-analysis further undermines the claim that the NRP or the Camilli et al. analyses support systematic phonics.

      1. Thanks, Jeff, for chunking your responses. In a future response, can you please address the position that many of us hold: that systematic phonics instruction is necessary but not sufficient to improve reading performance? Buckingham writes:

        “Systematic phonics does not preclude a focus on the meaning of words. There is no directive that learning grapheme-phoneme correspondences (GPCs) must precede all other elements of reading instruction. The criteria for systematic phonics only apply to the aspect of instruction that focuses on teaching decoding. The expectation is that children will have concurrent instruction in all of the ‘Big 5’ – phonemic awareness, phonics, fluency, vocabulary and comprehension – and that the phonics component will be systematic and explicit.”

        I have been fortunate these last two years to do a job share and teach a third grade class once a week. Over this time, 18 of my former first and second grade intervention students (who came to me lacking decoding skills) have been in my third grade class. For the most part, they have come to third grade as quick and efficient decoders because of the phonemic awareness, phonics and fluency activities I did with them during their intervention sessions. However, in order for them to do what our state requires of them in the spring – to analyze multi-paragraph articles focused on a single topic and synthesize a response in either a narrative, argumentative or informational format – merely resting on their ability to decode words will get them nowhere.

        This is why an emphasis on comprehension and vocabulary is a must at all times, and especially once students can decode. Oral comprehension and vocabulary are important even when students can’t decode, but nailing the decoding piece is crucial if these students are to have any chance of tackling multi-paragraph grade-level text.

        Thanks for addressing (at some point) this point, that systematic phonics instruction is “necessary but not sufficient”.

  8. And finally, Jenny Chew, in a November 8th response to you on Pamela Snow’s blog, “Trick or Treat: More nonsense words about phonics instruction and assessment”, ends her comments with the same question that I have.

    You write:

    “The key part of my quote is: “before focusing on the meaning of written words in isolation and in text”. Note the word *written*. Yes, children learn the meaning of spoken words, largely from home, and as far as I know, there is no systematic way in which the meaning of spoken words is taught in schools. So phonics focuses on GPCs (like the quote from Max), with minimal teaching of meaning. SWI does explicitly teach GPCs, but in the context of morphological matrices where the meanings of a set of morphologically related words are studied. Here, children are being taught the meaning of *written* words, and learning GPCs in this context.”

    And Jenny Chew responds:

    “It looks as if you are using ‘focusing on’ to mean ‘teaching’, and are therefore saying that phonics teaches GPCs before teaching the meaning of written words. But written words have the same meanings as their spoken counterparts, and if children can use phonics to translate written words, however many morphemes they have, into spoken words whose meanings they already know, there is no need for teachers to teach the meanings of those written words separately. In those circumstances, however, not teaching meaning doesn’t mean that there is no ‘focus’ on it – teachers use questions and discussion to make sure that children understand what they read.

    Would you use a morphological matrix to teach GPCs from scratch to children who as yet knew no GPCs? Can you give an example?”

    It’s this example of a “morphological matrix to teach GPCs from scratch to children who as yet knew no GPCs” that would help me understand what SWI instruction to beginning readers looks like in practice.

    Thanks!

    Harriett

    1. Hi Harriett, I’ve responded to these questions multiple times to the best of my ability. Videos have been posted of teachers teaching SWI in the early stages of reading instruction. Analytic phonics is a type of phonics in which children learn GPCs within words from the start of instruction (no one claims that this is hard to understand, as far as I know). But in any case, this is a paper about the evidence for systematic phonics, not about SWI. For this blogpost I’m hoping to talk about the claim that the science of reading supports SSP.

      1. Thanks, Jeff. You say: “Analytic phonics is a type of phonics in which children learn GPCs within words from the start of instruction (no one claims that this is hard to understand, as far as I know).”

        Definitely not hard to understand. Forgive me, but I hadn’t appreciated that SWI promotes the use of analytic phonics. I’ll now revisit your links and look for analytic phonics combined with “morphological matrices where the meanings of a set of morphologically related words are studied. Here, children are being taught the meaning of *written* words, and learning GPCs in this context.”

      2. You’re right, Jeff, that I haven’t been addressing the main points of your review, but after reading what Anne Castles says, I realize that if she’s struggling with interpreting the data, then I would be way out of my depth. She writes:

        “I’d still cite the NRP study as coming out overall in favour of systematic phonics. I did read the Camilli et al (2006) study, but found it very hard to figure out what they’d actually done, and why they’d made the decisions they did about which factors to control. No doubt there are many, many different ways these data could be carved up – I really have no way of knowing whether things they have done have introduced other confounds. So, imperfect though it may be, I place more weight on the results as reported against the original set of criteria”

        However, I am not out of my depth when it comes to teaching 5- and 6-year-olds how to read, and after watching Pete’s video on teaching SWI right from the start (http://www.realspellers.org/forums/orthography/1412-video-on-how-swi-explicitly-teaches-grapheme-phoneme-correspondences-from-the-start), I’d have to say that I just don’t see that method working until students have nailed their GPCs through quick and efficient blending and segmenting. Too much information! But once they’re “off the blocks”, as Pamela Snow says, it’s good stuff.

        1. There’s really no good reason to fixate on Pete’s videos as examples of teaching 5- or 6-year-olds to read; instead, please view the many resources and videos from kindergarten and first grade teachers who actually are working with SWI and very young children.

          Let’s not act like there aren’t a million phonics videos out there that you would not think are good representations of phonics. Pete’s video really isn’t representative of much except Pete artificially modelling what he’d do when beginning **remediation** with a kid who’s struggling, or when introducing SWI concepts to a class for the first time. Nowhere does he claim that it’s a model of him working with a brand-new, beginning reader.

          Pete says the video is “about how we teach GPCs from the very start”, but the question is: the very start of WHAT? Harriett mistakenly assumes Pete is talking about the very start of literacy, rather than the place where he typically starts SWI with older kids and teachers.

          There are, however, many of us who actually study with very young kids. Depending on the child, I may start with script to ensure letter knowledge and proper pathways, or with the establishment of what vowels are and what consonants are. There are plenty of videos and websites that detail working with SWI with beginning readers, with K or pre-K or 1st graders.

          Might someone use a lexical word matrix to study orthographic phonology? Sure! From the very beginning? Perhaps — it depends on the child and the concept under study.

          I find it baffling that the biggest opponents of SWI, those who poke at it and perseverate on minutiae (like what *an* spells, or a single video), are typically people who haven’t trained in it or studied it, but have just watched a video or two and formed false opinions. It’s the biggest barrier there is to learning: thinking you already know everything about something.

          1. Thank you so much for the clarification!

            You say: “There are plenty of videos and websites that detail working with SWI with beginning readers, with K or pre-K or 1st graders.”

            I’ve watched the “rain” video from a pre-school class that Pete has posted, but that did not show how to teach non-readers GPCs using SWI. This is the type of video I’m looking for, and I would be very grateful if you could provide a link.

            Thank you!

  9. One more thing:

    You state in your paper: “To summarize, there are a number of different forms of reading instruction, some of which emphasize letter-sound mappings before other properties of words (e.g., systematic phonics), others that emphasize meaning from the start (e.g., whole language) and others that claim that the phonology and meaning of word spellings should be the focus of instruction from the beginning (structured word inquiry).”

    This confuses me. I do letter-sound mappings WITHIN the context of teaching words (dog, cat, mom, dad), and of course these words have meaning. I consider this to be teaching phonics. This is why many of us are unclear about exactly what SWI looks like in beginning reading instruction.

    Harriett

  10. Hi Jeff,
    As you know, I find this subject very interesting because I have worked with hundreds of struggling readers, and we practitioners really do want to implement “best practices” since so much is at stake when it comes to getting it right.

    Over the past two years, since I first became acquainted with SWI, I have followed the links that you and Pete have posted and watched many videos. And this year I have started using word matrices with my third graders. What I still haven’t seen is a clear “how to” for beginning readers: how do you teach children who have no understanding of grapheme-phoneme connections how to read words?

    In her responses to you and Pete in the February 2018 exchange on her blog, Pamela Snow makes a similar point: the “sticking point” seems to be the “starting point”, and she, like many of us, would like to see how SWI is “translated into everyday practice” with students who have no grapheme-phoneme connections.

    She writes:

    “We have to start somewhere to help novices to understand the nature of reading and I’m curious to understand where you advocate starting . . . I just don’t see morphology as the starting point in a language that has a strong alphabetic component, particularly for the kinds of words we want to start young children off on . . . morphology and etymology are critical, but are not the entry point to early systematic reading instruction . . . I think pretty much all of us agree on the importance of phonemic awareness, phonics, morphology, vocabulary, comprehension, fluency, and oracy, but the sticking point seems to be whether the starting point is introducing children to a small subset of phoneme-grapheme correspondences to get them off the blocks . . . If you have evidence that starting with morphology and guided word study is “better”, then of course I am eager to be directed to that and will read it with great interest. How such evidence would be translated into everyday practice in early years classrooms (particularly in Australia), however, is a leap into the darkness.”

    Thanks for such an interesting discussion!

    Harriett
