Responding to Fletcher et al.

Responding to the Fletcher, Savage and Vaughn (2020) article entitled: “A Commentary on Bowers (2020) and the Role of Phonics Instruction in Reading”

In addition to the Buckingham (2020) response, Fletcher et al. (2020) published an article challenging the Bowers (2020) claim that there is little or no evidence in support of systematic phonics. My response to Fletcher et al. has just been accepted and I’ve linked a pre-print of the response to this blogpost (psyarxiv.com/p24vs).  You can find the Bowers and Bowers (2021) response to the Buckingham article here (doi.org/10.31234/osf.io/f5qyu).  Also see the P. Bowers (2021) response that goes into more detail concerning one particularly egregious error of Buckingham, namely, the claim that SWI does not teach GPCs (psyarxiv.com/7qpyd/).

What a difference a journal (and an Action Editor) makes.  The Buckingham response was published in The Educational and Developmental Psychologist without my being asked to review it; the action editor rejected the Bowers and Bowers (2021) response without addressing any of the straightforward reviewer errors we pointed out, refused to issue a correction for the most critical errors, and would not permit us to post our work on PsyArxiv while the paper was under consideration (see the previous blogpost that describes our experience in more detail: jeffbowers.bristol.ac.uk/blog/buckingham-2020/). By contrast, I was asked to review the Fletcher et al. manuscript submitted to Educational Psychology Review (the same journal where my original article appeared), and the Action Editor welcomed my response after sending it out for review and is happy for me to publish it on PsyArxiv.  That is how peer review and action editing should work, and unfortunately, it is rare. It has also led to a more constructive exchange than the Bowers and Bowers (2021) response, which was largely devoted to correcting a long list of errors in the Buckingham article.

I’ve linked the Bowers (2021) response to the Fletcher et al. article to this blogpost in an attempt to generate more discussion regarding the evidence.  Anyone who follows me on twitter (@jeffrey_bowers) knows I’m a bit relentless (but mostly polite) in asking for evidence for phonics and criticizing the bias in the field.  I’ve included a few exchanges here in an attempt to prod these and other researchers to justify their claims.


In the following thread Kathy Rastle claims that the evidence for systematic phonics is strong, ridicules many (not me) for criticizing the evidence for phonics, and claims there is no evidence for an alternative approach I have been advocating (SWI).

In fact, there is some evidence for SWI (e.g., Bowers & Bowers, 2021), but I agree that more evidence is needed, and indeed, the goal of the Bowers (2020) critique was to motivate more research into alternative approaches (including more research on SWI).  I would be interested to hear whether Rastle still maintains that the evidence for systematic phonics is strong.

And here is a recent response from Timothy Shanahan, one of the members of the National Reading Panel (2000).  Strong words, so you might expect Shanahan to feel some obligation to justify these claims.  But no response thus far.  I would be interested in knowing the basis for these claims.

The person I’ve exchanged the most tweets with is Pamela Snow.  I do appreciate that she is willing to engage, but she does not provide evidence for the claims that she makes.  For instance, here she states that my critique does not change her view that the evidence for phonics is “dimensional, not binary”.  But she fails to provide evidence to justify this view.

Or here:

I would be interested in what multiple filters and caveats of existing data undermine my critique of SSP.

Perhaps the closest I’ve come to a direct answer to a request for evidence comes from this exchange:

But I’m not sure which specific studies Snow is referring to.  It would be great to get a direct answer to the question:  What is the best evidence for systematic or explicit phonics?

Mark Seidenberg has argued for the importance of phonics, but rarely responds to queries from me.  Here is one case where he did respond, with a non-sequitur.  Thus far he has failed to cite my critique of phonics or to consider alternatives like SWI (the “bonkers” whole language approach is not the only alternative to phonics).

Not only do researchers fail to respond to my requests for evidence on twitter (or on my blog thus far), but they also largely ignore my critique in their publications.   Some authors think that is fine.  For example, consider the following exchange with Greg Ashman and Pamela Snow.

I find this comment strange.  A recent systematic review of all the meta-analyses of phonics interventions, and a review of the reading outcomes in England following over a decade of legally mandated phonics, do not meet the “relevance threshold” when claiming the science of reading strongly supports phonics?

And my favourite:

Of course, researchers who “don’t think it’s very good” are free to criticize my work, and indeed, I’m asking people to challenge my arguments in the comment section below.  But I disagree that it is just fine to ignore work you don’t think is good and assert that “the scientific process should work it all out”.  If an author publishes the claim that the science of reading supports phonics, he or she should challenge my claims (to his credit, Greg Ashman has responded on his blog, but he just introduces more errors and fails to address the problems I have identified), or at least acknowledge that there is an alternative characterization of the evidence that has been published.

I can hear my critics moaning: “Bowers is just complaining about not being cited enough, welcome to the club”.  And fair enough if there were other relevant recent peer-reviewed papers challenging the evidence for phonics that were being cited.  But there are no such papers.

Is there really a bias against citing work critical of phonics?  In my case, there might be a much simpler answer, namely, that my critique only came out in January 2020, so there has been little opportunity to cite my work.  I would note that my critique has been up on PsyArxiv since 2018 and downloaded almost 2.5K times (psyarxiv.com/xz4yn/), but perhaps given more time, Rastle, Seidenberg, Snow, Shanahan and other proponents of phonics will cite this work when they publish the claim that the science of reading strongly supports systematic phonics.  But this does not explain why the few other published studies that have challenged the evidence for phonics have largely been ignored in a sea of pro-phonics citations.  Consider the Camilli et al. (2006) meta-analysis that not only identified a fundamental limitation of the phonics meta-analysis in the NRP, but also showed that when the data were analyzed in the relevant manner there was no evidence that systematic phonics was more effective than standard alternative methods used in schools (read my paper for more details).  The Camilli et al. paper has been cited a total of 68 times in Google Scholar, 8 times since 2019 (twice by me).  By contrast, the Ehri et al. (2001) article that describes the phonics section of the NRP has been cited 1152 times, 178 times since 2019, with the NRP itself cited over 24 thousand times, and almost 3 thousand times since 2019.  The relative lack of attention to the Camilli et al. meta-analysis may help explain why all subsequent meta-analyses of phonics included the same design limitation as the NRP.  It is hard enough to publish a paper critical of phonics (see the final section of: jeffbowers.bristol.ac.uk/blog/buckingham/), but once you do, it seems the work will largely be ignored by the research community.  Hopefully my rather direct and persistent challenges will make this harder.  But currently, the politics of science has corrupted the science of reading instruction.

I don’t expect my articles will convince many who claim that the science of reading supports phonics.  But I do hope that more neutral observers interested in this issue will read the Bowers (2020) critique (or the shorter summary blogposts that I’ve published, including one in the Washington Post: jeffbowers.bristol.ac.uk/blog/916-2/), read the recent published exchanges (Buckingham, 2020; Fletcher et al., 2020 vs. J. Bowers, 2021; P. Bowers, 2021; Bowers & Bowers, 2021), and look at the comments (or lack of comments) below, or the comments in the blogpost response to Buckingham, or in Ashman’s recent blogpost (fillingthepail.substack.com/p/structured-word-inquiry).  If indeed the evidence for phonics is strong, it should be easy enough to identify the research that supports this conclusion, and it should be easy enough to identify problems/mistakes in my publications.  Proponents of phonics should be motivated to correct my mistakes.

Any comments are welcome but let me try and anticipate three responses that I think are distractions.  First, I expect some advocates of phonics will want to challenge me regarding the evidence for SWI.  As I have noted in multiple papers, there are good theoretical arguments for SWI, and there are some empirical studies that lend support to SWI.  But I agree, more research is needed.  Second, I expect some advocates of phonics will say that the term “phonics” refers to knowledge of letter-sound correspondences, and children need to learn these correspondences to read aloud.  The problem with this is that almost all forms of reading instruction teach letter-sound correspondences, including whole language, balanced literacy, and SWI.  The issue at hand is whether phonics instruction (as defined in all the meta-analyses and government reports) is the best way to teach these mappings.  Third, I expect proponents of phonics will note that “no one claims that phonics instruction is enough”.  And I agree, no proponent of phonics makes this claim (nor do any critics as far as I am aware).  But what proponents of phonics do claim is that phonics instruction is a necessary part of effective reading instruction.  The issue at hand is whether there is any evidence for this.

Again, I’m happy for any comments, but I’m most interested in hearing what proponents of phonics consider the best evidence for systematic (or explicit) phonics.  Please post your responses below.

References:

Bowers, J.S. (2020). Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction. Educational Psychology Review, 32, 681–705. doi.org/10.1007/s10648-019-09515-y

Bowers, J.S. (2021). Yes children need to learn their GPCs but there really is little or no evidence that systematic or explicit phonics is effective: A response to Fletcher, Savage, and Vaughn (2020). Educational Psychology Review. Published online 13 March 2021. doi.org/10.1007/s10648-021-09602-z

Bowers, P.N. (2021). Structured Word Inquiry (SWI) Teaches Grapheme-Phoneme Correspondences More Explicitly Than Phonics Does: An open letter to Jennifer Buckingham and the reading research community. psyarxiv.com/7qpyd

Bowers, J.S., & Bowers, P.N. (2021). The science of reading provides little or no support for the widespread claim that systematic phonics should be part of initial reading instruction: A response to Buckingham. doi.org/10.31234/osf.io/f5qyu

Buckingham, J. (2020). Systematic phonics instruction belongs in evidence-based reading programs: A response to Bowers. The Educational and Developmental Psychologist, 37(2), 105-113. doi.org/10.1017/edp.2020.12

Camilli, G., Wolfe, P. M., & Smith, M. L. (2006). Meta-analysis and reading policy: Perspectives on teaching children to read. The Elementary School Journal, 107, 27-36. doi.org/10.1086/509525

Fletcher, J.M., Savage, R., & Vaughn, S. (2020). A commentary on Bowers (2020) and the role of phonics instruction in reading. Educational Psychology Review. doi.org/10.1007/s10648-020-09580-8

10 thoughts on “Responding to Fletcher et al.”

  1. “But I’m not sure which specific studies Snow is referring to. It would be great to get a direct answer to the question: What is the best evidence for systematic or explicit phonics?”
    Why has she not responded?

  2. Hi Bowers, you recently stated in a debate that I hosted between yourself and Dr. Garforth that there is no meta-analysis evidence to support the superiority of phonics. You also pointed to the NRP meta-analysis, stating that they did not find phonics interventions improved reading results. However, on page 110 of the NRP meta-analysis, it states “Phonics instruction produced substantial reading growth among younger children at risk of developing future reading problems. Effect sizes were d = 0.58 for kindergartners at risk and d = 0.74 for 1st graders at risk.”

    Now I will concede in advance that this was not directly in comparison with Whole Language. However, we can indirectly compare these two factors by the fact that meta-analyses consistently find much lower effect sizes than the above one for Whole Language, including the NRP analysis.

    Moreover, I will also concede that the evidence for synthetic phonics specifically is weaker (possibly due to the lack of studies). However, you stated there was no meta-analysis evidence for phonics, and I cannot seem to understand how you’re coming to that very strong conclusion.

    1. Hi Nate, I enjoyed the debate you organized and hosted, and will put up a link here once I have it.

      My claim is that there is little or no evidence that phonics improves a variety of reading outcomes, including reading text, spelling, and reading comprehension, and that there is even less evidence when comparing phonics to the types of reading instruction common in schools, such as whole language. This is my conclusion after considering all the meta-analyses and reading outcomes in England. Regarding the NRP, this is what I wrote:

      “In sum, rather than the strong conclusions emphasized in the executive summary of the NRP (2000) and the abstract of Ehri et al. (2001), the appropriate conclusion from this meta-analysis should be something like this:
      Systematic phonics provides a small short-term benefit to spelling, reading text, and comprehension, with no evidence that these effects persist following a delay of 4–12 months (the effects were not reported nor assessed). It is unclear whether there is an advantage of introducing phonics early, and there are no short- or long-term benefits for the majority of struggling readers above grade 1 (children with below average intelligence). Systematic phonics did provide a moderate short-term benefit to regular word and pseudoword naming, with overall benefits significant but reduced by a third following 4–12 months.”

      The values of effect sizes of d = 0.58 and d = 0.74 refer to short-term effects averaged across all measures (so will largely be driven by nonword and regular word decoding). But these two short-term effects are also misleading. Most studies in the NRP are not RCTs, and when Torgerson et al. (2006) selectively looked at the RCTs from the NRP they failed to get overall significant effects on comprehension and spelling, and once *they* removed an obviously flawed outlier study, they only got a marginal effect on word reading accuracy. They also reported evidence of publication bias that inflated the effect sizes. I quite like the way Fletcher et al. (2020) in their response to me summarized the Torgerson et al. meta-analysis:

      “After assessing the evidence contextualized against rigorous inclusion criteria including randomization, Torgerson et al. (2006, p. 42) argue, “none of the findings of the current review were based on strong evidence because there simply were not enough trials (regardless of quality or size)” before drawing extremely cautious conclusions. The precise wording of the primary conclusion was that there was “No warrant for NOT using phonics” (p. 43)”

      This is how advocates of phonics interpret the outcomes of the best studies from the NRP. And as noted in Bowers (2020), the NRP (2000) and Torgerson et al. (2006) meta-analyses did not even use the right control condition in order to assess whether phonics is more effective than the primary alternatives used in schools. When this is done, the effects are reduced further, with the overall short-term effect no longer significant in Camilli et al. (2006). There are dozens of individual effect sizes people can point to in the NRP, and it is not appropriate to pick out a few outcomes as evidence for phonics without considering all these factors and other effect sizes (e.g., in the same paragraph where the .58 and .74 effect sizes are reported, the authors of the NRP note that phonics did not provide a significant benefit for “low achieving” poor readers above grade 1, with an effect size of .15).

      And note, the NRP is over 20 years old now. There has been a series of meta-analyses (all using the wrong control condition) that have found little or no evidence that phonics improves reading text, comprehension, or spelling. When significant effects are observed, they tend to be short-term effects on decoding that do not have a broader impact on reading measures, just as observed in reading outcomes in England since phonics was introduced in 2007.

      So, I do not understand how people can claim there is strong evidence that systematic phonics is more effective than alternatives used in schools. If I said at any point in the debate that there is NO evidence that phonics has an impact on ANY measure then I misspoke. My claim is that there is little or no evidence that phonics has an effect on measures other than decoding itself. Yes, out of 100s of studies someone can cherry pick individual outcomes of individual studies that support the claim that phonics is effective, but when you consider the findings as a whole, the evidence is extremely weak, at best. It should motivate people to look for alternative approaches.

      1. Hi Dr. Bowers, would you be willing to send me the 2006 study you are referring to? I would like to read it and then respond.

        Thanks,
        Joseph

          1. So, I read the Torgerson study, and I have to admit it is a very well-written paper. However, I do not think the paper does anything to disprove the benefits of phonics. If anything, I think it provides another piece of converging evidence, pointing in the same general direction, in support of systematic phonics instruction.

            If we look at the larger meta-analysis that includes the entirety of the evidence, we get a significant effect size for phonics, whereas we get a small one for Whole-Language. For example, as I previously stated the NRP meta-analysis found “Phonics instruction produced substantial reading growth among younger children at risk of developing future reading problems. Effect sizes were d = 0.58 for kindergartners at risk and d = 0.74 for 1st graders at risk.” While you pointed out that this is only one of many effect sizes in this paper, I would argue that these are the most important effect sizes from the entire paper, as they represent the overall statistical benefit for phonics in the early years of instruction, from the entirety of the evidence at that time.

            However, there have been so many large-scale meta-analyses that point in the same general direction. For example, Hattie’s 2018 meta-analysis found an effect size of .70 for phonics instruction and .06 for Whole-Language. Similarly, his latest meta-analysis found an effect size of .60 for phonics and a .06 effect size for Whole-Language. All these effect sizes are within the same range, which, I think, says something about the consistency/validity of these findings.

            While Torgerson’s meta-analysis does show smaller effect sizes, I think due to the construction of this study this had to be expected. Not only is it an RCT meta-analysis, which typically produces lower effect sizes, but in this meta-analysis only studies making direct comparisons were used. If we compare this to Ehri’s sub-analysis of the NRP paper, we actually get similar results. For example, Ehri found a .85 effect size for phonics with direct instruction and a .53 effect size for Whole Word instruction. If we compare these two effect sizes directly, by finding the difference, we get an effect size of .32. This difference is consistent with the effect sizes found in the Torgerson paper, which found an effect size of .38 for word accuracy and .35 for comprehension.

            While Torgerson concludes that these effect sizes were not statistically significant, if we compare them to effect size guides they would be considered small but meaningful. And while these effect sizes are small when compared to the general body of education evidence, most education studies are not RCT studies, and most do not make direct comparisons.

            Ultimately, even the direct-comparison RCT meta-analysis showed a positive effect size for phonics. The more generalized meta-studies show a significant effect size for phonics instruction. Comparatively, there have been several meta-studies showing a low effect size for Whole-Language instruction. Additionally, we also have some sound theoretical evidence (something you admitted to in your debate with Dr. Garforth). All of these facts show a large amount of converging evidence for phonics instruction’s superiority over Whole-Language.

          2. Below are some quotes from the Torgerson et al. (2006) meta-analysis. Given just this (ignoring the additional problems I identified), it is hard to make the claim that the study provides good evidence for phonics. Do you agree that these statements by the authors themselves are problematic for anyone claiming that the NRP or this meta-analysis provides strong support for phonics?

            “In addition, the strong possibility of publication bias affecting the results cannot be excluded. This is based on results of the funnel plot… It seems clear that a cautious approach is justified (p. 48).”

            The conclusions one can draw are further weakened by the quality of the studies included in the meta-analysis, with the authors writing:

            “…none of the 14 trials reported method of random allocation or sample size justification, and only two reported blinded assessment of outcome… all were lacking in their reporting of some issues that are important for methodological rigor. Quality of reporting is a good but not perfect indicator of design quality. Therefore due to the limitations in the quality of reporting the overall quality of the trials was judged to be ‘variable’ but limited.”

            “none of the findings of the current review were based on strong evidence because there simply were not enough trials (regardless of quality or size)”

            And these conclusions directly impact the conclusions you can make regarding the NRP, as Torgerson et al. meta-analysis was selectively analyzing the best designed studies from the NRP.

            Yes, RCTs often produce smaller effect sizes, but this probably indicates that the non-RCT studies provide over-estimates of the effect. And the effects in the Torgerson study are likely exaggerated, not only because there was evidence of publication bias, but also because the studies used the wrong control condition if the claim is that phonics is more effective than the types of instruction used in schools. The effects are much smaller in the NRP studies (and thus this study) when the appropriate control condition is used, as shown by Camilli et al. (2006).
            Note, you should not infer that an effect is reliable or meaningful based on its size if it is not statistically significant. Also note, this meta-analysis only considers studies carried out over 20 years ago. More recent meta-analyses that include these same studies (and more recent ones that satisfy the selection criteria) also fail to get significant effects on anything other than decoding (and sometimes not even that).

            Which Hattie meta-analysis are you referring to? It was not considered by me (or by other recent reviews that considered all meta-analyses on phonics).

            But ignoring all the details here, just look at the above quotes. Clearly, neither this meta-analysis nor the NRP provides solid grounds for supporting phonics.
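
The point about effect size versus statistical significance can be made concrete with a quick back-of-the-envelope confidence interval. Below is a minimal sketch in Python: the d of .38 echoes the word-accuracy figure quoted above, but the per-arm sample sizes are invented purely for illustration (the actual trials varied in size). With small trials, the 95% CI around even a respectable-looking d spans zero, so the effect cannot be distinguished from no effect.

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    """Approximate 95% confidence interval for a two-group Cohen's d,
    using the standard large-sample variance formula."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# d = 0.38 (as quoted above); 30 children per arm is an illustrative figure only
low, high = cohens_d_ci(0.38, 30, 30)
print(f"n=30/arm:  95% CI = ({low:.2f}, {high:.2f})")   # interval spans 0

# the same d estimated from much larger trials would exclude 0
low2, high2 = cohens_d_ci(0.38, 300, 300)
print(f"n=300/arm: 95% CI = ({low2:.2f}, {high2:.2f})")
```

Note the point cuts both ways: a non-significant d of .38 from small trials is weak evidence for phonics, but it is also not proof of no effect; the trials are simply too small to tell, which is exactly the “not enough trials” caution Torgerson et al. raised.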

  3. Hi Harriett, thanks for responding. One of the main motivations for my critique of phonics is to encourage people to explore new ways of reading instruction. I’m not very familiar with all the approaches you’ve noted, and I expect most of these methods are examples of adding to phonics rather than replacing phonics. And as I noted in the linked article, more research of this sort should be carried out. But I still think the most obvious hypothesis is that children should be taught all the regularities in the English writing system from the start, including the meaningful ones. And to use tools that highlight these regularities, such as the morphological matrix. The strong opposition to this hypothesis is hard to understand given the state of the evidence with current approaches.

    You say you are not trained to analyze the studies themselves. Fair enough. But do you not find it strange that no researchers have responded to this blogpost or my blogpost detailing a list of errors from Buckingham? I’ve certainly done my best to encourage responses, and now there is an “in press” response to Fletcher et al. in Educational Psychology Review. When someone says that the “science of reading” strongly supports phonics, I hope you pass them a link to this blogpost.

    1. I don’t know if ‘strange’ is the right word. A better one might be ‘unhelpful’ for us practitioners. When I was growing up, I faithfully watched a show called Point-Counterpoint, and this type of one-on-one discussion, where a viewer who is unschooled in a given topic can hear a direct response to a specific point, was extremely enlightening. Your frustration comes through loud and clear because you feel your points haven’t been addressed or have been addressed unsatisfactorily, which I can’t comment on. So this has just been a spectator sport for me as I watch the back and forth. Having said that, I do feel I am quite schooled in teaching beginning reading, and for all the reasons I’ve stated previously over several blog posts over several years, I really believe that the research will not support supplanting–rather than supplementing–phonics instruction with morphological matrices. I also strongly believe that we need to put the ‘phon’ back into phonics instruction, which is what word building does. I will be watching for the next moves on both sides!

  4. Hi Jeff,
    As I’ve mentioned before, I am not trained to analyze the validity of studies, but as a reading specialist working with hundreds of struggling readers, I am closely watching this debate over the most effective early reading instruction. Recently, I have been exploring David Kilpatrick’s emphasis on the importance of ‘phonemic proficiency’ (not just phonemic awareness) for automatic word recognition and looking at Seidenberg’s 4-part processor model for how skilled reading develops through an interaction between the phonological, orthographic, meaning and context processors–all with the goal of promoting the ‘orthographic mapping’ in the brain that Share discusses in the self-teaching hypothesis and which Ehri has focused much of her research on.

    I have found the word building activities in Isabel Beck’s Making Sense of Phonics to be very helpful for struggling readers. I’m wondering if you are familiar with this research and what you think of word building as a way to unite the phonological, orthographic and meaning processors as children build words that vary by only one phoneme at a time.

    From “Focusing Attention on Decoding for Children With Poor Reading Skills: Design and Preliminary Tests of the Word Building Intervention”, Bruce McCandliss, Isabel L. Beck, Rebecca Sandak, and Charles Perfetti, 2003

    “The intervention directed attention to each grapheme position within a word through a procedure of progressive minimal pairing of words that differed by one grapheme . . . After each transformation, the child decodes the new word, which looks and sounds similar to the previously decoded word. This activity is designed to help children attend to the subtle impact of a single grapheme change on the appearance and pronunciation of each word. Focusing attention on each individual letter sound unit within words may play an important role in developing fully specified representations of printed words.”

    Thanks!

    Harriett
